AI Models Are Getting Credit in Scientific Papers, Study Finds
Study of 20 million PubMed abstracts shows scientists increasingly credit AI models over themselves, raising questions of responsibility.

TL;DR
Researchers are shifting language from “we found” to “the model found,” a trend that began in 2016 and accelerated after 2018 and 2021.
Context
Computational biologist Nic Fisk noticed a subtle but growing change in how scientists describe their work. Papers that once highlighted human insight now often credit algorithms. To move beyond anecdote, Fisk examined millions of abstracts from PubMed, the biomedical literature database.
Key Facts
- Fisk’s team processed over 20 million abstracts, tracking mentions of researchers versus computational tools.
- The analysis shows a clear rise in phrasing that assigns agency to models, starting around 2016 and picking up speed after 2018 and again after 2021.
- The shift is linked to the rapid adoption of machine‑learning and AI methods, not merely to an increase in passive‑voice constructions.
- Fisk emphasizes that the change does not undermine scientific rigor; it reshapes how responsibility is expressed in research narratives.
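The kind of phrasing shift described above can be measured with simple pattern matching. The sketch below is a minimal illustration, not Fisk’s actual method: it counts sentences that assign agency to humans (“we found…”) versus to models (“the model identified…”) across a set of abstracts. The phrase lists and regular expressions are assumptions chosen for demonstration.

```python
import re

# Hypothetical agency patterns; a real study would use a far richer
# lexicon and parse sentence structure rather than raw regexes.
HUMAN_AGENCY = re.compile(
    r"\bwe\s+(?:found|show|demonstrate|identified)\b", re.IGNORECASE)
MODEL_AGENCY = re.compile(
    r"\bthe\s+(?:model|algorithm|network|classifier)\s+"
    r"(?:found|identified|predicted|revealed|learned)\b", re.IGNORECASE)

def agency_counts(abstracts):
    """Return (human, model) agency-phrase counts across abstracts."""
    human = sum(len(HUMAN_AGENCY.findall(a)) for a in abstracts)
    model = sum(len(MODEL_AGENCY.findall(a)) for a in abstracts)
    return human, model

sample = [
    "We found a strong association between gene X and disease Y.",
    "The model identified three candidate biomarkers.",
    "The algorithm predicted patient outcomes with high accuracy.",
]
print(agency_counts(sample))  # -> (1, 2)
```

Tracking the ratio of such counts per publication year would reveal the trend the study describes, though real abstracts demand more care (passive voice, negation, and nested clauses all confound naive matching).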
What It Means
Attributing discoveries to a model can blur the line between human judgment and algorithmic output. If a model is described as the agent, accountability for errors or biases may drift away from the researcher. Fisk warns that agency and responsibility are intertwined: without control, a system cannot bear responsibility. The linguistic trend therefore raises ethical questions about who is answerable for conclusions drawn from AI‑driven analyses.
The pattern mirrors broader shifts in other fields. Fisk’s earlier work on education research revealed how language frames subjects such as disability, influencing perception and policy. In medicine, similar metaphor choices affect treatment strategies, suggesting that the words scientists use can shape real‑world outcomes.
As AI tools become standard in labs, clear conventions for attributing credit and responsibility will be essential. Interdisciplinary dialogue—bringing philosophers, ethicists, and technologists together—could help define norms that preserve scientific accountability while embracing computational power.
What to Watch Next
Follow upcoming guidelines from major journals and funding agencies on AI attribution, and watch for empirical studies measuring how these language shifts affect reproducibility and public trust.