Are our research practices holding back scientific progress?
Reflecting on a new paper in Nature
There are many signs that something is wrong with modern science, innovation and research. We are no longer matching the progress previous generations achieved, and the crises and worries are mounting: replication failures, declining innovation and slow productivity growth are among the better-known issues. A new paper has just been published in Nature that adds to this body of evidence. By analysing patterns of academic citations, the paper argues that the pace of genuinely new discoveries has slowed, or as the title puts it: Papers and patents are becoming less disruptive over time.
Their data shows that the average disruptiveness of research papers - a proxy for new discoveries or breakthroughs - has declined fairly consistently since the 1940s. This points to a significant slowdown in the (relative) rate of scientific discovery. Interestingly, the number of disruptive discoveries remained fairly constant throughout the period, even as the amount of published research grew significantly.
While a single paper does not settle the question, the finding is consistent with a range of other literature, so it likely describes something real. In the paper, the authors canvass a number of possible explanations for the slowdown:
Some point to a dearth of ‘low-hanging fruit’ as the readily available productivity-enhancing innovations have already been made. Others emphasize the increasing burden of knowledge; scientists and inventors require ever more training to reach the frontiers of their fields, leaving less time to push those frontiers forward.
However, they rule these out as unsupported by the evidence. The only explanation the authors advance is as follows:
We attribute this trend in part to scientists’ and inventors’ reliance on a narrower set of existing knowledge… Relying on narrower slices of knowledge benefits individual careers, but not scientific progress more generally.
While this is plausible, and fits with the ongoing specialisation of academic research, it seems rather weak. There are at least two more explanations worth considering.
The hegemony of existing research programs
To start with a provocative argument: a Substack post published before the Nature article pins much of the blame on the rise and institutionalisation of peer review since the 1940s. It is persuasively argued and worth a read. However, while there are clear issues with peer review, I see it more as a symptom of a broader issue than the root cause.

To understand the broader issue, we need to be familiar with key themes from a few philosophers of science, particularly Thomas Kuhn, Imre Lakatos and Paul Feyerabend. Drawing broad themes from a diverse school of thought: these thinkers looked at history and argued that science operates via groups of scientists working strictly within particular paradigms or research programs - and in competition with other groups committed to different paradigms. These groups almost never question their own paradigm or program, and scientific progress depends on which program wins the most adherents over time. The famous physicist Max Planck described a version of this view:
A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.
Whether or not you accept this as a philosophy of science, it rings true as a sociology of scientists. Scientists are not uniquely disinterested rational knowers but work in the same ways as humans do in all other social and professional environments. Self-interest, factions and group psychology all play a strong role. And the competition between research programs is not only carried out in academic journals, but pursued through fights over funding, control of appointments and - at times - personal attacks and character defamation.
But if this is how scientists have always operated, why would disruptive research decline from the 1940s onwards? The key is the way the funding and governance of research changed dramatically after the Second World War. Governments around the world dramatically increased their support of scientific research and established large grant and oversight agencies.
This led to the growth of peer review, the dependence on committees of existing experts for funding decisions, and the standardisation of metrics across national and international research systems. While this has many benefits, it also created an environment ideally suited to the expansion and entrenchment of established research paradigms and programs. Such an environment privileges existing ways of thinking and provides a strong disincentive for new or disruptive ideas. This plausibly explains a slowdown in research productivity from the 1940s. There is significant evidence that this type of, often well-motivated, gatekeeping has occurred across many different fields, including theoretical physics.
Epistemic certainty undermines scientific progress
These sociological changes in science are complemented by a second plausible explanation for the decline in the rate of new scientific breakthroughs. Since the second half of the twentieth century, Western culture, especially in academic and scientific circles, has been characterised by a growing epistemic certainty. While the history is nuanced, and there are different sources of it, this is the sentiment behind popular hashtags like #TrustScience or #BelievetheScience.
But there is a major problem with this: over time it undermines our ability to make new scientific discoveries. “Believe the Science” requires us to trust that what scientists tell us today is correct. This inclines us to distrust any new scientific theory that comes along, as it will necessarily contradict the scientists we already trust. When this attitude is widespread, it can only discourage the development of disruptive scientific ideas and act as a brake on their acceptance.
Put differently, if we are certain (or merely very confident) that our knowledge is reliable and correct, then we won't be interested in pursuing new ideas or questioning existing ones. We are confident we have the truth and any other ideas are therefore suspect. If this attitude dominates culturally, then we will have fewer scientists interested in challenging the status quo and disruptive science will be less common - as the evidence shows has occurred since the 1940s.
By contrast, in an era of epistemic uncertainty, questioning and wonder, there will be a stronger incentive for people to question, try out new ideas and pursue counter-intuitive hunches. Not only does the scientific method rely on an attitude of epistemic humility, but humility breeds a cultural environment in which scientific progress is more easily made.
Phil H
Neat analysis of a very important issue. I wonder whether there is a question that sits behind the scenes - why do we seek to create a perception of certainty when the evidence suggests it does not exist? The history of knowledge suggests that shifts of understanding are common, and that reliable and enduring 'truths' are relatively rare. Yet as a society we continue to create a level of false certainty around what we know.
The concept of disruption is an interesting one. My sense is that the term covers a few concepts. One is contest (the extent to which an existing line of thought is questioned). Another is novelty (the extent to which a new line of thought differs from the existing thinking). A third is impact (the effect the new thinking has on both understanding and, potentially, practice in society). There is, I suspect, an implicit assumption that novelty is a major driver of "the productivity of science" which bears more testing.