As a working scientist, both of these factors (institutional conservatism, spurious certainty) intuitively seem to me like potential brakes on the rate of progress, because I can ground them in personal experience. A few other factors are also claimed to be brakes on scientific progress relative to the early 20th century, for example the low-hanging-fruit effect (https://worksinprogress.co/issue/scientific-slowdown-is-not-inevitable/) and the related problem that there is simply more to learn before we can reach the forefront of our fields, leaving us less productive time in a research career.
Would I be correct in supposing that you are emphasising these two particular factors (institutions, certainty) because they are the ones we might be able to influence?
It's partly because they are ones we can influence. But it's also partly because I think they exist as a result of our having adopted, institutionally, the wrong mindset about how knowledge development and production work. Ironically perhaps, we seem to have acted as though the philosophers and sociologists of science of the 1960s and 70s, like Kuhn and Feyerabend, were offering a blueprint to adopt rather than a critique of where not to go.
Right: peer assessment of grant proposals equating "merit" with "does this fit the dominant paradigm?" and "will this have a high chance of reliably achieving incremental progress?" I feel that both of these are cultural values we could shift and/or skills that can be learned and unlearned. But it is not worth my while, as a researcher, to attempt to shift culture or change skills while funding is assessed by others who also want low-variance, dependable progress. This feels like a collective action problem: how do we all coordinate on shifting the norm together? I have a feeling that I should probably, at this juncture, read ahead to the posts you have written more recently to see what progress you have made on that question.
Got it, thanks.
Phil H
Neat analysis of a very important issue. I wonder whether there is a question that sits behind the scenes - why do we seek to create a perception of certainty when the evidence suggests it does not exist? The history of knowledge suggests that shifts of understanding are common, and that reliable and enduring 'truths' are relatively rare. Yet as a society we continue to create a level of false certainty around what we know.
The concept of disruption is an interesting one. My sense is that the term covers a few concepts. One is contest (the extent to which an existing line of thought is questioned). Another is novelty (the extent to which a new line of thought differs from existing thinking). A third is impact (the effect the new thinking has on understanding and, potentially, on practice in society). There is, I suspect, an implicit assumption that novelty is a major driver of "the productivity of science", which bears more testing.
Why do we seek to create certainty? One useful answer is a religious one: the original sin of humans is pride. Another is to go back to my first article on Algorithms and Hartmut Rosa's work on Controllability. For him, the quest for control and certainty is fundamental to the myths or worldview of our Western culture. Or maybe we just don't like the idea of not knowing?
The Western culture point is interesting. There is certainly a difference between the Western tradition and other models of thinking here. But I suspect it does not explain lived reality well enough, as what we see is a global phenomenon. Another dimension may be a response to the realities of large-scale societies, in which people take decisions that affect others. The need to convince others that our decisions and actions are for the best leads us to present things as more certain than they really are. My feeling is that this may create (or at least add to) a bias towards confidence and certainty.
I think you get close to the kernel of the predicament, perhaps paradox, facing the recent rise of industrial civilisation and its knowledge base. Where is proof? Is there a truth compass? Can the human mind understand itself? Science has a way of running into paradox. As a mono-project, can it justify itself? There is a tendency to regard technology, mechanism and invention as sufficient proof of a grasp of reality, and this includes the beliefs pinned on the recent mechanisation of aspects of intelligence. Arguably the knowledge base that allowed industrialisation to create an unsustainable mess needs more 'science' to solve the predicament? Hmm ... the 'breakthrough' always remains just around the corner?
A long time ago I was a bit-part player, at national level in the UK, in risk assessment involving genetic engineering. The inability of many scientists to think properly about 'risk' was very evident, especially at 'pundit' level, and it depended on attitudes.
This is the world my children and grandchildren will inhabit, and it is way beyond me technically, but a recent book by the highly qualified Erica Thompson looks as if it treads the same ground where risk is concerned ... quote:
"Escape from Model Land” observes: “If we are serious about addressing lack of confidence in science, it is necessary for those who currently make their living from and have built their reputation on their models to stop trying to push their version of reality on others.”
Nice quote, and to my mind it captures the problem when 'normal science' is dominant: science becomes self-referential, focused on assessing against a version of reality rather than on the world out there. Or, to update Mark Twain: there are four kinds of lies: lies, damned lies, statistics and models.
More from Erica Thompson and her discussion of epistemic humility. (I mentioned in my own retweet a guy called 'Humble Knowledge'. Smile.) Your point gains traction. See her excellent book and recent podcasts. @H4wkm0th — they use the term.
Quote tweet by David Roberts: "so much of this discussion on epistemic humility ... the need for diversity, performativity and conviction narratives chimes with my own take on perspectival realism (not just American pragmatism)"
Thanks. And if you haven't found it, I've started sharing my posts directly on Twitter: @HumbleKnow. My Twitter game isn't very strong, but they are now easier to share.
I have tried Google but can't find you. Can you give a link?
This should work: https://twitter.com/humbleknow