Triggered lots of thoughts. Your description of 'the' scientific method embodying epistemic humility is compelling.
My feeling is that application of the method in the real world often feels a long way from the core you described. Let me test.
Creating confidence in the results of an experiment relies on creating a state of 'controllability', which is achieved via abstraction and residualisation of the real world. In a sense, controllability is a pre-condition for the replicability of results that provides a base for increased certainty. Without replicability, the scientific method falls down. But even with replicability, you have a potential abstraction and residualisation problem, which may invalidate the results.
I am also conscious that there has been an increasing tendency for 'experts' to claim the mantle of science as the basis of their work. This is particularly prevalent in universities, but is also true more broadly. Here, the scientific method becomes testing hypotheses against uncontrollable, non-replicable real-world data. Replicability is removed as a source of certainty. Abstraction and residualisation remain, to some degree at least, depending on the nature of the experiment, but in a lessened form.
While we can argue whether this type of analysis involves the scientific method or not, the reality is that this type of approach dominates the creation of the 'evidence' used as the basis for real-world 'human' decisions. Confidence in the results of this type of activity is generally based on related notions of representativeness and mathematical theories of statistical validity, as well as 'trust' in the expertise of those undertaking the hypothesis making.
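To make that 'statistical validity' source of confidence concrete: the usual claim is that, under assumed sampling conditions, the mathematics guarantees a known error rate. A minimal illustrative sketch (my own, not part of the original exchange) simulating 95% confidence-interval coverage — note it only works if the sample really is drawn representatively from the population, which is precisely the assumption being trusted:

```python
import math
import random

random.seed(1)
true_mean = 10.0      # the unknown quantity the analyst is trying to estimate
n, trials = 100, 1000

covered = 0
for _ in range(trials):
    # Draw a representative sample (the key assumption).
    sample = [random.gauss(true_mean, 2.0) for _ in range(n)]
    m = sum(sample) / n
    s = math.sqrt(sum((x - m) ** 2 for x in sample) / (n - 1))
    half = 1.96 * s / math.sqrt(n)   # 95% CI half-width, normal approximation
    if m - half <= true_mean <= m + half:
        covered += 1

coverage = covered / trials
print(coverage)  # close to 0.95 when the sampling assumptions hold
```

The guarantee is conditional: if the sample is unrepresentative, the interval is still computed, but the advertised 95% coverage no longer means anything.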
I imagine that this could be argued to be a form of limited rationality, or even a merged science/rationality method (one could argue that such analysis is simply the hypothesis-making step of the scientific method, for example). But it strikes me as a distinct form of 'truth seeking' which is worthy of its own categorisation and consideration.
A few partial responses to a great comment:
I agree that real-world application is a long way from the core. That is partly why I started with two opposing views of science, and I think the first is more common. At the crudest, the thinking runs something like (to draw on the Neil DeGrasse Tyson quote in the article): "Science discovers objective truths. I am a scientist. Therefore what I tell you are objective truths."
I've even heard a DVC of a prominent university say pretty much exactly that in a public forum.
There are substantial arguments that real-world application has always differed from the core. Thomas Kuhn's description of scientific practice as a series of established orthodoxies that are periodically overthrown is one. In the end, it is probably the long-term social norms across fields and the broader scientific endeavour that matter most, rather than the attitudes of individual scientists.
One final point in this meandering response. I think the core logic of the scientific method holds even when we are dealing with non-replicable real-world data. One change coming out of the 'replication crisis' in various fields over the past 15 years has been a push for scientists to pre-register their studies before they conduct them. That means they identify hypotheses and methods before they start looking at data. When scientists start trawling data to find results, the experience is that you get all sorts of spurious statistical artefacts. That is, the logic that works is to form a hypothesis and then test it against the real world (whether through experiments or by examining existing data).
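The data-trawling point can be illustrated with a small simulation (a hypothetical Python sketch of my own, not from the article): with pure noise and no real effects anywhere, screening many candidate predictors against one outcome still yields a steady trickle of 'significant' correlations at a 5% false-positive threshold — exactly the spurious artefacts pre-registration is meant to prevent:

```python
import math
import random

def pearson_r(x, y):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

random.seed(0)
n = 30  # observations per variable

# Build the null distribution empirically: |r| between two independent
# noise series, repeated many times, then take the 95th percentile.
null = sorted(
    abs(pearson_r([random.gauss(0, 1) for _ in range(n)],
                  [random.gauss(0, 1) for _ in range(n)]))
    for _ in range(2000)
)
threshold = null[int(0.95 * len(null))]  # the 5% false-positive cutoff

# "Trawling": test one outcome against 100 unrelated noise predictors.
outcome = [random.gauss(0, 1) for _ in range(n)]
hits = sum(
    abs(pearson_r([random.gauss(0, 1) for _ in range(n)], outcome)) > threshold
    for _ in range(100)
)
print(hits)  # typically a handful of spurious 'discoveries', zero real effects
```

A pre-registered study makes one of these tests and accepts the 5% error rate; trawling quietly makes a hundred of them and reports the hits, which is how noise gets published as signal.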
Interesting. One for further discussion.