Today, practical epistemic humility, whether it is named as such or not, is most often discussed in the context of cognitive biases and other limitations in human thinking skills. These biases have been investigated in considerable detail by psychologists, and this work has formed the basis of fields like behavioural economics. They were most famously popularised by Daniel Kahneman (drawing on his own work and others’) in Thinking, Fast and Slow. But any concept that is the subject of many online listicles and infographics has clearly gone mainstream.
However, we should always be careful with concepts and results like this because, as is common in the human sciences, we have a tendency to overplay both the novelty and the strength of our findings. The interactions between epistemic humility and cognitive biases aren’t as straightforward as they might seem. Before we get to that, we should first introduce cognitive biases for those who haven’t come across them before.
Introduction to Cognitive Biases
The core psychological insight that is expressed by the concept of ‘cognitive biases’ is that we, as humans, have in-built patterns of thinking or reacting to information that can easily lead to illogical or poor decisions, especially in terms of genuinely understanding reality. The conclusion is that these are deeply ingrained, or hard-wired, so that they often affect us even when we are trying to avoid them. A few common examples should be sufficient to demonstrate the point, although psychologists have identified hundreds (at least if Wikipedia is to be trusted).
A simple example is confirmation bias. This captures the way that we, as humans, typically find information that confirms what we already believe to be more compelling than it should be, and discount the validity of information that goes against what we believe.
You can easily observe it in yourself. Pick a controversial topic where you have a fairly clear view, and find a few random articles, blog posts or tweets on the topic. Instinctively, you will most likely find the ones that support your position to be better argued and more compelling than the others, even if from a neutral perspective the arguments and points are equally good (or bad).
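To make the mechanism concrete, here is a minimal simulation sketch in Python. The update rule, weights and numbers are all invented for illustration; this is not a validated psychological model, just a picture of what systematically under-weighting disconfirming evidence does to a belief over time.

```python
# A toy model of confirmation bias as asymmetric belief updating.
# (Hypothetical rule and weights, chosen only to illustrate the effect.)
import random

def update(belief, evidence_supports, weight_confirming=1.0, weight_disconfirming=1.0):
    """Nudge a belief (between 0 and 1) up or down, with separate weights for each direction."""
    step = 0.01
    if evidence_supports:
        return min(1.0, belief + step * weight_confirming)
    return max(0.0, belief - step * weight_disconfirming)

random.seed(42)
neutral_reader, biased_reader = 0.7, 0.7      # both start with the same view
for _ in range(200):
    supports = random.random() < 0.5          # the evidence itself is evenly split
    neutral_reader = update(neutral_reader, supports, 1.0, 1.0)
    biased_reader = update(biased_reader, supports, 1.0, 0.3)  # discounts disconfirming evidence

print(f"neutral reader: {neutral_reader:.2f}, biased reader: {biased_reader:.2f}")
# On average the neutral reader stays near the starting view, while the biased
# reader drifts towards certainty even though the evidence is balanced.
```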
Another example is availability bias. This describes the way we give greater weight to evidence or situations that we can remember easily or that are readily available to us. Similarly important information that isn’t front of mind or easily accessible gets discounted, not because it matters less, but simply because it is harder to call to mind.
This occurs when, for example, we think there is a dramatic rise in knife crime or racism because we have seen more newspaper headlines or social media posts on the topic recently. The background situation may not have changed but the availability of information has shifted and so we think the world has changed.
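A similar toy sketch, again with entirely invented numbers, shows how an estimate built from whatever is easiest to recall tracks media coverage rather than the unchanged underlying rate.

```python
# A toy model of availability bias: the true incident rate is constant, but
# reported incidents are far easier to recall than unreported ones.
# (All probabilities and counts are hypothetical.)
import random

random.seed(0)
TRUE_WEEKLY_INCIDENTS = 10  # the underlying rate never changes

def recalled_incidents(headlines, recall_if_reported=0.9, recall_if_unreported=0.1):
    """Count the incidents a person can bring to mind in a given week."""
    reported = min(headlines, TRUE_WEEKLY_INCIDENTS)
    unreported = TRUE_WEEKLY_INCIDENTS - reported
    recalled = sum(random.random() < recall_if_reported for _ in range(reported))
    recalled += sum(random.random() < recall_if_unreported for _ in range(unreported))
    return recalled

print("perceived incidents in a quiet news week:", recalled_incidents(headlines=2))
print("perceived incidents in a heavy coverage week:", recalled_incidents(headlines=8))
# The perceived frequency rises with coverage, not with the (unchanged) true rate.
```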
While 'cognitive bias' is a term developed, extensively refined and tested by modern psychology, it is important to note it is not a new idea. People have been writing about weaknesses in human thinking for a very long time. Francis Bacon in the early 1600s, for example, provided an eloquent description of confirmation bias:
The human understanding when it has once adopted an opinion (either as being the received opinion or as being agreeable in itself) draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects or despises, or else by some distinction sets aside and rejects; in order that by this great and pernicious predetermination the authority of its former conclusions may remain inviolate.1
If humans have systematic tendencies in the way we think and reason that regularly give rise to flawed conclusions, then it should not be a surprise that knowledge is difficult to acquire, even with the best of care and attention. In other words, the existence of cognitive biases provides evidence for epistemic humility. However, cognitive biases are often treated as evidence for a stronger conclusion.
Are humans bad at thinking?
If our thinking is riddled with cognitive biases, and most of our efforts are (due to confirmation bias and other tendencies) focused on justifying what we already think, why should we treat human rationality as a reliable guide to anything? Surely cognitive biases demonstrate that humans are flawed thinkers and we should replace human thought, if we can, with something better? Won’t completely rational AI systems make better and more reliable decisions than humans?
This is a line of thinking that is easy to find, not just in the echo chambers of Silicon Valley. However, a longer perusal of the types of biases that have been identified should give us pause, especially as there is a group of egocentric and overconfidence biases that describe how we habitually over-rate the strength of our insights and conclusions. Are many people too confident in their conclusion that human thinking is flawed and of poor quality? There are a number of reasons to think they might be.
For a start, humans are (short of a deity) the most capable thinking entities we have discovered in the universe. Various animals have practical thinking and problem-solving abilities, but none has the capacity for abstract thought and communication that humans do. Various computing and AI systems are capable of certain types of thought or problem solving, say playing chess or folding proteins, but these are highly specialised. In any case, they have been constructed via human thought.
So what are we comparing humans to when we conclude they are poor thinkers? We can only be comparing them to an abstract ideal of a rational thinker that doesn't exist. To use Wikipedia as the expression of conventional wisdom:
A cognitive bias is a systematic pattern of deviation from norm or rationality in judgment.
Given that cognitive biases affect everyone, this 'norm' is clearly a normative judgement about how we should think as opposed to a description of how we normally do think. Yet is this norm something that can actually exist?
Kahneman’s work offers an interesting reason why the idealised rational thinker may not be possible in the real world. He distinguishes two styles of thinking that all humans switch between: System 1 and System 2. System 1 thinking is fast, efficient and built on longstanding habits and associations. It is accurate for most situations that we normally experience. System 2 thinking, on the other hand, involves slow, deliberate and careful thought - and is required to tackle difficult or unfamiliar problems. It is more likely to be accurate, but doesn’t by itself avoid all of the biases. This video is a nice introduction if you haven't come across the concept before.
A key point is that we operate most of the time in a System 1 mode because System 2 thinking is energy intensive and draining. We cannot keep it up for very long. This points to real-world trade-offs between the energy required for intensive, accurate information processing and the ability of a system or organism to survive on the energy it can gather.
This suggests that the energy requirements for the idealised rational thinker to function in the real world may be prohibitive. Our experience creating various information-processing and decision-making machines adds some evidence to this. Capturing and analysing video that is equivalent to our routine visual experience of the world is highly data and energy intensive. Both the number of sensors and the computing power required for autonomous vehicles are surprisingly high. And these are single applications of information processing and thinking, compared to the highly flexible information-processing capabilities that humans have.
Does our ideal of rationality really work?
While we have reasons to suspect that an idealised rationality is not possible in our energy-constrained world, there are also reasons why it isn’t quite the ideal people take it to be. For a start, in the article on emotions, we covered the case of people with damage to their ventromedial prefrontal cortex. These people are hyper-rational in their decision-making along the lines our ideal would suggest, yet incapable of actually making a decision.
A second point is that many psychologists argue that cognitive biases play important and positive roles in human life. Kahneman’s approach posits that they are useful, energy-saving shortcuts that help us navigate human life. Others have argued, for example, that confirmation bias helps us establish social connections.2
A third issue is that the hyper-rational ideal that is taken as the comparison leads, in some cases, to poor or odd decisions. The ideal doesn’t always work, which suggests it isn’t quite as ideal as we think. We will look at one example in the context of a particular cognitive bias: the sunk cost fallacy.
This bias captures our tendency to think that previous investments require us to continue to invest in something beyond the point at which it makes sense to do so. It is the mindset that has us thinking that “I've already invested millions in this business, I can't step away now” - even if the business is clearly a lost cause.
Analysis of why this approach is a fallacy typically runs along these lines: the millions have already been spent and are gone, so spending more money should (always) be a new, point-in-time, forward-looking decision, not one in defence of past decisions. Obviously the logic holds well beyond money decisions: we can continue chasing a career, a degree or a partner well beyond a sensible point because we have already put so much in and can’t step away now.
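To make that forward-looking rule concrete, here is a minimal sketch with invented figures; the numbers and function names are hypothetical and only illustrate that the amount already spent never enters the calculation.

```python
# The 'ignore sunk costs' decision rule: only future costs and benefits matter.
# (All figures are hypothetical.)

def should_continue(expected_future_benefit, additional_cost_to_finish):
    """Forward-looking rule: continue only if the future benefit exceeds the future cost."""
    return expected_future_benefit > additional_cost_to_finish

sunk_cost = 2_000_000        # already spent; irrelevant under this rule
additional_cost = 1_000_000  # still needed to finish
expected_payoff = 400_000    # realistic expected return from finishing

print(should_continue(expected_payoff, additional_cost))  # False: stop, despite the $2m sunk
```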
However, this rational logic can go awry in real-life situations. Imagine that you have spent thousands of dollars on a holiday but, when the time comes, the trip is now inconvenient, the weather forecast is questionable and you are going to have to spend a little more money than you expected. Given the initial money has already been spent, the logic above would tell you to ignore it and simply weigh up whether the trip is still worth the inconvenience. With the past spending treated as irrelevant, you might very well just not go.
Few people would be comfortable with this decision, however rational and logical the argument. Instead we are compelled to back our past investment and go through with the holiday. Importantly, there are good reasons for this. No decision like this is a one-off, consequence-free decision. Instead it changes how we relate to others and to ourselves over time - in this case how we think about spending money on future holidays. Would you be willing to spend money on a future holiday if you don’t trust that your future self will actually take the holiday?
While this survey is not exhaustive, it demonstrates that our typical ideal of rationality may not be so ideal in real life. It can sometimes lead to odd decisions and it isn’t clear it can actually exist in the real world.
Humble about our biases and our knowledge
As noted before, the existence of systematic cognitive biases offers a simple argument for epistemic humility: our thinking processes are flawed and therefore it is natural that knowledge is hard. However, this argument is a little too simple. We don’t have a real-world example of a better set of thinking processes to judge ours against, so we test it against an abstract ideal that may not be possible. In any case, the abstract ideal tends to reflect a hyper-rational actor concerned with mathematically balancing interests at a specific point in time. It reads more like the rationality of an idealised economic entity in a market than that of a human in a social environment.
A number of factors we have already covered here suggest that we can correct the simple argument by reversing the direction of explanation. Knowledge is hard as what we know is only a collection of partial sketches of the world and accessing knowledge requires the cooperation of lots of parts of our selves: thinking, emotions, bodies and actions. Given we are working with clusters of overlapping abilities working on partial sketches of reality under energy constraints, our thought processes can never be comprehensive and complete. Instead, we have to rely on a variety of patterns, shortcuts or heuristics to make sense of and operate within vast amounts of partial information. In this context, cognitive biases are entirely unsurprising.
Knowledge isn't hard because we are afflicted by cognitive biases and our thinking is flawed. Instead, because knowledge is hard, we have to rely on heuristics and rules of thumb to achieve knowledge in the real world. Given our constraints, these heuristics unsurprisingly turn into biases when they are misapplied.
1. Francis Bacon, Novum Organum (1620), Section XLVI. Text online at https://constitution.org/2-Authors/bacon/nov_org.htm
2. Dardenne, B. and Leyens, J.P. (1995), “Confirmation Bias as a Social Skill”, Personality and Social Psychology Bulletin, 21(11): 1229–1239. doi:10.1177/01461672952111011