We don't need certainty before we act
In fact, how unsure we are often plays an important role in the decisions we make.

So far, most of my writing here has suffered from a typical philosopher's conceit. It has been focused on an individual person, operating largely by themself and mostly preoccupied with knowledge rather than living. While this might describe the weird tribe known as philosophers, it has limited value for thinking through knowledge and truth for real people doing real things in the real world. For this reason, one of my aims for the next year will be to explore more practical issues in some depth and think more about knowledge in the context of ordinary, social life.
There is a lot to this, but a useful starting point is to look again at the connection between knowledge and human action. While I have touched on this previously, there is a lot more to the story. In particular, it has become increasingly clear to me that there is a common but unhelpful view that we can draw a direct, straight line between what someone believes is true and how they will act. It comes up, for example, in many discussions around misinformation and fact-checking. This post will begin to flesh out a more realistic account, based on a series of observations.
It’s not black and white
The most obvious problem with assuming there is a straight line between beliefs and actions is that humans are messy and complex beings, subject to conflicting social, psychological, practical and theoretical pressures. However, it isn't just the messiness of life that undermines this view; problems also arise from the way we hold even very ordinary beliefs.
Imagine I'm hungry and there are some apples sitting in the fruit bowl that I know are good to eat. This, however, doesn't fully determine what I do, even if I like apples and feel like eating one. Some options include:
I pick up an apple, inspect it and give it a sniff before taking a bite.
I grab an apple and cut it into pieces to double-check for worms, before starting to eat.
I pick up an apple and start eating it.
What I know about the apples can translate to a range of different actions, depending on a number of broader factors. The notable point here is that people might believe or know a fact, but they often vary in how much they trust that fact.
Within the field of philosophy, there is a useful concept that describes this dynamic. Instead of focusing on the black-and-white situation where we either believe something or we don't, some philosophers talk instead about our credence in a statement, which is simply a measure of how strongly we believe it. It is often rated on a scale of 0% (complete disbelief) to 100% (complete certainty), but the key insight is that we are often only partially confident in a statement (or theory).1 In the example above, I would have a different credence in the belief that the apples are good to eat in each of the different cases.2
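To make the idea concrete, here is a minimal sketch in Python (the class, names and numbers are all invented for illustration) of a belief carried with a credence rather than a plain true/false value:

```python
from dataclasses import dataclass

@dataclass
class Belief:
    """A proposition held with some strength, not a bare true/false flag."""
    proposition: str
    credence: float  # 0.0 = complete disbelief, 1.0 = complete certainty

# The same proposition held at different strengths, roughly matching
# the three apple scenarios above:
just_eat = Belief("the apples are good to eat", 0.95)
sniff_first = Belief("the apples are good to eat", 0.80)
cut_and_check = Belief("the apples are good to eat", 0.60)
```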
Levels of trust matter
Noticing that we can have different credences in what we believe and know explains many differences in how people behave. Two people might both know that flying in a commercial plane is safer than driving a car, but that doesn't stop one of them being nervous and not wanting to fly. How much they trust what they know varies.
It also explains features of how we make decisions. My observation is that we typically make decisions based both on what we know and on how much we trust that knowledge. Where we don’t trust something fully, we will often weigh up the risks of it being wrong against the likely consequences. So, in the case above, if I’ve had a bad experience with an apple recently, I’ll cut the apple up just to be sure even if there is no indication it will be bad. I suspect we rarely make a decision just based on the facts, but almost always factor in an assessment of how much we trust those facts.3
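One way to picture this weighing-up is as a comparison of expected costs. The sketch below is illustrative only, with made-up numbers, and is not a claim about how we consciously reason:

```python
def should_check_first(credence: float, cost_if_wrong: float,
                       cost_of_checking: float) -> bool:
    """Check the belief first when the expected cost of acting on it
    unchecked exceeds the cost of the check."""
    expected_cost_of_acting = (1 - credence) * cost_if_wrong
    return expected_cost_of_acting > cost_of_checking

# Normally I trust the apples and just eat (0.05 * 2.0 = 0.1 < 0.5):
should_check_first(credence=0.95, cost_if_wrong=2.0, cost_of_checking=0.5)  # False
# After a recent bad apple my credence drops, so cutting it up is
# worth the effort (0.40 * 2.0 = 0.8 > 0.5):
should_check_first(credence=0.60, cost_if_wrong=2.0, cost_of_checking=0.5)  # True
```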
This connects to a further observation. The level of credence, or trust, required for actions varies according to the situation and context. For example, I would say I know that my car will get me to work in the morning and don’t worry about it, even though I am not 100% certain. However, if I decide to go on a long road trip, I'm likely to get the car checked over to increase my trust in the car to a higher level than I need to just get to work.
As a simplistic summary: if I am going to act on a belief and the consequences are reversible, or little harm can result, then the credence required for me to act on it is likely not very high. If, on the other hand, my life depends on a belief, or I am going to lose social status or credibility over it, then the required credence is much higher.4
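Put another way, the credence threshold for acting rises with the stakes. A rough sketch, where the function and its numbers are invented and only the overall shape matters:

```python
def required_credence(stakes: float, reversible: bool) -> float:
    """Illustrative threshold for acting on a belief: modest when the
    action is reversible and low-stakes, approaching certainty as the
    stakes rise. `stakes` is a rough 0.0-1.0 rating of the downside."""
    base = 0.5 if reversible else 0.8
    return min(0.99, base + (1 - base) * stakes)

required_credence(stakes=0.1, reversible=True)   # 0.55 -- the daily commute
required_credence(stakes=0.9, reversible=False)  # 0.98 -- the long road trip
```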
There is one final observation, covering a dynamic that is not clearly captured in philosophical discussions of credence. In economics, it is well understood that there is often a difference between someone's stated and revealed preferences. That is, there is a difference between what they say they would prefer and the choices they actually make.
The same distinction applies to credences. There can be a notable difference between how strongly someone claims (or thinks) they believe something and the beliefs they actually act on. The public figure who promotes public schooling but sends their own children to a private school is a good example. So is the religious person whose life choices are no different from anyone else's.
A useful starting point
These observations about the role of credence in decisions and actions show some of the ways in which beliefs and knowledge do not lead in a straight line to actions. They explain why, for example, there are typically thousands of people who hold a particular extreme ideology (or crazy conspiracy theory) for every one who commits an act of terrorism (or takes a gun to a pizza restaurant). This has consequences for many issues, including decision making.
It is particularly relevant to the various debates around misinformation and fact-checking. If, as I think is likely, most of the crazy information shared online is believed with low credence, then people do not trust it enough to act on it, and it is largely harmless. It also opens up new strategies for influencing people (for good or ill) if the aim becomes increasing or reducing people's credence in certain beliefs, rather than proving or refuting the beliefs themselves.
As such, thinking about how much we trust information, alongside what we know, is a promising framework for understanding the connections between knowledge and action. There is a lot more to explore in later posts.
As a final point, it is worth noting that the concept of credence, especially our credence in certain beliefs varying with the context, is entirely consistent with the broader account of knowledge I have been describing. For example, I have been emphasising that the basic building blocks of knowledge are theories (or models or narratives or similar), not facts. Yet theories don't map onto the world in a simple yes/no way, as they are always better or worse explanations of some aspect of reality.
This goes for everything from quantum mechanics (which presumably isn't universally true, as it conflicts with General Relativity) to my theory about why my friends broke up last week. Thus, for theories, it makes more sense to trust them to varying degrees in particular contexts than to decide they are universally true or false. In other words, it is natural to assign a credence to a theory. And as theories underpin all of our knowledge, it is natural that we assign credences to all of our knowledge.
Notes

1. In my view, the numerical scale should be taken as an illustration of the concept of a credence, but many take it more seriously and want to assign values to people holding particular beliefs.
2. To make a technical observation, assigning a credence isn't the same as stating your confidence that a statement is true, which is how Bayesian approaches to probability treat it. To pick one example, my credence in a statement can be, say, 70% because I think it explains 70% of the situation, rather than because I am 70% confident that the statement is (completely) true.
3. If I'm correct here, there are some important consequences to think through, especially for group, corporate and government decisions. In my experience, the norm is to assume that we can find the facts and then take them as definitely true. Adding a credence, or trust weighting, to information should change the way people make decisions.
4. This question of what levels of credence are needed in what contexts is important and needs further investigation. It is also highly relevant for group and government decision making. Common complaints about risk aversion in decision making might be better explained as an expectation of higher credence in information than is possible. This is all well beyond the scope of this post, but I hope to return to it.
Comments

Looking forward to your exploration of these issues.
I have often wondered about the aircraft example. Sometimes, I have thought it might reflect what you could call destiny-control bias. This bias means that we tend to give more credence to things that leave us in control of our destiny than things that do not (driving a car v being a passenger in an aircraft piloted by someone you have never met).
I wonder also whether you need to consider risk tolerance as an external variable affecting 'expressed' credence. In your apple example, the different decisions could be explained by how willing a person is to suffer negative consequences (the apple has a worm or tastes bad) rather than the inherent credence they give to the apple being ok.