Articulate uncertainties and make better decisions
Our confidence in information is an important input we should not ignore

In my recent article on certainty and action, I noted that when we look at the connections between knowledge and action, we should think about how much we trust information as well as what we know. As we’ll see below, this is because our credence, or level of belief, in information is an important factor in the decisions we make - not just what the information is.
This means that we need to pay attention to how much we trust our information when making decisions. If our credences are inaccurate, then our decisions won't be reliable. And whenever we assume more confidence or certainty than is justified, as I have argued we often do, we will be making poorer decisions than we otherwise would. To counteract this, I will suggest that we need to be explicit about the reliability of any information we are using, rather than presuming we can have confidence in it.
Let’s provide some evidence for these claims, and we’ll begin with an illustration.
Our level of confidence changes our decisions
Consider a doctor who has to make a diagnosis and decide on a treatment. Once the diagnosis is made, the doctor could have a number of different levels of confidence in it. We'll pick four that span the range of options.
1. The doctor is 100% confident in the diagnosis.
2. The doctor is very confident, say 90+%, but is aware it could be something different.
3. The doctor thinks the diagnosis is most likely correct but has some doubts.
4. The current diagnosis is the best explanation the doctor can give, but they aren't confident it is correct.
Unsurprisingly, the treatment that the doctor recommends will depend not only on the diagnosis, but on their confidence - combined with their (and the patient's) risk tolerance and the seriousness of the treatment. If, for example, the best treatment for the diagnosis is major surgery, the doctor is unlikely to recommend it if their confidence falls into cases 3 or 4. The intrusiveness of the treatment is probably not worth it given the risk that the diagnosis is wrong. Even in case 2, the doctor might order some extra tests to confirm before making the call.
However, if the patient is in a desperate situation, then the doctor could recommend major surgery even if their confidence falls under cases 3 or 4. Their willingness to take risks would be much higher as the consequences of doing nothing are greater.
This interplay between the diagnosis, the confidence or credence, risk tolerance and various other factors is particularly interesting if the treatment is low cost and won't have any lasting consequences - or is even reversible. In this case, the doctor is much more likely to decide on the same treatment regardless of how confident they are. A typical example is prescribing antibiotics for bacterial infections - it is quite common to do so even if the doctor isn't completely sure it is a bacterial problem.1
While the treatment will be the same, the actions the doctor takes still vary depending on their level of confidence. If it falls in case 1, there will be no question as to whether the antibiotics will work and so the doctor probably doesn't think about it again. In case 2, the doctor won't be concerned about the treatment working but won't be entirely shocked if the patient comes back still sick - and probably will already have some ideas about what else it could be.
In case 3, the doctor will probably recommend the patient come back promptly if the treatment isn't working, and may even keep thinking about alternatives in case they do. In case 4, the doctor will probably book the patient in for a follow-up appointment and order more tests alongside the treatment, as their confidence it will work is low.
The key point of these examples is that the level of confidence (or credence) in the diagnosis affects the actions a doctor will take, even if the recommended treatment is the same. Sometimes the differences are subtle and sometimes they lead to entirely different decisions. Hopefully it is now clear why we need to have realistic credences for the information we use, if we want to make good decisions.
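To make the pattern concrete, here is a minimal sketch of the expected-value reasoning the doctor example implies. The function, the credence levels and the costs and benefits below are illustrative assumptions of my own, not clinical figures; the point is only that the same calculation pushes a costly, irreversible treatment below the line as confidence drops, while a cheap, low-risk one stays worthwhile across the whole range.

```python
# A minimal sketch, with made-up numbers, of the expected-value reasoning
# behind the doctor example. All figures are illustrative assumptions.

def expected_value(credence: float, benefit_if_right: float,
                   cost_if_wrong: float, cost_of_treatment: float) -> float:
    """Expected value of acting on a diagnosis held with a given credence."""
    return credence * benefit_if_right - (1 - credence) * cost_if_wrong - cost_of_treatment

# Major surgery: large benefit if the diagnosis is right, large cost if it is wrong.
for credence in (1.0, 0.9, 0.7, 0.5):   # roughly cases 1-4 above
    ev = expected_value(credence, benefit_if_right=100, cost_if_wrong=80, cost_of_treatment=30)
    print(f"surgery, credence {credence:.0%}: expected value {ev:+.0f}")

# Antibiotics: modest benefit, low cost, little downside if the diagnosis is wrong.
for credence in (1.0, 0.9, 0.7, 0.5):
    ev = expected_value(credence, benefit_if_right=20, cost_if_wrong=2, cost_of_treatment=1)
    print(f"antibiotics, credence {credence:.0%}: expected value {ev:+.0f}")
```

With these (assumed) numbers, surgery only makes sense in cases 1 and 2, while the antibiotics stay worthwhile even in case 4 - which is exactly the pattern described above.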
Traps from false expectations
The trust we place in fact-checking, or in peer review as the gold standard, is a good example of our cultural expectation that accurate information is easy to find, and so that we can have very high levels of confidence, even certainty, in what we know. This dynamic is especially pronounced in government contexts, as political imperatives mean that politicians feel obliged to express absolute confidence in their decisions and actions. This sets the tone for the whole system: the baseline expectation is a very high level of confidence. However, this has some notably negative consequences.2
One is that it scrambles our assessments of risk. When the expectation is that we have certainty about our information, we tend to fall into one of two traps. The first is to presume certainty where it doesn't genuinely exist so we can make a decision. This can lock us into high cost or irreversible decisions that have a high risk of failure. It is like locking in major surgery when we are only 60-70% confident we have the diagnosis right. There is a significant chance the surgery won't cure the problem and the patient has to deal with the consequences of the surgery for no benefit. If these risks were known and accounted for, then that outcome may be acceptable. But not if everyone had, or expressed, certainty in the diagnosis and it didn't work.
The second trap is to delay making a decision until we have the certainty that we expect, even where it isn't possible. Those who have worked in government, or many other organisations, will have been through situations where decision makers endlessly want more research, or analysis, or consultation, before they make a decision. There can be various reasons for this behaviour, but a common factor is the implicit demand for certainty before making a decision. We assume that certainty is possible, so we keep expecting it. The result is a risk averse organisation that doesn't achieve much.
Assuming certainty is achievable also makes evaluation and learning difficult, and so can doom us to making the same mistakes over and over again. To go back to our example, a doctor who is somewhat unsure of their diagnosis will be naturally open to feedback, re-evaluating their diagnosis and adjusting the treatment where needed. Another doctor who is entirely certain about the diagnosis is far less likely to be open to change and will almost certainly treat the next patient with the same symptoms exactly the same way - regardless of whether it worked.
Given the political imperatives that encourage claims of absolute confidence, government systems tend to behave like the doctor who is entirely certain. Policies or programs are assumed to be effective from the start and the role of feedback or evaluation is to explain how effective they have been - as is natural if we know with certainty they are going to work. However, as this confidence is often misplaced, it easily creates a culture of resistance to evaluation and learning that seems completely at odds with the facts on the ground.
By contrast, if an organisation has opted for a policy, program or plan of action because it thinks it is the best option, but isn't sure, then evaluation and learning are very natural and likely expected.
False expectations of certainty are an important factor behind many frustrating behaviours in large organisations and governments. Shifting them won't solve all of the problems but will make a difference. Unfortunately, shifting broader organisational and cultural mindsets is very hard.
A modest suggestion
A practical step to start to shift our expectations would be to add confidence ratings, or credences, to key information or recommendations. The idea would be to not just provide the information or the recommendation but add an explicit assessment of how reliable we think it is, or how confident we are that it will work.
This is done in some military and intelligence contexts, where an assessment of the likelihood of a statement being true is provided. The purpose is to make decision makers aware of the limitations of the evidence so they can make more informed decisions. As the decision is made with an explicit awareness of those limitations, the decision makers have to take into account the possibility that it rests on incorrect information, and so won't be blind to a range of risks.
For context, there is a range of issues with relying on probabilities like this, and I'd suggest we should focus more on the reliability of a piece of information for decision purposes. For example, some statements are 100% factually correct but only provide partial information and so shouldn't be relied on. What this looks like in practice requires more work, as it tries to make explicit what we often do implicitly.3
Regularly adding this ‘meta-information’ that tags our credence in the information provided is a step that will deliver incremental improvements. In practice, it will be variously gamed, ignored, manipulated and misunderstood like any other process. Nevertheless, forcing people to explicitly state their confidence in information or a recommendation should sharpen their thinking, as will putting explicit uncertainties in front of decision makers. They won't be able to blindly assume certainty as often happens now, especially if assigning complete or 100% certainty is banned (as I would recommend).
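To illustrate what this ‘meta-information’ could look like in practice, here is a deliberately simple, hypothetical sketch of a credence-tagged recommendation. The field names, the 0-1 scale and the example values are my assumptions for illustration, not an existing standard.

```python
# A hypothetical sketch of credence-tagged 'meta-information' attached to a
# recommendation. Field names, the 0-1 scale and the example values are
# illustrative assumptions, not an existing standard.
from dataclasses import dataclass


@dataclass
class TaggedRecommendation:
    recommendation: str      # the recommendation or key statement itself
    credence: float          # stated confidence it is correct or will work (0.0 to 0.99)
    reliability_note: str    # caveats on how far the underlying evidence can be relied on
    evidence_basis: str      # where the stated confidence comes from

    def __post_init__(self) -> None:
        # Disallow claims of complete certainty, per the suggestion above.
        if not 0.0 <= self.credence < 1.0:
            raise ValueError("credence must be below 1.0 - complete certainty is not allowed")


briefing_line = TaggedRecommendation(
    recommendation="Proceed with the pilot program in two regions",
    credence=0.7,
    reliability_note="Evaluation evidence comes from a single comparable program",
    evidence_basis="Internal analysis plus one external evaluation",
)
print(briefing_line)
```

Even something this bare-bones forces whoever writes the briefing to commit to a number below certainty and to state the main caveat, which is most of the benefit being argued for here.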
For information, and by way of illustration, my confidence that adding this ‘meta-information’ to briefings will improve decisions over time is high, but it isn’t strong enough that I think we should mandate it across an entire organisation tomorrow. It needs testing to identify unintended consequences and what extra education people will need to understand the new ‘meta-information’.4
At the core, if the nature of our world forces us to be humble about what we know, then it makes sense to be explicit about this when we make decisions. Assuming a certainty that isn't possible is unhelpful and is an important factor in a number of well-known issues in our modern world.
1. To be clear, I am not recommending this practice but note that it is common.
2. I have left case studies out of this section to keep the post short. I am happy to cover case studies in a future post if people are interested.
3. If technically-minded readers want more info on the difference between probability and reliability as I’ve used them here, please let me know.
4. If anyone wants to test the idea out, please get in touch as I’d love to be involved.
Liking this a lot.
This may be wrong, but my feeling is that you have got three core concepts that work well together. Epistemic attitude provides a starting point for human inquiry and decision making. It tells us about an expectation about what is possible. This expectation translates into a personal credence scale which we use to assess the actual level of certainty we have about information. My theory is that each of us has a unique credence scale, and that these are often misaligned in practice even if our starting attitudes are the same (exploring why this is so might add depth to your analysis). Use of this unique credence scale leads to an expressed level of confidence about the information in front of us. It is this revealed level of confidence that drives the actions we choose individually and collectively.
The idea of adding 'meta information' by way of confidence intrigues me. I can see some value, but doubt that assessments can be made in a way that is consistent enough to be useful in practice. I also wonder whether it distracts from the important role values play in human decision making. Knowledge is a lot, but it is not everything.