What we can know about knowledge
Building a new, human-centred epistemology that recognises our limits
From the start of this year, much of my writing here has focused on building a new epistemology based on humility and limits. I have been examining aspects of what human knowledge is, and what it isn't, in order to rethink our understanding of knowledge. Given that this has evolved over many different posts, I thought it would be helpful to bring it all together into a summary of this new epistemology or account of knowledge. As it is only a summary, you will need to follow the links for more detailed arguments.
My starting point is that we are, obviously but importantly, humans with particular ways in which we operate, conceive of and interact with the world. We cannot remove these human elements from epistemology as we would then build an epistemology for deities, aliens or something else. Instead, we need to understand how knowledge works for humans. This has a range of consequences, including needing to give up our desire for epistemic certainty.
Definitions of knowledge
To explore epistemology more precisely, it is useful to understand something of the dominant approach within much modern philosophy. Analytic philosophers habitually try to understand knowledge by defining the conditions under which it is correct to say that a person A knows a statement or proposition P. The starting point for this approach can be found in the following definition:
A knows that P if, and only if, (i) A believes P; (ii) P is true; and (iii) A's belief that P is justified.
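Schematically, and using notation introduced here purely for illustration (K for knows, B for believes, T for is true, J for is justified), the tripartite definition can be written as:

```latex
% Justified True Belief, schematically:
% A knows P iff A believes P, P is true,
% and A's belief that P is justified.
K(A, P) \iff B(A, P) \land T(P) \land J(A, P)
```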
While there are widely acknowledged flaws in this specific definition of knowledge as ‘Justified True Belief’,1 the philosophical work has mostly gone into providing an alternative definition rather than questioning the approach itself. However, there are some significant flaws with the whole approach.
Firstly, given it is inspired by formal and mathematical methods, it leaves the human out of the equation. When we consider how it works in practice, what looks like a formally precise definition ends up messy and circular. A definition like this works only if its terms are independent of each other, but in real human situations they are not.
We can see this most easily if we consider the definition from the perspective of 'We, the whole human race' rather than one person A. How can we (the human race) come to be sure that some statement P is true independently of coming to know that P, or independently of our justification for P? In practice, our process for deciding that P is true, or for establishing that we know P, is exactly the same as our justification for P. So it makes little sense to treat these concepts separately when deciding whether or not we know that P. In other words, the definition doesn't provide us with anything substantive about what knowledge is.
To look at the same issue from a different angle, this definitional approach requires us to have an objective method of deciding whether any statement P is true that is independent of the validity of A’s justification for P. However, as argued previously, there is no neutral way to choose between theories and we humans do not have access to an independent, objective perspective. We are not God and cannot transcend our human limits. A formal definition like this fails to account for the reality of human knowledge.
Moreover, this approach fails in another crucial way. For humans, the basic units of knowledge are theories and abstract representations, rather than facts and observations. Human knowledge is built out of plural structures, and the truth of individual statements or facts depends crucially on the broader theories or worldviews they are expressed within. The definitional approach outlined above focuses on knowledge of individual statements and so assumes the opposite: that knowledge is built up out of individual statements.
So how does human knowledge work?
As noted, human knowledge is made up of plural abstracted representations of the world - commonly theories, worldviews, or accepted stories of various sorts. These abstracted representations can be thought of like pictures or models of what the world is like. Of note, humans are finite and our capacities for understanding cannot comprehend everything about the world - so these representations can only be partial and are more like sketches than photo-realistic pictures.
The way that we think in terms of representations that try to build a coherent picture of reality is reflected in the language we use when trying to figure out what is true. When things are murky, we inevitably construct different theories to try to explain the situation and look to decide between them. We might weigh up different theories about who ate the last cookie, or different theories to unify fundamental physics. At an even more comprehensive scale, religious or civilisational worldviews are different, often competing, abstracted representations or theories that provide a coherent theory or picture of reality. This means that, in practice, we humans know theories first and then statements or facts second.
Obviously, just having a theory isn't (or shouldn't be) enough for it to count as knowledge. To get to knowledge, we have to decide or judge that the theory is true. To do this, we compare the theory with the parts of reality it is trying to describe or explain. This can occur directly (e.g. a scientific experiment or interviewing people involved) or indirectly by comparing it with theories and information we have already accepted as true.
Importantly, just as the theory we are testing is typically made up of various different parts, the testing process is rarely simple. We need to test various aspects of the theory for accuracy against the world. We cannot just test individual facts or hypotheses within the theory. We accept that a theory is true, and therefore that we know it, if it stands up as a whole to our testing as an accurate description and prediction of reality.
Importantly, this testing cannot occur from a neutral or objective position where we see reality or the facts independently of our theories or representations. Being human means that we cannot transcend our existing knowledge, theories, subjectivity and abstracted representations. Instead, we can only test from within the framework of a theory. This is more intuitive than it might sound. In practice, we let a theory explain what the world will be like on its own terms and then we can test whether reality fits these predictions.
This is one reason why, for us as humans, the process of comparing and deciding between different theories is imperfect and imprecise. Two competing theories can generate different predictions and different ways of assessing them, and it is common for both assessments to be somewhat inconclusive, leaving us without a definitive answer. Moreover, we don't have infinite time and resources, and much of the information we have is to some extent vague or ambiguous.
Very often the best we can do is to accept one theory over another because it is the best fit to the current evidence. Where a theory is clearly a better fit to reality than any competitor, we will then decide (with justification) that it is true and therefore accept it as knowledge.
Necessary humility
This process means that there is often (perhaps always) the chance that what we know is open to revision as new evidence or new ways of testing emerge. This assertion of epistemic humility is what we should expect of a human-centred understanding of knowledge given everything we know is more a sketch of reality than an exact description. Sketches can always be improved or supplemented. This does, however, raise an important question (at least to philosophers): did we really know something if that knowledge later turned out to be incorrect?
In one sense, the answer has to be that we didn't really know it. If so, we could never really claim to know anything, as it is (at least in principle) possible that everything we think we know could turn out to be wrong. However, this applies the wrong criteria for knowledge. Rejecting as knowledge anything that might later turn out to be incorrect assumes that we can, and should, achieve epistemic certainty. As already argued, the history of human thought has shown us that epistemic certainty is not possible. We therefore need to accept that, as humans, we must hold ourselves to a lower standard.
Importantly, accepting this lower standard works as a good description of human practice. We humans accept as true (and therefore as knowledge) any theory that we have tested and that stacks up as a good description of reality - as long as there isn't any incompatible theory that is similarly good or clearly better. Where these conditions hold, we see ourselves as justified in accepting something as knowledge, at least until we discover something new.
This may not be philosophically satisfying to many, but it describes how humans operate, clearly works (well enough) and seems to be the highest standard we can achieve. We cannot wait to completely test a theory or to achieve complete certainty as we are limited in time, effort and access to reality. We have to (and do in practice) accept the best we have as knowledge and act on that - until we discover something is wrong with it and then we (hopefully) change.
Humility, not relativism
This account of knowledge and truth leads to one somewhat uncomfortable conclusion, at least for logicians and certain types of philosophers. There can be two competing theories for something, and both can similarly qualify as true or as reliable knowledge. As with different pictures of something taken from different angles, it is possible that different theories describe a situation equally well according to their own internal predictions. If both work sufficiently well, we can count them both as knowledge, although our aim is then to find a better, bigger theory that incorporates both. This is precisely how many scientific fields, including fundamental physics, work. For example, there are situations where quantum physics and the general theory of relativity cannot both be true - but we accept each of them as true for what it describes well and continue to look for some way to resolve the contradiction.
Importantly, however, this does not mean we end up with the epistemic relativism we often hear espoused. A good example is that people will sometimes talk about 'my truth' and 'your truth' as though these are separate, incompatible and equally well justified. The test of whether something is true, or counts as knowledge, is not who owns it but how well it describes or predicts reality. It is by testing our theories against something outside ourselves that we are justified in deciding they are true; the mere fact that I or you believe something is not decisive. Moreover, we are only justified in accepting something as true until a better theory or new evidence becomes available. Wilfully ignoring other information because it doesn't align with my beliefs or 'my truth' remains poor epistemic practice.
To summarise, human knowledge is built as humans construct theories, representations or models of the world and test these against reality. These representations are never fully precise and the testing process is rarely definitive as it cannot transcend human limitations. We count a representation or theory as knowledge if we judge it accurately describes reality (so far as it tries to) and there is no other comparably good or better theory. However, this could always be overturned as new evidence or a new way of testing may be discovered.
Moreover, the fact that knowledge is always potentially defeasible simply describes some of the limits of human knowledge. Epistemic certainty is not possible, and so we must always act on the knowledge we have while remaining humble about it. We may yet turn out to have got something wrong.
Responses and comments are very welcome as I am hoping to find out where I have got this account wrong.
A fairly detailed overview is at https://plato.stanford.edu/entries/knowledge-analysis/
Neatly resolved. I wonder if it is worth wrestling some more with the dynamics which lead to this result. One element seems to be time. You mention the challenge of establishing knowledge at any point in time. Your description seems to presuppose two things: (1) the absence of 'complete' knowledge today; (2) the potential for more knowledge tomorrow. It also seems to lean heavily into our humanness, which itself has three dimensions: (1) we are not God and therefore absolute (transcendent) knowledge is beyond us (which means that while more knowledge is possible, complete knowledge is not, no matter how much time we have); (2) we are diverse in our thinking traditions and experience (which naturally leads to differential theories and differential interpretations of the accuracy of theories); (3) we are interdependent (none of us operates as an individual thinking machine; consequently our view of 'knowledge' relies heavily on a set of relational (trust-based) conditions rather than a set of analytic ones). In combination, these factors mean that epistemic certainty is beyond our reach. An interesting question you may not have answered is why these factors (assuming I am close to understanding your proposition in the way you do!) do not lead to a conclusion that we should be epistemically skeptical. I suspect your answer lies in the practical (humility seems to work ok) but I am not sure you have fully made this case.
It is useful for me that you start with a simplified example of risk assessment. For quite a long time in my ancient past I was involved in risk assessments that required scientist inputs (expert judgment). It was my experience that some scientists, irrespective of their level of expertise, had great difficulty doing risk assessment. Quite a few of them should be kept well away from the responsibility. Leaving aside the influence of social environment, at the time I put their problem down to doctrine(s); for the more meticulous, perhaps a doctrine based on an understanding that knowledge could only be acknowledged if sufficiently verified in theory. There was never enough.
Recently I have come across the work of Iain McGilchrist (see his massive argument base in 'The Matter with Things'), which might begin to offer an intelligible explanation for, among other things, 'doctrines'. I wonder if these doctrines might be related to 'mental maps' that I understand neuroscientists look for, and find mostly though not entirely in human cognition. (Some other creatures appear to have mental maps.)
My 2nd related thought might contribute something. Animals have highly developed skills in discriminating, for example, food from non-food in complex reality, requiring little obvious training via trial and error. Despite this, error can be serious. We have cases in the UK where river gravels are contaminated with small metal lead shot / weights. Swans need gravel for their digestion and are known to accumulate lead poisoning. Their otherwise very adequate and instant recognition and participatory skills fail them. ‘Their world' does not tell them enough. I think we can address many of our own limitations plus doctrines, but, even under the specific conditions of scientific investigation, we continue to require, like the swans, what I tentatively call 'participatory knowledge'.
It is possible these days to 'copy and paste' reliable engineering design. ‘Reliability’ more generally, though, is difficult to mechanise (see Erica Thompson’s recent ‘Escape from Model Land’), and we are back to expert judgment and the in/ability to see the big picture, the complex reality (McGilchrist again?).