The question of second hand knowledge
What does knowledge look like when we avoid philosophical stereotypes and start thinking about the social dynamics of knowledge?
This year has begun with an effort to move my writing beyond the typical philosopher's conceit that we only really need to focus on "an individual person, operating largely by themself and mostly preoccupied with knowledge rather than living." I began by looking at the connection between knowledge and action, given how much we value knowledge for its practical ability to help us do things well. However, there is an even more egregious limitation to the philosopher's conceit that I need to address. Knowledge is not an individual pursuit but is, for humans, a highly social activity. Most of my knowledge isn't something I came up with myself; it has come from someone else.
This is particularly relevant to the thinking dynamic that I have argued is the core engine of human knowledge, namely that we "(i) pay attention to the world (observe), (ii) build the best theory we can that explains what we have observed, and (iii) apply the theory to the world to use it and test if the theory is true." How does this work when my knowledge, for example that Alpha Centauri is the closest star system to the Sun, has nothing to do with my own observations, theories or efforts?
My starting point will be to pay attention to how these dynamics work in practice, rather than thinking about what knowledge we should, or shouldn't, accept. This means that I will focus on the ways a person accepts information as reliable knowledge, whether it comes to them from an academic expert, a mate at the pub, their parents, an official government report or a spiritual vision. The underlying structures of how we accept something as knowledge are the same and, as always, are never fully reliable.
For want of a better term, we will refer to all knowledge of this sort as second hand knowledge, as we have got it from someone or somewhere else. This doesn’t imply that it is inferior in any way. The ability to share and use second hand knowledge is one of our human superpowers and our entire societies and economies are built on it.
Observations
A useful starting point for thinking about second hand knowledge is to consider why we accept some information as reliable and reject other information. Let me make a number of observations about this, although I am very keen to hear in the comments what I have missed.
The most obvious factor in whether we accept some things and reject others is whether we trust the source of the information. If the person telling me the information, or the place I am reading it, is someone or somewhere I trust, then I will likely accept the information as knowledge without thinking too hard. On the other hand, if I don't trust the source, I am highly unlikely to accept anything they say.
Our assessments about whether we trust different sources are very often tied to particular areas where we consider they have expertise: I will trust my dentist about teeth but probably not about car engines. So it is more a question of whether I trust a source in a particular field, rather than whether I trust them in general.
Trust in a source is not normally a binary yes/no question. Instead there is, as with credences, a spectrum. We trust some sources highly, others somewhat, and some not at all. Similarly, our decisions about what information we rely on often come down to a relative rating of trust: which source do we trust more?
The trustworthiness of a source isn't the only factor relevant to whether we accept information as knowledge. It also depends crucially on whether the new information fits what we already know, or believe.
If some new information comes from a trustworthy (enough) source and lines up with what we know, we will readily accept it. If it comes from a source we don't trust, and contradicts what we already know, then we will easily reject it. The other options are more difficult. For example, there don't seem to be universal rules about what to do if a source we trust informs us of something that contradicts what we already consider we know.1

Our credence in some piece of second hand knowledge can shift over time if our trust in the source shifts. If the source is later discredited in some way, we will typically reduce our credence in the information they have given us. Or if a source we didn't trust turns out to be more reliable, we will likely increase our credence in their information.
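These combinations of trust and fit can be caricatured as a simple decision rule. Everything below (function name, labels, the particular responses) is my own illustrative invention, not anything claimed in the post:

```python
def reaction(trust_source: bool, fits_existing: bool) -> str:
    """A rough 2x2 of how new information tends to be received.

    trust_source: do we trust the source (enough) in this field?
    fits_existing: does the claim line up with what we already believe?
    """
    if trust_source and fits_existing:
        return "accept readily"      # the easy case
    if not trust_source and not fits_existing:
        return "reject easily"       # the other easy case
    if trust_source and not fits_existing:
        # no universal rule here: re-examine the claim, or the source
        return "conflict: re-examine claim or source"
    # untrusted source, but the claim fits what we know
    return "suspend judgement or seek corroboration"

print(reaction(True, True))    # accept readily
print(reaction(True, False))   # conflict: re-examine claim or source
```

The two difficult quadrants are exactly the ones the text flags as lacking universal rules.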
This dynamic also leads to trust (or lack of trust) flowing from the information to the source. If we get information from a source and the information later turns out to be reliable, this will increase our trust in the source, or vice versa.
These observations skate over the real messiness and complexity that would arise even if we tried to map out the networks of second hand knowledge for a single person. These networks of trust shift over time and, notably, our trust in particular pieces of information can shift due to effects several steps removed from the information itself. It is not uncommon to discover that a particular fact I believed was wrong, which means I no longer trust the person who told it to me, and so I start disbelieving a further, unconnected set of things I also learnt from the same person.
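One way to see how this propagation works: discrediting a single fact lowers our trust in its source, which in turn lowers our credence in other, unrelated facts from the same source. A minimal sketch, where all names and numbers are invented for illustration:

```python
# Each claim carries its source; credence is inherited from trust in that source.
trust = {"colleague": 0.9}

claims = {
    "fact_A": "colleague",
    "fact_B": "colleague",   # unrelated to fact_A, but from the same person
}

def credence(claim: str) -> float:
    return trust[claims[claim]]

before = credence("fact_B")

# fact_A turns out to be wrong: distrust flows from the information
# back to the source...
trust["colleague"] *= 0.5

# ...and the drop propagates to the unconnected fact_B.
after = credence("fact_B")
print(before, "->", after)  # 0.9 -> 0.45
```

A real map would of course need many sources per claim and chains of sources behind sources; this only shows the single-source propagation step.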
This dynamic is not surprising - it is a crucial plot point in many detective stories - but it is not something I have seen in any philosophical accounts of knowledge. There are arguments between, say, foundationalism and coherentism about the correct logical structure of knowledge, but little acknowledgement that, for each of us as individuals, the source of a piece of information is a critical part of how we build and trust knowledge. The identity of the source is often more practically important for what we count as knowledge than the logical relationships between different pieces of knowledge.
Our epistemic engine still works
Nevertheless, the core engine by which we know things second hand appears structurally very similar to the model articulated above - just with different inputs. I would summarise this dynamic as:
(i) We get information from a source, with an initial assessment of how much we think we can trust it.
(ii) If we can, we fit this information into our existing theories and what we already know, including creating new theories where needed.2
(iii) Over time, the new knowledge gets tested and we then update the previous steps where needed. Sometimes this update focuses on the information, and sometimes on the reliability of the source.3
This is the same structure as the Observation, Build Theory, Test dynamic that I have explored previously, but it is clearly messier. The simple reason is that, once we move from direct observation into second hand knowledge, we have to handle the uncertainties of information in a world where knowledge is hard to find and we must rely on proxies for reliability (like whether we trust the person telling us).
Who matters as much as what
Taking this dynamic seriously leads to a few more practical suggestions.
As noted, our credence in information we have received second hand depends importantly on how much we trust the source, and vice versa. This means that, if you want to convince someone of something, it always helps to have a source they consider trustworthy (whether a person or an organisation) say it. And likewise, if you want to reduce someone's trust in a source as an epistemic authority, then you should try to show how that source has said something crazy or wrong.
But we often forget this fundamental pairing between information and the authority it comes from, especially when we have accepted some items of knowledge so thoroughly that they no longer need sourcing. One common example occurs when government policy is explained or justified with economic analysis. For those advancing these analyses, they are highly reliable and go without saying. For many others, however, this type of knowledge is fundamentally tied to their trust in economists as an epistemic authority - which isn't high.
Put more broadly, based on the dynamics of how humans operate, there is no such thing as a free-floating fact that justifies its own truth. Instead, given the mechanics of how people accept knowledge, any fact has been established by someone (a person or an organisation) and accepted by others as second hand knowledge - hopefully with various others testing and verifying it along the way. The reliability of a fact rests, in the end, on the chain of epistemic authority that supports it.
A second suggestion is to remember that, as the core dynamic of human knowledge moves from observation through theory (or representation) into testing, it is the testing stage that is, in the end, decisive. There are many situations in our lives where we are trying to shift someone's trust in a person or organisation as an epistemic authority. It might be convincing a relative that their new friends are lying, or that a particular expert is a fraud. While we often try to attack the credibility of the source directly, a more effective approach is to encourage the person to take the source seriously and test whether its information stacks up in practice. If it doesn't, this will undermine the source's standing as an epistemic authority.
A final suggestion is that, to adopt a set of computing metaphors, the way we 'save' knowledge in our memory has a sophisticated structure. A naive view would see our knowledge saved in a giant database that simply records all the facts, theories, etc. as individual entries. What this account suggests is that all these entries are accompanied by extra information, such as our credence in the information (i.e. how much we trust it), the source of the information and how much we trust that source. All this additional information, which we'd probably label 'meta-data', is dynamically updated as more information comes in and is an important factor in what data we take as justified within the database.4
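As a computing sketch of that metaphor (the field names and update rule are my own invention), each entry would look less like a bare row and more like a record bundled with meta-data that gets revised as new information arrives:

```python
from dataclasses import dataclass

@dataclass
class KnowledgeEntry:
    content: str          # the fact or theory itself
    source: str           # where it came from
    source_trust: float   # how much we trust that source (0..1)
    credence: float       # how much we trust this particular entry (0..1)

    def on_source_discredited(self, factor: float = 0.5) -> None:
        """Dynamically revise the meta-data when the source loses standing."""
        self.source_trust *= factor
        # credence can't comfortably sit above our trust in the source
        self.credence = min(self.credence, self.source_trust)

entry = KnowledgeEntry(
    content="Alpha Centauri is the closest star system to the Sun",
    source="astronomy teacher", source_trust=0.95, credence=0.9)
entry.on_source_discredited()
print(entry.credence)  # 0.475
```

The design point is that the meta-data is not decorative: it is what lets a later event (a source being discredited) reach back and change what we count as justified.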
This all means that my explanation of the dynamics of knowledge generation needs updating. The structure still seems valid but the details were overly simplified. That is for a future post.
1. Some readers might enjoy mapping these two dimensions - trust in the source and fit with existing knowledge - on axes and considering how people react in each of the four quadrants.
2. If we can't fit it, the normal approach is to reject the information. But occasionally it is a point that causes us to completely question our existing knowledge.
3. Given the nature of second hand knowledge, this testing is often against collated evidence and other knowledge claims from other people. The testing itself often isn't direct.
4. I'm interested in whether there are any mathematical approaches to information processing or probability that adopt a similar structure. The thought here has some structural similarities to Bayesian approaches. Please do let me know if you know of any.
re: footnote 4, absolutely, there are some ways this resembles Bayesian reasoning: if I am reasoning about how to update a coherent unified belief, incorporating many related (second-hand) statements about states of the world, I can imagine using Bayesian updating to combine them. But there are also dis-analogies with Bayesian reasoning. Second-hand knowledge bundles up many things; not just facts, but signals of allegiance or affection, *vibes*... and it does not usually come bundled with anything like a reasonable credence, but with something more like a fuzzy provenance. "My friend said that guy hates us" is concealing a surprisingly long chain of reasoning. I think we find it easier to map human reasoning onto Bayesian models in constrained domains where we can make the content of speech, and its truthfulness, more "credibility-of-factual-statements"-like, as in, for example, scientific publishing or prediction markets. Formalising it more generally is, I think, difficult. Maybe ecosystem-like metaphors are better there? We can imagine which discourse organisms flourish in which environment, and it might be many trophic levels removed from the nourishment provided by the photosynthetic energy of the light of knowledge.
Excellent.
I wonder if it is worth exploring how your thinking might translate into building broadly trusted sources of societal information, and when these should and should not be used. Perhaps we should also be thinking about situations where reasoned thinking leads to different but incompatible solutions.
On the Bayesian discussion, and you know I am not an expert, developing a model (structure) seems worthwhile as a mechanism for explaining what you mean. But I doubt using the model would be viable beyond this.