The Algorithmic Path to Knowledge?
Modern mindsets about solving problems and finding knowledge (Part 2)
In the previous post, we explored a dominant cultural mindset in our modern industrial world that presumes we can control the world. More specifically, we drew together ideas from Hartmut Rosa and Jacques Ellul to argue that the driving force of modern life is the conviction that the world is controllable by means of adopting efficient algorithmic approaches for all issues. This ‘Rosa-Ellul Thesis’ explains why so many critiques of current societies identify them as inhuman in some way: the algorithmic takes priority over the human.
The previous post did not explore the epistemic aspects of this mindset. Given that knowledge is the focus of Humble Knowledge, this post will explore the epistemic attitudes inherent in the described mindset. Most notably, it presupposes two key epistemic attitudes: a strong epistemic confidence that we can achieve widespread knowledge, since controllability requires knowability (we cannot control what we do not understand); and the conviction that algorithmic approaches are sufficient to provide us with knowledge.
While the analysis in the previous post provides some evidence for these attitudes, they are independent claims that need to be justified. This post will cover that evidence, although more as a sketch than a detailed argument. It is up to readers to connect it with their own experiences, and counterarguments are welcome in the comments. A third post in this series on algorithmic approaches will tie it back to our big theme of epistemic humility and ask whether algorithms can provide the epistemic confidence we desire.
Before the main argument of this post, a few clarifying notes on the core claims and methods of this analysis are in order. Readers who wish to can safely skip this section and get straight to the main argument.
Some notes on the core thesis
To understand the Rosa-Ellul Thesis, it is important to note that the driving force, or totalising mindset (à la Ellul), should be interpreted as a starting point or assumption in how we - in this cultural mindset - approach the world. We haven’t thought carefully and analysed the situation to conclude the world is controllable. Instead, we start with the presumption that we can and go from there.
While it is clear in very many cases that we haven’t achieved control, the presumption is that the world can be controlled and that we should be able to do it - if not now, then at some point well within reach. This mindset is especially evident whenever something serious goes wrong, such as the war in Ukraine, a natural disaster, or a pandemic. There is invariably a public outpouring of anger and fury: this type of thing shouldn’t happen, and someone should fix it now.
To be clear, the claim here is not that this mindset is universally held. The focus of analysis is on the dominant views that drive influential cultural debates. As a matter of numbers, it may be the case that these views are only held by a minority of the population. Yet these are the mindsets that are on display in the media, politics, public academic discussions and public social media like Twitter. To draw on a concept from political science, we are focused on the mindsets that determine the Overton Window - the range of ideas and options that are currently acceptable.
Within the group of people who hold this mindset, there is also a range of attitudes to the level of controllability that can be achieved and when. Some clearly believe that the world is already controllable today, if only we could convince people to apply the right techniques. Others see control as something we are working towards, with a range of implicit views about when we might achieve sufficient control. There are also different assumptions about the level of control that can be achieved by different actors or groups. Individuals, for example, often look to governments to control things that they cannot control themselves.
Nevertheless, the common thread in these views is that the world can be controlled by humans and that some form of algorithmic method is the approach that will achieve this.
Our modern epistemic confidence
It is unsurprising that our modern culture might possess a great epistemic confidence. As can be seen in the partial history of Western philosophy already published, the default in human thought is to presume we do know, with confidence or certainty, what we think we know. This human default has been strengthened psychologically and philosophically by the converging trends outlined in the discussion of misinformation. The social psychology of digital technology reinforces our sense that the truth is both easy to find and certain. At the same time, our major philosophical traditions - the analytic approach that relies on science and the Continental school that tends towards a form of relativism - reinforce confidence in our knowledge.
Our cultural epistemic confidence can be seen in common exhortations like ‘trust the science’ or ‘believe the experts’. The implicit conviction in these is that the answers are definitely known - by us or by someone - and we just need to take them on board. In other words, they express a profound confidence in the knowledge that we humans have.
While these exhortations are not universally accepted, and many are skeptical about the motives of those using them, the important point is that they have significant cultural force. They tend to land with the audience and, where there is disagreement, as often as not it is about which experts or which science to trust. The very terms of disagreement presume significant confidence in our knowledge.
Again, the Covid pandemic provides a good case study of this dynamic. Despite the state of our actual knowledge being uncertain and rapidly changing, public figures rarely admitted that uncertainty. Moreover, reactions to those who did tended to be very negative. Because our cultural mindset assumes epistemic confidence, admitting uncertainty or limits to our knowledge was experienced as an affront to that mindset.
Algorithmic knowledge
The epistemic confidence of our modern world is likely familiar and not hugely surprising. Less obvious is our conviction that algorithmic approaches are the method by which we can find knowledge. As a reminder, we are using the term ‘algorithm’ in its traditional sense: a step-by-step procedure guaranteed to find the solution to a problem. The modern use of ‘algorithm’, which is primarily about software and computers, is a subset of this conception.
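To make the traditional sense concrete, Euclid’s algorithm for finding the greatest common divisor is perhaps the canonical example: a fixed, mechanical procedure that is guaranteed to terminate with the right answer for any pair of positive integers. A minimal Python sketch:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite, mechanical procedure guaranteed
    to return the greatest common divisor of two positive integers."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6 - the same definite answer, every time
```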
As algorithms are procedures guaranteed to find solutions, an algorithmic approach to knowledge will necessarily adopt processes or methods that give definite, repeatable answers. To get such definite outputs, algorithmic approaches therefore need determinate and unambiguous inputs - which means they have to rely on things like facts and data. Insights or experience, for example, are not sufficiently determinate and unambiguous to be safely included in algorithmic approaches.
Our modern cultural mindset both prefers data and presumes that the facts are unambiguous. Where claims of knowledge are made, the responding demand is typically for the data to support that claim. Moreover, the inherent belief - on many sides of different debates - is that once we have the data and the facts, the answers will be clear.
Our methods for understanding data are also typically algorithmic. For example, the majority of social and medical statistical research relies on a test of statistical significance that treats anything with a ‘p value’ of less than 0.05 as meaningful, and therefore presumes the observed relationship to be true. Despite voluminous criticism of this approach from statisticians, it remains widespread practice because it provides a fairly straightforward algorithmic approach to knowledge: you get a definite answer out of a definite process.
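As an illustration of how mechanical this recipe is (the measurements below are invented, and the scipy library is assumed to be available):

```python
# A sketch of the conventional significance-testing recipe described above.
# The data are invented for illustration only.
from scipy import stats

treatment = [5.1, 6.3, 5.8, 7.0, 6.1, 5.9]
control = [4.8, 5.2, 4.9, 5.5, 5.0, 5.3]

t_stat, p_value = stats.ttest_ind(treatment, control)

# The algorithmic step: a fixed threshold turns a continuous measure
# of evidence into a binary verdict.
if p_value < 0.05:
    print(f"p = {p_value:.3f}: effect declared significant")
else:
    print(f"p = {p_value:.3f}: effect declared not significant")
```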
Data analysis and models are increasingly central to many areas of research. Data models are, by definition, algorithmic: you identify the 5 or 15 or 50 key data points, run them through the model, and get a determinate answer. Increasingly, these models are used to revise and adjust the actual data we have, in the conviction that this will give us something truer than the raw data. Whether or not this is generally or ever correct, the fact that this type of approach is being used points to a clear preference for algorithmic approaches.
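A minimal sketch of the pattern, with invented data and assuming the scikit-learn library is available: a handful of key data points go in, and a single determinate prediction comes out.

```python
# A sketch of a data model in the sense described: feed in the key
# data points, get a determinate answer out. Data invented for illustration.
from sklearn.linear_model import LinearRegression

# Five 'key data points': (input, observed outcome) pairs.
X = [[1.0], [2.0], [3.0], [4.0], [5.0]]
y = [2.1, 3.9, 6.2, 7.8, 10.1]

model = LinearRegression().fit(X, y)

# The same inputs always yield the same output.
print(model.predict([[6.0]]))  # roughly 12
```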
More broadly, the value of research is increasingly judged on an algorithmic basis, including citation and impact metrics. Importantly, at the cultural level, the process of peer review is taken to be an algorithm that guarantees reliable outputs. If something isn’t peer reviewed, it is viewed as inherently untrustworthy - despite many, many critiques of the effectiveness of peer review as a process. Culturally, we need a process that gives a determinate verdict on the quality of research. That is, we rely on algorithmic approaches to knowledge.
This approach can also be seen across many more mundane areas of life. If you don’t know who to vote for, or which insurance policy or gadget to buy, you can easily find websites that promise to solve the problem for you: simply answer a set of questions and the website provides an answer. People have gone to the effort of building algorithms and coding websites or apps to answer many of life’s important and trivial questions.
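The pattern behind such sites is straightforward. As a hypothetical sketch (the products, criteria and weights below are all invented for illustration), the user’s answers become weights, each option is scored, and the highest score wins:

```python
# A hypothetical sketch of the question-and-answer recommender pattern:
# fixed criteria and weights in, one determinate 'best' option out.
products = {
    "Gadget A": {"price": 0.9, "battery": 0.6, "camera": 0.8},
    "Gadget B": {"price": 0.7, "battery": 0.9, "camera": 0.5},
    "Gadget C": {"price": 0.5, "battery": 0.8, "camera": 0.9},
}

# The user's answers to the site's questions, encoded as weights.
user_weights = {"price": 0.5, "battery": 0.3, "camera": 0.2}

def recommend(options, weights):
    """Score each option as a weighted sum of its criteria and
    return the single highest-scoring one."""
    scores = {
        name: sum(weights[c] * vals[c] for c in weights)
        for name, vals in options.items()
    }
    return max(scores, key=scores.get)

print(recommend(products, user_weights))  # a definite answer: "Gadget A"
```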
And, of course, there are many ongoing projects to try to use digital algorithms to decide on truth, fake news or misinformation.
The list of examples could continue for a long time, but it is worth highlighting one more from a field that one would expect to oppose algorithmic approaches: the range of currently popular ideas following on from critical theory and post-colonialism. Central to these theories is a strident critique of the ways dominant cultures adopt methods of determining knowledge that marginalise and damage others. This sounds instinctively anti-algorithmic. However, the culturally dominant set of ideas today relies on concepts of intersectionality and constructs hierarchies of privilege that operate on an inherently algorithmic basis: a definite hierarchy provides a neat mathematical procedure for deciding worthiness. The cultural power of the algorithmic pervades even areas that one would think should reject algorithms.
We have only sketched the way that an algorithmic approach to knowledge is built into our underlying cultural assumptions. This approach is expressed as the presumption that we need to follow the right process or method (i.e. the right algorithm) to find the truth or solve the relevant problem. Human skill, ability or insight, to take one alternative, is not preferred. This typically translates into a strong preference for data and facts, as these can be integrated into processes.
This cultural assumption implies a belief that there is a single right answer to every significant question, holding across all contexts. That is, once we have followed what we think is the right process, we are confident that what we know is true. We ground our cultural epistemic confidence in processes or algorithms, even as we continually work to improve those algorithms.
This allows us to state an epistemic version of the Rosa-Ellul Thesis articulated above: a driving assumption of modern life is the conviction that the world is knowable by means of applying efficient algorithmic approaches to all questions.
Epistemic Confidence or Humility?
Hopefully, readers will recognise our cultural epistemic confidence and our preference for algorithmic approaches from the sketches provided here. Some may think these are obviously correct and can’t understand how things could be different. Others may see these assumptions as preposterous and an articulation of where the modern world has gone wrong.
Whichever camp you are in, confidence in an algorithmic approach to knowledge sits in tension with the arguments made here at Humble Knowledge for epistemic humility: that knowledge is hard and we typically can't be sure we've got it right. We can happily accept that algorithmic approaches are useful and help us uncover knowledge. The decisive question, however, is whether algorithmic approaches can give us definitive answers to our questions that are reliable as knowledge, or whether there are inherent limitations to what they can achieve. That is the topic for another post.
Thanks Ryan. I also found this very interesting. I’m looking forward to your foreshadowed future post on whether algorithmic approaches can give us definitive answers to the questions we have that are as reliable as knowledge. I’d particularly like to understand what you mean by knowledge in that context, as it seems to me that algorithmic approaches do provide us with some (perhaps the best) form of knowledge that we can use to effect beneficial outcomes for ourselves and others.

For example, while reading this I couldn’t help wondering what’s wrong with getting an answer from a website about the best insurance policy or gadget to buy if you have a clear set of criteria by which you want to make your decision (price, features, availability etc). In those cases the algorithm is guaranteed to find the solution to your problem by identifying the cheapest and best featured (for you) available policy or gadget. What would be a better method for deciding? (In asking this, I’m outing myself as one of those readers you referred to in your previous post as finding algorithmic approaches to solving problems intuitive and natural.) More specifically, how, in practice, would “human skill, ability or insight” be better applied in such cases given the inherent subjectivity - and often unreliability - of such attributes?

Even the election question could in fact be usefully solved for you by an algorithm if you happened to have simple criteria by which you wanted to make your decision (e.g. which party has the policy that would increase my after-tax income). Algorithms can’t help in decision making if you’re unable to articulate a set of criteria by which you intend to make the decision, but I suspect there is some kind of checklist process going on in our brains for most complex decisions we make, even though we might not admit to it (for sound evolutionary reasons we automatically look for certain criteria in, say, prospective mates even if we can’t articulate them).

In deciding whether to marry someone, for example, maybe an effective algorithmic approach is often applied sub-consciously along the lines of answering just a few simple questions like: “Do I want to get married now? Do I want to marry this particular person [which might be broken down into other questions relating to feelings of love, trust, respect, etc.]? Do they want to marry me? Are there any reasons not to marry them [again, this could be broken down into things like current marital status, criminal history, parental disapproval, etc.]?” I wonder what a better way to make the decision would be. I suppose you could toss a coin, or consult an astrologer, or ask someone else to make the decision for you, but most of us don’t make significant decisions in that way.
Also, I didn’t understand how there can be “a dominant cultural mindset … that presumes we can control the world” if, as you suggest, it may well be a mindset held by only a minority of the population. In any case, I’m not convinced that there is a dominant cultural mindset that we can control the world, and I think most people (operating well within the Overton Window) would agree that the unpredictability and potential influence of human agents (presidents of Russia, say) inevitably rule out orderly control of human affairs, and that controlling complex systems like weather and climate is now and always will be just a pipe dream (my understanding is that chaos theory would preclude this in any case).

However, I wouldn’t be surprised if most people believe that we (i.e. humanity) have a fair amount of control. Indeed, history demonstrates that we have been able to exert a great deal of influence (albeit not absolute control) over things that we care about that can affect our experience of the world (having enough food to eat, improving our health, minimising non-consensual encounters with sharp-toothed animals, etc). And we have got there often because we have rightly trusted the science and wisely listened to the experts applying the right techniques. I expect that as our technology and knowledge improve, we will have more and more influence over the world and our experience in it, while always falling short of absolute control.

Personally, I wasn’t angry or furious about the coronavirus, nor was I particularly surprised by the outbreak of a pandemic, but I was reasonably confident that (as in fact happened - and surprisingly quickly) science would employ data and facts to find a way to significantly minimise its harm (while not gaining complete control). Is that what you mean by possessing epistemic confidence?
Fascinating as always.
Is there merit in exploring the difference between gaining and having knowledge and using knowledge? Even if you believe the world is knowable in theory, few would argue that it is fully known today. This leads me to wonder whether there is a kind of certainty/uncertainty boundary or frontier that individuals and society are constantly navigating, which necessarily requires responses to point-in-time unknowability.
I wonder also whether the above becomes important as we consider the role of algorithmic approaches. One use of algorithms is to predictably manage the certainty/uncertainty boundary by creating decision-making rules (heuristics). In essence, these rules are designed for the many (the average) but are often applied to the one (the individual). Mistakes are inevitable - both systematic (bias, defined broadly) and random. The alternative, however - individualised decision-making based on experience and judgement - has the same problem (see Kahneman et al., Noise).
Not sure where any of this leaves us of course.