Algorithms versus the True
Tracing limits to our modern mindsets about solving problems and finding knowledge (Part 3)
In the previous two posts, we explored the way that technique and algorithmic approaches are central to modern ways of thinking about both what we can do or control and how we know things. This mindset was encapsulated in the ‘Rosa-Ellul Thesis’ that the driving force of modern life is the conviction that the world is knowable/controllable by means of applying or adopting efficient algorithmic approaches for all questions and issues.
As we look at modern societies, this thesis is evident in our confidence that we can control the world and in our epistemic confidence in the knowability of things. One important question, however, is whether either part of this confidence is justified.
Since a belief in controllability requires a belief in knowability, we will focus only on the second: epistemic confidence. And we will follow the Rosa-Ellul Thesis by narrowing in on confidence gained by adopting algorithmic approaches. If this confidence isn’t justified, then our confidence in the controllability of the world will also be shaky. In turn, that would mean the driving force of modern life is heading in the wrong direction.
The core question is therefore whether algorithmic approaches, or what Jacques Ellul refers to as technique, are sufficient to provide us with the knowledge we expect or need. Can algorithms - sets of procedures guaranteed to find the solution to a problem, or finite sets of unambiguous instructions that can be performed in a prescribed sequence to achieve a certain goal - give us confident knowledge? Or are there inherent limits to what they can achieve?
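To make that definition concrete, here is a classic textbook example of a true algorithm, Euclid’s procedure for the greatest common divisor, sketched in Python. Every step is fully prescribed, no judgement is required, and termination with the correct answer is guaranteed:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite set of unambiguous instructions,
    performed in a prescribed sequence, guaranteed to reach its goal."""
    while b != 0:
        a, b = b, a % b  # each step is fully determined; no judgement needed
    return a

print(gcd(48, 18))  # 6
```

This is the kind of procedure the Rosa-Ellul Thesis imagines scaling up to every question and issue; the argument below is about whether that scaling is possible.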
We will argue that there are inherent limits to algorithmic approaches, which means that they do not provide us with a solid basis for the epistemic confidence we have been examining. Other ways of knowing may, or may not, be better for providing us with knowledge, but algorithms cannot justify our cultural confidence in our knowledge.
As with the previous posts, the arguments will be sketched rather than demonstrated rigorously, and readers are asked to judge whether they ring true with their own experience. This is partly driven by constraints of time and space, but there is also a deeper philosophical point.
It is unlikely that a proof, or rigorous demonstration, that algorithms cannot give us definite knowledge is possible. For one, it would share the problems inherent in trying to prove any negative: some approach or piece of information (or a new algorithm) that we currently know nothing about may turn out to disprove it. More importantly, any definitive proof would have to rely on a definite process that gives us a determinate answer. That is, a conclusive proof that algorithms cannot give us definite truth would most likely itself have to depend on an algorithm.
Limits to algorithmic knowledge
To begin looking at limits to algorithmic approaches, we will follow a common internet practice and start with a few quotes from Albert Einstein:
Imagination is more important than knowledge. Knowledge is limited. Imagination encircles the world.
I think that only daring speculation can lead us further and not accumulation of facts.
To raise new questions, new possibilities, to regard old problems from a new angle requires creative imagination and makes real advances in science.1
These are genuine quotes, and they make it clear that one of the greatest scientists of the twentieth century did not believe that algorithmic approaches were sufficient, or even especially useful, for science and the discovery of knowledge.
Algorithmic approaches also have inherent limitations in many ordinary, simple-sounding problems, especially when the number of factors starts increasing. For example, there is a long-standing mathematical problem of devising an algorithm for dividing a cake (or anything) fairly amongst an arbitrary number of people in such a way that everyone is guaranteed to be happy with their share. In 2016, there was a major breakthrough when a team finally produced an algorithm that works for a group of any size. There is a downside, though: ‘Even for just a handful of players, th[e] number [of steps] is greater than the number of atoms in the universe.’ If we cannot solve a simple, everyday problem like this with a practical algorithm, then algorithmic approaches to knowledge have significant limits.
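For just two people, fair division does have a perfectly practical algorithm: the age-old ‘I cut, you choose’ procedure, which guarantees neither person envies the other. A minimal sketch in Python (the piece names and valuations are invented purely for illustration):

```python
def cut_and_choose(b_values):
    """Two-person fair division ('I cut, you choose').

    Player A is assumed to have already cut the cake into two pieces
    she values equally, so either piece is acceptable to her. Player B
    then takes whichever piece he values more, so neither player would
    prefer to swap. b_values: B's valuation of each piece, summing to 1.
    """
    b_piece = max(b_values, key=b_values.get)            # B picks his favourite
    a_piece = next(p for p in b_values if p != b_piece)  # A keeps the other
    return {"A": a_piece, "B": b_piece}

# B values the frosted half more, so B takes it; A is content either way.
print(cut_and_choose({"frosted half": 0.7, "plain half": 0.3}))
```

The point of the 2016 result is that extending this guarantee beyond a handful of people makes the number of steps explode beyond any practical bound, which is exactly the kind of inherent limit at issue here.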
There is a range of fields of study - from chaos theory in mathematics, to complex systems across many disciplines, to cross-cultural issues in anthropology and sociology - that also pose questions for, or place limits on, algorithmic approaches. Readers can look into these if they are interested; here we will instead provide a more personal example that readers will be familiar with.
We will examine whether ordinary internet research - finding knowledge online - can be comprehensively tackled algorithmically. To do this, we want readers to think about how they would go about researching a question they don’t know the answer to, one that isn’t a clear question of fact that will be definitively answered by Wikipedia or the top link on a search engine. It could be deciding on a watch or phone or gadget to buy, investigating the flora or fauna of a region, a question about Roman or Chinese or Mayan history, or just about anything else.
Is it possible to come up, in advance of the research, with an algorithm - that is, a finite set of unambiguous instructions performed in a prescribed sequence - that will reliably answer the question without anything having to be added along the way? Importantly, for it to be a true algorithm, the instructions must leave no room for subjective judgement, feelings about the reliability of sources, or instincts about what to look at or trust.
Readers might like to try writing out an algorithm and then seeing what they can find out based purely on their algorithm. The procedure embedded in an algorithm may well help get to an answer, but my experience is that it is never enough by itself. We always need some elements of creativity, linking disparate insights and human judgement to get to the bottom of various questions, even when all the information we need is easily available online.
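As a sketch of what such a rigid procedure might look like, here is a deliberately crude ‘research algorithm’ in Python. The stub search engine and its contents are invented purely for illustration; the point is the shape of the procedure, not the details:

```python
def rigid_research(question, search, top_n=3):
    """A deliberately rigid 'research algorithm': every step is fixed in
    advance, with no room for judgement about which sources to trust."""
    results = search(question)   # step 1: issue the question verbatim, once
    chosen = results[:top_n]     # step 2: keep the first N hits, whatever they are
    return " ".join(chosen)      # step 3: concatenate them as the 'answer'

# A stub search engine, invented for illustration.
fake_index = {"best watch?": ["Hit one.", "Hit two.", "Hit three.", "Hit four."]}
answer = rigid_research("best watch?", lambda q: fake_index.get(q, []))
print(answer)  # "Hit one. Hit two. Hit three."
```

What matters is what the sketch leaves out: no step evaluates whether a hit is reliable, reformulates the question in light of what was found, or connects insights across sources - precisely the judgement and creativity that, in practice, real research seems to require.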
We find similar experiences in the many situations where we try to understand and make decisions about people based purely on survey data or forms - whether in assessing people for social security, job applications, personality assessments, or matchmaking apps. Routinely the survey approach, which is inherently algorithmic, fails to capture something important and ends up producing bad decisions or misleading information.
Hopefully these few examples illustrate ways in which algorithms are limited in producing knowledge. They may be useful, but they are never fully reliable, they miss important factors and, following Einstein, they do not help us find out genuinely new things. That is, there are clear limits to algorithmic approaches to knowledge: they can neither guarantee truth nor justify the thorough-going epistemic confidence our culture presumes.
This means that the conviction at the heart of the Rosa-Ellul Thesis - that the world is knowable/controllable by means of applying/adopting efficient algorithmic approaches - is not justified. In this sense, the driving force of our modern life is pushing us in the wrong direction.
The importance of epistemic humility
This is a significant conclusion that connects back to the opening of the first post in this series, where a range of critiques of our current society identified it as inhuman in various ways. We argued that this inhuman nature stems from the reliance on technique and algorithmic approaches built into our core cultural mindsets. This post has argued that this reliance does not give us the certainty or control we expect, and so the societal problems are compounded by the ineffectiveness of the approaches we are trying to use.
We cannot improve our society and make it less inhuman by keeping the same mindsets and just finding better ways of doing things: different programs, policies, or societal structures won’t solve the problems. Instead we need to change our mindsets about how we understand the world: what we can achieve, and how we generate the knowledge on which we act.
How to go about improving society without relying solely on policies, procedures, methods and algorithms is an extensive topic, and I expect it to be an ongoing theme here at Humble Knowledge. For example, there will soon be a post on what epistemic humility might look like for each of us individually. Others are also writing about this; for one, there is an interesting recent example (in an unexpected source) in this article by Sinead Murphy on conviviality.
Nevertheless, I will venture some preliminary points about which directions might be fruitful. I see a core part of this task as taking some of the strengths of algorithmic approaches while respecting their limits in understanding our relationships - as individuals and as groups or societies - with others and with the world.
Firstly, universal solutions that apply everywhere to any problem will have increasingly limited effectiveness and, in practice, are likely to create more problems than they solve. The inherently algorithmic nature of universal solutions places limits on what they can achieve, and it seems quite possible that we are reaching those limits. Instead, local, personal and embodied approaches are likely to become increasingly essential for solving the problems we face today.
Secondly, computing and AI are by definition algorithmic and will not provide most of the transformative solutions many people are hoping for. The limits are already becoming apparent in numerous ways, as the hype around AI is once again failing to deliver, especially for applications of truly autonomous AI.
Thirdly, if our implicit epistemic confidence is unfounded, we will need to start recognising the limits to our knowledge and acknowledging that others may have more of a point than we think. It follows from this line of thought that we need to treat others with respect, tolerance and grace.
A quick personal note to finish: the scope of my intellectual project here is expanding faster than I expected. If anyone working on similar ideas is interested in getting in touch or collaborating, I would be delighted to hear from you.
1. There are many dubious quotes attributed to Einstein on the internet on these topics. These have all been taken from https://en.wikiquote.org/wiki/Albert_Einstein