Public and regulatory debates about the future of AI are often dominated by the split between the “accelerationists” and the “doomers”. That is, between those, like the CEO of OpenAI, Sam Altman, who see the rapid development of AI as vital to the future of humanity, and those, like the former board of OpenAI, who see it as an existential threat to humans. However, on a different and more foundational debate, the accelerationists and doomers are on the same side. Both are maximalists about the potential of AI and see it surpassing human intelligence at some point in the near future. In contrast, there is a range of voices arguing that AI is not all it is made out to be, and is not in fact even intelligent. Descriptions of generative AI systems as 'stochastic parrots', or as producers of 'derivative sludge', are among the more colourful articulations of this argument.
Those who read my two posts on AI last year (and if you haven't, feel free to do so now!) won't be surprised that I fall more into the second camp. Most simply (and others are making similar arguments), modern AI has clear limits in its ability to identify what is true or real. All that AI systems can access is data about the real world and, without us training them, they have no way of distinguishing good from bad (or true from false) data. AI might be able to synthesise an incredible amount of information and generate ideas or insights we wouldn't have come up with, but it cannot independently go out into the world and test whether they are true or not.
While important, this isn't the only fundamental structural or philosophical difference between AI systems and humans. My writing this year on the nature of concepts, and particularly the lessons from the Sorites Paradox, illustrates another significant difference. Foundationally, AI systems work in very different ways to human intelligence, and so they are not comparable or the same sort of thing. In this sense, AI is an alien intelligence for us humans.
A divergence on concepts
In my post on the Sorites Paradox, I described two different ways of thinking about concepts. One is a more mathematical or formal approach, in which "a genuine concept is one that is precisely defined and fully determined, so that for every relevant situation it will be clear and decidable whether the concept holds or not." This approach is how we often prefer to work - especially in scientific situations. Importantly, it is how computers, programming and therefore AI systems work. At their core, computers are built on a strict binary - everything is reduced to 0s and 1s and to a Boolean logic in which every statement is either true or false.
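To make that concrete, here is a minimal sketch in Python (the threshold is invented purely for illustration) of what a 'precisely defined and fully determined' concept looks like once it has to live inside a program:

```python
def is_heap(grains: int) -> bool:
    # A made-up threshold, used only to illustrate the point: in code, a
    # concept must be decidable for every input, so somewhere a sharp line
    # has to be drawn.
    return grains >= 10_000

print(is_heap(10_000))  # True
print(is_heap(9_999))   # False - one grain fewer and the answer flips
```

Whatever threshold you pick, removing a single grain flips the answer cleanly from true to false - exactly the kind of sharp boundary that the Sorites Paradox suggests our everyday concepts don't have.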
By contrast, I argued that most human concepts are "more like sketches of reality, not precise and accurate descriptions." They pick out a few features of reality which help us know some useful things about the concept. They often involve functional or relative factors - and are not universal and absolute definitions as preferred by mathematics, many sciences and programming. Modern AI does not function like this, and - in my view - almost certainly cannot due to the underlying computational architecture.
An example from this same post might help explain the difference. In it, I noted that the differences between our concepts of a chair and a stool are fuzzy and multi-faceted. Some stools have something of a backrest, but if it is a backrest that you can actually use then we'd consider it to be a chair. This means that something might be a chair for a little kid but a stool for an adult. If shown something in the fuzzy zone and asked whether it is a chair, a natural human response is to say 'maybe' or 'it depends on who is sitting on it'.
Now consider how a modern AI system differentiates between a stool and a chair. For a start, it needs to be given thousands of pictures labelled as stools or chairs so that it can learn the difference. This training method implicitly assumes that there is a sharp distinction that holds universally, and it doesn't allow for changes in context. Nor can the system be trained on functional definitions, like 'a chair is something people sit on'.
The underlying premise of this training, which is necessary because of the way computers function, is that there is a sharp, universal distinction between a chair and a stool. This doesn't mean that AI is incapable of dealing with fuzziness, but it handles it in a very precise, non-human way. An AI system can analyse an image and identify it as a 78% match to 'chair', allowing for fuzziness. But this is a precise, mathematical fuzziness that doesn't take into account the genuine real-world ambiguities that humans are instinctively and naturally comfortable with.
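To see what that precise, mathematical fuzziness looks like in practice, here is a minimal sketch in Python (the labels and scores are invented, not the output of any real model) of how a classifier turns raw scores into a probability for each label:

```python
import math

# Invented raw scores (logits) that a trained image classifier might assign
# to one photo of an ambiguous piece of furniture.
logits = {"chair": 2.75, "stool": 1.2, "bench": 0.1}

# Softmax turns the scores into a probability distribution: every label gets
# an exact number, and the numbers sum to one.
total = sum(math.exp(score) for score in logits.values())
probabilities = {label: math.exp(score) / total for label, score in logits.items()}

for label, p in sorted(probabilities.items(), key=lambda item: -item[1]):
    print(f"{label}: {p:.0%}")
# Prints something like: chair: 78%, stool: 17%, bench: 6%
```

The uncertainty here is real, but it is expressed as exact numbers over fixed categories. Nowhere in the output is there room for 'it depends on who is sitting on it'.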
This means that the underlying method by which AI is able to learn to distinguish concepts is very different to how humans learn and use concepts. AI has to be precise, with little ambiguity, and requires exact definitions. Humans are contextual, relational and often border on being sloppy. We don’t learn like an AI system does and it can’t learn like we do. Its functioning is alien to us.
This distinction closely matches significant differences between what humans and AI can do well. Computers and AI are far better than humans at games like chess and go - situations where everything is precisely defined and there is no fuzziness or ambiguity. A piece is either black or white, sits on a defined position on the board, and there are strict rules governing everything. However, language translation and image recognition are much, much harder for AI as they inherently involve partial concepts, ambiguity and fuzziness. This is one reason why an AI system needs thousands and thousands of pictures of an object to reliably identify it, while humans often only need one or two examples.
So one of the fundamental building blocks of thinking and analysis - concepts - functions very differently for AI when compared to humans. It follows that artificial intelligence is necessarily a very different type of thing to human intelligence. On many definitions, AI is intelligent but it is not the same sort of thing as human intelligence - it is something else, or alien.
The alien intelligence among us
If my argument here is valid, then it should change how we think about AI. For a start, a lot of effort has gone into the question of when AI will be smarter than humans.1 But if we are dealing with two very different sorts of intelligence, a general question like this doesn’t make sense. It is like asking whether apples or oranges are the better fruit. It all depends on purpose and context.
If you are dealing with a system with precise concepts, precise rules and no ambiguity - AI is already often more intelligent. Human chess players now learn from AI chess programs. In other situations, humans have a natural edge (and I believe always will) - unless we come up with a different way of building AI. Most human contexts and situations - relationships, power, systems - involve fundamental fuzziness and ambiguity. There will be limits to what AI can do in those contexts.
Against this, it could be argued that generative AI displays sophisticated abilities at nuance and reading context that are starting to match humans. Whatever you think of the level of current AI, none of this means that we are creating the same type of intelligence, and the way we build generative AI makes clear how different these systems are.
Modern generative AI systems need to be trained on vast amounts of data, using models containing billions or trillions of parameters, running on server farms that draw as much energy as a town or small city, with constant human reinforcement and feedback to ensure they develop in the right way. After all of this, they are now starting to learn how to do things that human children can do without trying. We are not dealing with the same sort of thing. They are not different types of the same intelligence.
Understanding that AI is, for humans, an alien intelligence - a different sort of thing - will help us understand when and how we should use AI systems. A useful general principle is that the more precise, unambiguous and self-contained the system we are dealing with, the more likely AI is to outperform humans. Thus, for example, it is not surprising that AI air force pilots appear to be becoming better than humans at aerial dogfights. The more ambiguous, complex, uncertain and fuzzy a situation is, the less help AI will be. Relying on AI for relationship advice, for example, will always be risky. It might provide some good ideas but will be incapable of the nuance and contextual sensitivity often required.
Seeing AI as an alien intelligence also helps us, as humans, understand what is special about human intelligence. We used to think that advanced abstract thinking skills, like playing chess or go, were the peak of intelligence and incredibly sophisticated. It turns out that they are just incredibly difficult for us, with our limitations, not objectively difficult. On the other hand, identifying objects by sight or making sense of ambiguous information in context has always seemed quite simple and easy. It turns out that things like these are our human intelligence superpowers.
1. For this discussion, I’m leaving aside all questions about consciousness, will, and other related aspects of human experience that influence our understanding of mental and thinking ability.
Great text with several good points.
I am curious to know your thoughts on the language that is used when we talk about “AI”.
(I haven’t read your recent posts, so forgive me if you have already addressed this issue.)
It bothers me that we (society) have normalized the use of the word “intelligence” when referring to this technology. (Something still on my to-do list is investigating the origin of, and the interests involved in, the creation of the expression “artificial intelligence”.)
When one takes a look under the hood, “AI” is merely an application of linear algebra: each “AI” tool involves the use of thousands of equations and thousands of variables; each output is merely a possible solution to this set of equations, an n-dimensional vector in a vector space, which is then ‘translated’ into a string of data that a human is able to recognize and attribute meaning to. If we have reason to call this process “intelligence”, then we should consider calling a regular calculator intelligent as well — especially if it calculates square roots.
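(To illustrate with a toy sketch - invented numbers, vastly smaller than any real model, but the same basic operation: a single 'layer' is just a matrix-vector multiplication followed by a simple non-linear function.)

```python
import numpy as np

x = np.array([0.2, -1.0, 0.5])        # input vector
W = np.array([[0.1, 0.4, -0.3],
              [0.8, -0.2, 0.5]])      # weight matrix (the 'parameters')
b = np.array([0.05, -0.1])            # bias vector

# One layer: linear algebra (W @ x + b), then a simple non-linearity (ReLU).
output = np.maximum(0, W @ x + b)
print(output)                         # another vector, ready for the next layer
```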
We have a tendency to use words that denote human qualities and behaviors when referring to inanimate things, and this is so with “AI” tools: we say it is intelligent, we say we train the model with data, we say we ask the tool a question, we say the tool replied or created stuff.
Since I think language shapes our very thoughts, in my view we need to escape these anthropomorphic terms if we want to escape the “AI snake oil” — which is mostly driven by private interests. The real danger I perceive in “AI” is that which comes with technocrats’ interests…
This is a very interesting explanation of how GPT models work: https://youtu.be/wjZofJX0v4M
Great piece, Ryan.
Some great quotable quotes in this one for me.
Can I clarify, is the difference you're getting at here fundamentally about the largely unconscious human ability to perform contextual reasoning? i.e. hoovering up masses of information, most of it with no involvement from the conscious mind, processing instantaneously, and then generating a kind of "gut feel" or fingerspitzengefühl (https://en.wikipedia.org/wiki/Fingerspitzengef%C3%BChl)? Or is it something more/different from that?
An aside, maybe this is semantics but... although I get the "alien intelligence" metaphor, I've become super wary of it. I feel like there's a kind of danger here, primarily because it ascribes a kind of agency/sentience/living-ness. I know that's not what you intend here (you even footnoted an explicit exclusion of the consciousness debate). But I wonder if we can get creative here? A collaborator and I have been playing with the metaphor of something neither living nor exactly non-living, and the idea of the virome came to mind, something that can bring both benefit and harm to human systems, can both enable and constrain in evolutionary terms. What do you think?