8 Comments

Great text with several good points.

I am curious to know your thoughts on the language that is used when we talk about “AI”.

(I haven’t read your recent posts, so forgive me if you have already addressed this issue.)

It bothers me that we (society) have normalized the use of the word “intelligence” when referring to this technology. (Something still on my to-do list is investigating the origin of the expression “artificial intelligence” and the interests involved in its creation.)

When one takes a look under the hood, “AI” is merely an application of linear algebra: each “AI” tool involves thousands of equations and thousands of variables; each output is merely a possible solution to this set of equations, an n-dimensional vector in a vector space, which is then ‘translated’ into a string of data that a human is able to recognize and attribute meaning to. If we have reason to call this process “intelligence”, then we should consider calling a regular calculator intelligent as well, especially if it calculates square roots.
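To make that concrete, here is a toy sketch (purely illustrative, not any real model: the vocabulary, the weights, and the next_token function are all made up for the example) of how a ‘language model’ step reduces to matrix multiplications, with the resulting vector only ‘translated’ into a word at the very end:

```python
# Toy illustration only: a "language model" step reduced to linear algebra.
# The "answer" is just an n-dimensional vector mapped back onto tokens.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["the", "cat", "sat", "on", "mat"]     # made-up 5-token vocabulary
d = 8                                          # arbitrary hidden dimension

# Random weights stand in for the thousands (or billions) of learned parameters.
embeddings = rng.normal(size=(len(vocab), d))  # token -> vector
W = rng.normal(size=(d, d))                    # one linear "layer"
unembed = rng.normal(size=(d, len(vocab)))     # vector -> a score per token

def next_token(prompt_tokens):
    # Represent the prompt as the mean of its token vectors (a crude stand-in
    # for attention), push it through the linear map, then score every token.
    x = embeddings[[vocab.index(t) for t in prompt_tokens]].mean(axis=0)
    h = x @ W                                  # the n-dimensional 'solution'
    scores = h @ unembed                       # one number per vocabulary entry
    return vocab[int(np.argmax(scores))]       # 'translate' the vector to a word

print(next_token(["the", "cat"]))              # a token; the meaning is ours to add
```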

We have a tendency to use words that denote human qualities and behaviors when referring to inanimate objects, and this is the case with “AI” tools: we say the tool is intelligent, we say we train the model with data, we say we ask the tool a question, we say the tool replied or created something.

Since I think language shapes our very thoughts, in my view we need to escape these anthropomorphic terms if we want to escape the “AI snake oil”, which is mostly driven by private interests. The real danger I perceive in “AI” is the one that comes with technocratic interests…

This is a very interesting explanation of how GPT models work: https://youtu.be/wjZofJX0v4M

author

Hi Paulo. I agree that language shapes our thoughts, and one good place to start would be for us to stop using computing metaphors to explain the human mind. Most people these days tend to think of the brain as an information processor or CPU that is running the body.

I'm happy to make a distinction between something that has intelligence (can accurately do analysis or process information) and something that is sentient (has a will, can make decisions, is conscious, etc) as these are different things. The problem is when we think they always have to go together - which means we ascribe sentient behaviour to sophisticated linear algebra.

I believe that the term 'artificial intelligence' was coined in the mid-1950s, and you'll want to look up the history of 'cybernetics' (which dates to the late 1940s) to understand where it came from. There is a famous conference/workshop in 1956 at Dartmouth that set the tone for everything that followed.


Hi Ryan, thanks for your suggestion, I’ll definitely have a look. Now that you mention it, the way we use language becomes even more ironic: we use human terms to describe ‘machines’ and ‘machine terms’ to describe human features. I like the distinction you described, and now I am worried that sometimes those aspects do not go together even in human beings…


Great piece, Ryan.

Some great quotable quotes in this one for me.

Can I clarify, is the difference you're getting at here fundamentally about the largely unconscious human ability to perform contextual reasoning? i.e. hoovering up masses of information, most of it with no involvement from the conscious mind, processing instantaneously, and then generating a kind of "gut feel" or fingerspitzengefühl (https://en.wikipedia.org/wiki/Fingerspitzengef%C3%BChl)? Or is it something more/different from that?

As an aside, maybe this is semantics, but... although I get the "alien intelligence" metaphor, I've become super wary of it. I feel like there's a kind of danger here, primarily because it ascribes a kind of agency/sentience/living-ness. I know that's not what you intend here (you even footnoted an explicit exclusion of the consciousness debate). But I wonder if we can get creative here. A collaborator and I have been playing with the metaphor of something neither living nor exactly non-living, and the idea of the virome came to mind: something that can bring both benefit and harm to human systems, and can both enable and constrain in evolutionary terms. What do you think?

author

Thanks. 'Gut feel' is part of what I see as the difference but I think it is also a lot more prosaic than that. Humans read meaning from context. They are comfortable with really fuzzy and vague concepts (from 'chair', to 'love', and lots of things in between). We can get the idea even when the words are saying something very different. Etc. I see it as a fundamentally different way of reasoning and processing information.

I get the concern about using 'alien'. And while I was mostly aiming at the idea of foreign, alien, other - rather than little green Martians - it does invite confusion. I'm not sure virome is right either, as it brings in other connotations and is not familiar to lots of people. But I'd need to see it in context.

As a broader point, I think we've run into problems as we have conflated 'intelligence' both with 'sentience' and with 'information processing / analysis'. Lots of mechanical machines process information but that is a very different thing to a sentient being with a will, intentions and consciousness.

Put differently, we should stop using computing metaphors to explain the human mind because that predisposes us to thinking of computers as like humans.


I could not agree more on the issue of the conflation you describe and the need to stop using computing metaphors for the human mind. It has distorted a lot of thinking on this, which is why I'm wary of the metaphors we use. The language seems to conjure concepts that colonise our minds for years. For instance, the notion that the 'sparks' of emergent capabilities in AI somehow indicate we are on track toward the rise of conscious machines is an enormous and baseless leap in logic.


Nice thesis.

Could human thinking ultimately be understood in binary terms (zeroes and ones)? The sloppiness you describe feels right, but does it arise because humans naturally consider more endogenous variables than AI currently does? Could that be what produces the apparent randomness and uncertainty in the outcomes human thinking generates?

All that aside, the starting point issue you note remains. Humans, rightly or wrongly, ascribe a level of 'rightness' or 'truthfulness' to the data they are trained on. AI does not. In that sense, it is easily fooled. Obviously humans can be fooled as well, but not as easily. The collective of humanity is also somewhat self-calibrating, at least when its members are disposed to hear and consider the views of others within the collective.

Does this make any sense?

author

I think I'd agree that humans naturally consider more variables. Part of that, I think, is that we are always absorbing many different types of inputs - what we see/hear/touch, information/data from different types of sources that we assess differently, explanatory theories, etc. We naturally cross-verify and compare, so we get a range of different perspectives on truthfulness. AI doesn't have this richness of input.

And I like that way of explaining things. Our sense of truth is grounded in how we were trained, which introduces a useful 'stickiness' to what we think. I'd add that the collective is self-calibrating because someone, somewhere is always testing our ideas against reality in some way, and so we learn (sometimes!) from that.
