2 Comments
Simon Matthews

Thanks Ryan, very interesting as always.

To try to get my head around the idea of AI having beliefs, I started by thinking about whether your statement that “humans do believe things” applies to human babies. I think it does. Newborn babies probably believe, for example, that it is better to be held than to be left alone on the floor and, if breastfeeding, that mothers are more desirable than fathers. These beliefs are accurate (and, from an evolutionary perspective, soundly based) descriptions of reality for the baby, presumably instilled through millions of years of mammalian evolution because of their tendency to promote survival. Less easy to explain are the more complex beliefs that develop as we grow: for example, that vanilla ice cream tastes better than chocolate, or that Wagner’s operas are better than Puccini’s. In these cases belief is a synonym for having a preference for something, but it’s not a misuse of language to call them beliefs and, unless I have misunderstood it, they also fit within your definition.

That got me thinking that a lot of our beliefs are simply personal preferences, or at least rooted in them. We’re rarely wholly dispassionate about the beliefs we hold on all but the most mundane matters. This could explain why some beliefs might be quite irrational even though genuinely held: I might believe in astrology because I prefer to view events as guided by the stars rather than by happenstance; or that a rich Nigerian prince really does want to send me money, because I prefer to anticipate the receipt of money than to admit that I’ve been scammed; or that my spouse really is faithful despite overwhelming evidence to the contrary, simply because I prefer to think of my marriage as a stable one. When someone says “I choose to believe” something, they’re really stating a personal preference for believing that thing over not believing it.

Obviously we also have beliefs that are grounded in something other than our personal preferences, such as belief in the laws of gravity and mathematics. It might be said that AI “believes” in scientific and mathematical laws (because it has been programmed to), but how can it be said to have beliefs about all the things we believe because of personal preference or other feelings generated by our living in the world? Presumably all babies’ beliefs are (at least initially) grounded solely in direct experience of the world, and as we grow older that direct experience still operates on us to form many of our beliefs; that is entirely lacking for AI. So there is at least a significant subset of beliefs that humans naturally hold that AI can’t, and we’d probably do better to find a different term for the types of belief AI might conceivably have: perhaps axiomatic truths and scientific facts deliberately included in its dataset.

On propositional attitudes, I may have misunderstood the definition but I don’t think I agree with your statement that “if you think that the human mind is simply the mechanical firing of neurons in the brain, then consciousness and propositional attitudes have no basis in reality and there is no reason why AI systems are not like humans”. Suppose I’m walking in the jungle and my companion suddenly grabs my arm and whispers to me the proposition that there is a tiger stalking us. If, prompted by my senses, the neurons in my brain fire in such a way that I think/feel/fear that there is in fact a tiger sneaking up on us — and then it pounces — clearly there was a basis in reality for my propositional attitude. I can’t imagine how a disembodied AI could have an equivalent thought/feeling/fear.

Thus I think I can agree that ascribing beliefs to AI systems is a category error even if I don’t share your view that the human mind is not reducible to the firing of neurons in the brain.

Ryan Young

Thanks for the well-thought-through comment. I'll respond to just one aspect of it here, as I clearly over-simplified to fit things into a blog-length format.

One way of thinking about propositional attitudes is to use Nagel's question 'what is it like to be a bat?'. Asking whether an AI has propositional attitudes is very similar to asking whether there is an answer to the question "what is it like to be an LLM?". If there is a genuine subjective experience of 'being an LLM', then it would make sense to say LLMs have propositional attitudes.

Nagel's point is that, if there is a subjective experience, it isn't something that can be explained purely in terms of the underlying physical structures, because that would be the wrong type of explanation. So it is possible to believe that the only things that exist are physical matter, while also holding that the human mind isn't reducible to the firing of neurons in the brain, as the latter is the wrong type of thing to explain a mind.

That may not really make sense, so I'll think about how to explain it better where I have more space.
