This is very interesting for a number of reasons! It seems one more important piece of real-world context that could have been discussed is the relational aspect - that as a human being I understand more about what another person is conveying, through cognitive and emotional empathy, as I sit with them, watch them and converse with them. There are also very strong relational constraints that the real world puts on us - we worry about being shamed in public, offending our friends, failing to love the people we should care for.
Another thing that occurred to me is how willing we are becoming to be like the machines. When we read the news, we 'experience' what is going on far from us, where the real lived experience might be a very different thing. People get so angry about the Israel-Hamas war without any personal knowledge or experience of what is going on or the history of the place, and get so angry about Donald Trump when what he is doing has very little impact on them at all. Like the LLMs, we just process second-hand information and treat it like it's a real part of our lives when it isn't.
Great point about how we 'experience' so many things at second (or third) hand but take them as real. I hadn't made that connection to AI, but it's a good one. Whenever we know something first-hand, we almost always know there are things wrong with what we read or see online or in the media. AI has those same limitations.
A very interesting analysis - thanks. I think it's helpful to draw the distinction between having experience of the world and having information about the world: it highlights how different we humans are from AI. I know many people express concerns about ChatGPT and its friends going rogue and becoming motivated to achieve something beyond the specifically defined tasks assigned to them (e.g. by trying to enslave humans and take over the world). But I wonder if it's our experience of (or at least our innate desire to engage in) tasting and banging things, butting heads and stubbing toes, picking things up, pulling them apart and playing with them that actually causes humans to want to do the things we're afraid of AI doing (e.g. achieving control of things and people). In other words, perhaps AI is about as likely to want to achieve world domination as a library of textbooks is, because it doesn't experience the joys and frustrations of living in the world like humans do.
That's a fascinating thought. Thanks! I need to think about it more. It connects to the question of whether AI has genuine agency or not.
For info, my big worry about AI is that humans will put autonomous AI systems in places where they shouldn't be (likely doing things those systems can't actually do), and that things will go badly wrong that way. I'm less worried about them going rogue.
I wonder if it is worth thinking about motivation.
People usually seek knowledge for a purpose. They seek knowledge to understand, to effect some future outcome, or simply to obtain pleasure from knowing. LLMs, and I am probably showing my ignorance here, have no such inherent motivation. They may be given motivation via their programming. But my sense is that even this 'constructed motivation' would still lack human intent or meaning.
It is possible this does not matter, but I just wonder...
Interesting point. In some senses it doesn't matter, because what LLMs can do has surprised even the people who made them - so the tech transcends the motivation to some degree. On the other hand, the primary motivation of the builders of LLMs is to get people to use them more and more. They are part of the attention economy and so, for example, tend to flatter people to keep them on the platform. That shifts how they communicate information or knowledge.