A well-presented case for the negative. Two conclusions: (1) computers are not people; and (2) an individual computer cannot 'think' like an individual human.
For me, another question feels important: can a group of computers think like a group of humans? A key part of human thinking is the ability of different people to come up with different responses to the same scenario. Each scenario produces an often wide distribution of responses. It is only in the movies that a single abductive detective reveals a single, complete answer. In real life, it is more likely that fifty abductive detectives would come up with fifty different responses (some overlapping in content, some not). Even if a single computer could come up with a response replicating that of a single human (in effect, if not in process, which I appreciate is part of, if not the whole, point), it is wholly unlikely that a group of computers would ever come up with the range of responses that occurs naturally to a group of humans.
A key rationale for using 'computer thinking' is to narrow the distribution of outcomes from the thinking process. That is nothing like thinking as a human does.
That is a brilliant observation about group patterns of thinking. If we give 50 people a question and get the same answer every time, we will suspect something is wrong, that they have been brainwashed. If we give 50 computers the same question and get different answers, we will suspect there is a programming error somewhere.
It opens up some interesting and more practical questions too. The rationale for, say, automated cars or drone swarms is that all the machines 'think' in the same way and can therefore work together better and more efficiently. In some contexts that is highly advantageous, but when the same process happens in humans, we call it groupthink and often see it as an obstacle to good outcomes. When is narrowing thinking styles useful, and when does it prevent us from dealing with the unexpected in the real world? Are there situations where we need to deliberately build variability into automated processing to boost flexibility and adaptability?
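Purely to make that last question concrete: here is a minimal Python sketch, a toy of my own rather than any real control system, of how variability could be deliberately dialled into an otherwise deterministic decision rule with a temperature parameter. At temperature zero every 'agent' returns the same answer; above zero, fifty agents given the same question start to produce a spread of answers.

import math
import random

def softmax(scores, temperature=1.0):
    # Convert raw option scores into a probability distribution.
    # Higher temperature flattens the distribution, so less-favoured
    # options are chosen more often; near zero it approaches argmax.
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def decide(options, scores, temperature=0.0):
    # temperature == 0.0 -> always return the highest-scoring option
    #                       (every agent gives the same answer).
    # temperature > 0.0  -> sample from the softmax distribution
    #                       (agents given the same question can differ).
    if temperature <= 0.0:
        return options[scores.index(max(scores))]
    probs = softmax(scores, temperature)
    return random.choices(options, weights=probs, k=1)[0]

# Fifty "agents" answering the same (invented) question.
options = ["brake", "swerve left", "swerve right", "continue"]
scores = [2.0, 1.5, 1.4, 0.2]

deterministic = {decide(options, scores, temperature=0.0) for _ in range(50)}
varied = {decide(options, scores, temperature=1.0) for _ in range(50)}

print(deterministic)  # one answer, e.g. {'brake'}
print(varied)         # typically several distinct answers

The interesting design question is then not the mechanism, which is trivial, but when a wider spread of machine responses actually helps rather than hinders.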