4 Comments

Well put. Speaking of other fields: I work in software engineering, and measuring candidates' abilities in recruitment has long been a hard problem. (Google did some interesting research showing that nothing seemed to work very well...) But the ability to write a clear CV and cover letter, with good grammar, that addresses the job ad has been used as a proxy for communication skills, and I think that is on the edge of obsolescence due to AI. The ability to do short coding puzzles is another key proxy, and doing them offline (as "homework") became increasingly popular, both because of Covid and because of other issues with running them as in-person exams. Just last week I wrote a recruitment challenge and was quite disturbed by how well ChatGPT did with it. LLM coding is a long way from good enough to actually replace the engineer, but it's hard to work out what the new proxy could be.

author

Thanks. That's a really nice example of the dynamics I was thinking of. As I see it, the challenge, as for academia, is to figure out what skills we really need people to have - so we can see whether we can identify them or select for them. But these things aren't straightforward.

Failing that, I guess we can try to find quirky questions or prompts that we know gen AI will not do well at. But these may not correlate with the skills we want either!


Now be honest. Could it be that you had some artificial help writing this piece? Either way, an excellent analysis. Your phrase 'distinctive human intelligence' is worthy of a deeper dive sometime in the future. Of course, intelligence is not the only 'human' trait we value.

I wonder also if AI is reducing what Daniel Kahneman called noise (or, to use another word, diversity) in output. This has an obvious positive side: outputs are, on average, better, largely because they are more consistently a bit better than average. There is, however, to my mind at least, a negative. My argument would be that knowledge creation requires diversity. Good knowledge-building systems are able to identify things that are both novel and better (or truer, if you prefer). For this to happen, you need outputs that challenge the average rather than simply propagate it. Otherwise, we are accepting that the best lies somewhere in a tight distribution around the average.

All of this raises a question for me about the difference between exploration and application. I am wondering whether, as humans, we need to identify more clearly when we are seeking to apply existing knowledge (even in writing an essay) and when we are seeking to explore new knowledge.

author

Does using a computer, the internet, spell checking and the like count as artificial help?

The point about noise is a good one, and probably a useful rule of thumb for when we should be happy using AI. If we are after consistent outputs that are good enough, then AI should do the job. If we are chasing excellence (which requires us to accept more failure), then it won't be useful. This reminds me of the distinction some people make between 'strong link' and 'weak link' problems. (I'd explain, except this comment would turn into an essay!)

I'm not sure the line between exploration and application is always clear enough to make that distinction. But we are biased towards producing something new, even though applying existing knowledge is often just as valuable, or more so.

I completely agree that intelligence is one of many human traits that we value, and probably not even the most important one. But there is a long history of ideas in which we have defined humans as 'rational animals' or 'thinking beings' of some sort, so the assumption has been that intelligence is what makes us unique. I'm not convinced this assumption holds. Maybe another idea for a future post!
