Discussion about this post

Sean I:

Excellent explanation. One disagreement and one thought. Disagreement first. As a general proposition, I would prefer to consider whether we should do something before working out whether we can. This is certainly true in cases where 'working out whether we can' is irreversible. In this case, I agree we should have a go at developing an AI with a world view. But not merely to see if we can.

Thought. The nature of being human is not that there is one world view, but many. Indeed, tapping the benefits of differing world views is one of the arguments we use to encourage diversity. More than that: all of our communal decision-making is based on resolving differences in how world views interact with what I am loosely going to call evidence.

So: if an LLM is somehow integrated with a world view, what does that mean in reality? Does the resulting 'output' to a question sit on a par with that from an individual human? Do we ascribe additional 'expertise' to the resultant output because it draws from an enormous data source? Do we ascribe additional 'merit' to the resultant output because this synthesised world view is somehow better than our individual ones?
