Discussion about this post

Sean I:
Loved this. I wonder if another example might involve how we 'choose' what is right and what is wrong. My inexpert impression (though I may be wrong) is that for the great majority of humans, some basic rights and wrongs seem to be hardwired, whereas for an AI, rights and wrongs must be derived in some way mathematically. Our fascination as humans with evil leads us to write many words about acts and behaviours that most of us consider evil but which occur relatively rarely in society. Yet for an AI these are simply words that can be used in deriving the 'next most likely'. So my question is: without human guidance, does this weight what an AI produces towards a more 'evil' response than a true reflection of human behaviour?

