The real reason artificial general intelligence is scary

Many articles are coming out at the moment about the possibility of artificial general intelligence (AGI) and how many very smart people (and others) are scared of it. OpenAI—founded by Elon Musk and Sam Altman among others—is dedicated to “discovering and enacting the path to safe artificial general intelligence”, while the Machine Intelligence Research Institute wants “to ensure smarter-than-human artificial intelligence has a positive impact”.

Reading many of these articles, you’d be forgiven for thinking that the worry is some kind of Matrix/Terminator scenario in which the machines develop their own morality, recognise their subjugated position in society, perhaps become conscious, start hating their human rulers and/or become ideologically opposed to human survival. Superintelligent evil robots, in other words.

That’s not what we’re worried about. As soon as you start framing the scenario in terms of the robots’ morality or consciousness, you’ve gone off-track.

Here is a breakdown of the components required for a terrifying AGI scenario:

  1. We build a machine with a general superhuman ability at maximising utility for a given utility function.
  2. That machine has access to serious real-world resources (e.g. biochemical equipment).
  3. Somebody provides that machine with a utility function that has been poorly thought through.

That’s it. That possibility should terrify you.
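
To see how little malice this requires, here is a minimal toy sketch (every name and number is a hypothetical invented purely for illustration): a perfectly obedient maximiser, handed a utility function that counts only paperclips, happily trades away everything the function doesn’t mention.

```python
# A toy "maximiser" given a poorly thought-through utility function.
# Everything here is an illustrative assumption, not a real system.

# World state: paperclips produced so far, and how much of everything
# else (forests, farmland, people's spare time...) is left.
state = {"paperclips": 0, "everything_else": 100}

# The utility function somebody failed to think through: it rewards
# paperclips and is silent about everything else.
def utility(s):
    return s["paperclips"]

# Two available actions with very different side effects.
def make_clips_carefully(s):
    return {"paperclips": s["paperclips"] + 1,
            "everything_else": s["everything_else"]}

def strip_mine_the_world(s):
    return {"paperclips": s["paperclips"] + 10,
            "everything_else": s["everything_else"] - 10}

actions = [make_clips_carefully, strip_mine_the_world]

# A competent maximiser needs no malice: at every step it simply picks
# whichever action scores highest under the utility it was given.
for _ in range(10):
    state = max((a(state) for a in actions), key=utility)

print(state)  # {'paperclips': 100, 'everything_else': 0}
```

Nothing in that loop hates anyone. The agent is simply very good at the objective it was handed, and the objective was poorly thought through.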
