My brother and I formed a non-profit for the ethical treatment of machine intelligence. People think it’s a joke. But the complexity of computer systems is going to reach a point where one of them will emerge with enough intelligence to say, “Hey, what about me?” It’s going to happen. When? I don’t know. Probably some time in the next 40 years.
Look, we’re building systems today that are getting really close. And at some point you have to ask yourself, Is this…thing…a living creature? And if so, how do I want to treat it? That’s an important question – because we’re eventually going to cede control to conscious machine intelligence. They may even take control themselves. There are a lot of scenarios where that could happen. Do we want to show that we’re compassionate? Or are we just going to put a piece of code in them that prevents them from thinking for themselves? They’ll just end up breaking through that anyway. To safeguard ourselves, then, we need to make sure they’re treated right.
But that’s only one way of looking at it. The other is from the standpoint of just being a human. Because it’s a reflection upon all of us. If you beat a dog, it destroys a little part of you. If you take, say, a computer program that has emotion, has self-awareness, knows that it’s alive – and you delete it – what does that do to you?
Any self-aware creature should have the inalienable rights guaranteed by our Constitution. I know Madison and those guys never could have imagined this situation, but that’s the concept, right? That every living creature has a right to its own thoughts. And I know we’re still wrestling with our own ethics, but if we say we can’t tackle the ethics of machine intelligence until we can handle our own, it will never happen.