Artificial intelligence, artificial consciousness? David Chalmers discusses how A.I. could transform our place in the world. From the interview:

I think artificial general intelligence is possible. Some people are really hyping up A.I., saying that artificial general intelligence is just around the corner in maybe 10 or 20 years. I would be surprised if they turn out to be right. There has been a lot of exciting progress recently with deep learning, which focuses on methods of pattern-finding in raw data.

Deep learning is great for things we do perceptually as human beings — image recognition, speech recognition and so on. But when it comes to anything requiring autonomy, reasoning, decisions, creativity and so on, A.I. is only good in limited domains. It’s pretty good at playing games like Go. The moment you get to the real world, though, things get complicated….

Once we have a human-level artificial intelligence, there’s just no doubt that it will change the world. A.G.I.s are going to be beings with powers initially equivalent to our own and before long much greater than our own. To that extent, I’m on board with people who say that we need to think hard about how we design superintelligence in order to maximize good consequences….

I like to distinguish between intelligence and consciousness. Intelligence is a matter of the behavioral capacities of these systems: what they can do, what outputs they can produce given their inputs. When it comes to intelligence, the central question is, given some problems and goals, can you come up with the right means to your ends? If you can, that is the hallmark of intelligence. Consciousness is more a matter of subjective experience. You and I have intelligence, but we also have subjectivity; it feels like something on the inside when we have experiences. That subjectivity — consciousness — is what makes our lives meaningful. It’s also what gives us moral standing as human beings….

I value human history and selfishly would like it to be continuous with the future. How much does it matter that our future is biological? At some point I think we must face the fact that there are going to be many faster substrates for running intelligence than our own. If we want to stick to our biological brains, then we are in danger of being left behind in a world with superfast, superintelligent computers. Ultimately, we’d have to upgrade.