Ray Kurzweil on Turing Tests, Brain Extenders and AI Ethics
Inventor and author Ray Kurzweil, who currently runs a group at Google writing automatic responses to your emails in cooperation with the Gmail team, recently talked with WIRED Editor-in-Chief Nicholas Thompson at the Council on Foreign Relations. Here’s an edited transcript of that conversation.
Nicholas Thompson: Let’s begin with you explaining the law of accelerating returns, which is one of the fundamental ideas underpinning your writing and your work.
Ray Kurzweil: Halfway through the Human Genome Project, 1 percent of the genome had been collected after seven years. So mainstream critics said, "I told you this wasn’t gonna work. You’re at seven years, 1 percent; it’s going to take 700 years, just like we said." My reaction at the time was: "Wow, we finished 1 percent? We’re almost done." Because 1 percent is only seven doublings from 100 percent, and it had been doubling every year. Indeed, that continued, and the project was finished seven years later. That progress has continued since the end of the genome project—that first genome cost a billion dollars, and we’re now down to $1,000.
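The doubling arithmetic here is easy to check: a quantity at 1 percent that doubles every year passes 100 percent after seven doublings. A minimal sketch:

```python
# At 1 percent complete and doubling every year, how many doublings
# until the genome project passes 100 percent?
pct, doublings = 1.0, 0
while pct < 100.0:
    pct *= 2
    doublings += 1
print(doublings, pct)  # 7 doublings, ending at 128.0 percent
```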
I’ll mention just one implication of the law of accelerating returns, which has many ripple effects and is really behind this remarkable digital revolution we see: a 50 percent deflation rate in information technology. I can get the same computation, communication, genetic sequencing, and brain data as I could a year ago for half the price today. That’s why you can buy an iPhone or an Android phone that’s twice as good as the one from two years ago for half the price. You put some of the improved price performance into price and some of it into performance. So when a girl in Africa buys a smartphone for $75, it counts as $75 of economic activity, despite the fact that it’s literally a trillion dollars of computation circa 1960, a billion dollars circa 1980. It’s got millions of dollars in free information apps, just one of which is an encyclopedia far better than the one I saved up for years as a teenager to buy. All of that counts for zero in economic activity because it’s free. So we really don’t count the value of these products.
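The 50 percent deflation rate compounds the same way the genome doublings do, just in reverse. A small sketch of that compounding (the dollar figures are illustrative, not from the interview):

```python
# A 50 percent annual deflation rate: the same amount of computation
# costs half as much each year, so cost_n = cost_0 * 0.5 ** n.
def deflated_cost(initial_cost, years):
    """Price of a fixed amount of computation after `years` of 50% deflation."""
    return initial_cost * 0.5 ** years

print(deflated_cost(1000.0, 1))   # 500.0 -- half price after one year
print(deflated_cost(1000.0, 10))  # 0.9765625 -- roughly a thousandth after a decade
```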
All of that is going to change: We’re going to print out clothing using 3-D printers. Not today; we’re kind of in the hype phase of 3-D printing. But in the early 2020s we’ll be able to print out clothing. There will be lots of cool, open-source designs you can download for free. We’ll still have a fashion industry, just like we still have music, movie, and book industries, with free, open-source products, which are great levelers, coexisting with proprietary products. We’ll be able to create food very inexpensively using vertical agriculture: hydroponic plants for fruits and vegetables, in-vitro cloning of muscle tissue for meat. The first hamburger produced this way has already been consumed. It was expensive, a few hundred thousand dollars, but it was very good. Those are research costs, though. All of these different resources are going to become information technologies. As a demo, a three-story office building was recently put together in Asia in a few days, from little modules printed on 3-D printers and snapped together Lego-style. That’ll be the nature of construction in the 2020s. 3-D printers will print out the physical things we need.
NT: Let’s talk about intelligence, like the phone in my pocket. It’s better than I am at math. It’s better than I am at Go. It’s better than I am at a lot of things. When will it be better than me at holding a conversation? When will the phone interview you instead of me?
RK: We do have technologies that can have conversations. My team at Google created smart replies, as you know, so we’re writing millions of emails. And it has to understand the meaning of the email it’s responding to, even though the proposed suggestions are brief. But your question is a Turing-equivalent question—it’s equivalent to the Turing Test. And I’m a believer that the Turing Test is a valid test of the full range of human intelligence. You need the full flexibility of human intelligence to pass a valid Turing Test; there’s no simple natural-language-processing trick you can do to accomplish that. If the human judge can’t tell the difference, then we consider the AI to be of human intelligence, which is really what you’re asking. That’s been a key prediction of mine. I’ve been consistent in saying 2029. In 1989, in The Age of Intelligent Machines, I bounded that between the early 2020s and the late 2030s; in The Age of Spiritual Machines in ’99 I said 2029. The Stanford AI department found that daunting, so they held a conference, and the consensus of AI experts at that time was hundreds of years. Twenty-five percent thought it would never happen. My view and the consensus view, or the median view, of AI experts have been getting closer together, but not because I’ve been changing my view.
In 2006, there was a Dartmouth conference called AI@50, and the consensus then was 50 years; at that time I was saying 23 years. We just had an AI ethics conference at Asilomar, and the consensus there was around 20 to 30 years; I was saying, at that time, 13. I’m still more optimistic, but not by that much, and there’s a growing group of people who think I’m too conservative.
A key issue I didn’t mention with the law of accelerating returns is that not only does the hardware progress exponentially, but so does the software. I’m feeling more and more confident, and I think the AI community is gaining confidence, that we’re not far off from that milestone.
We’re going to literally merge with this technology, with AI, to make us smarter. It already does. These devices are brain extenders, and people really think of them that way, and that’s a new thing. People didn’t think of their smartphones that way just a few years ago. The technology will literally go inside our bodies and brains, but I think that’s an arbitrary distinction: even though these devices are outside our bodies and brains today, they’re already brain extenders, and they will make us smarter and funnier.
NT: Explain the framework for policymakers and how they should think about this accelerating technology, what they should do, and what they should not do.
RK: There has been a lot of focus on AI ethics and how to keep the technology safe, and it’s kind of a polarized discussion, like a lot of discussions nowadays. I’ve actually talked about both promise and peril for quite a long time. Technology has always been a double-edged sword: Fire kept us warm, cooked our food, and burned down our houses. These technologies are much more powerful. It’s also a long discussion, but I think we should go through three phases, at least I did, in contemplating this. First is delight at the opportunity to overcome age-old afflictions: poverty, disease, and so on. Then alarm that these technologies can be destructive and cause even existential risks. And finally, I think where we need to come out is an appreciation that we have a moral imperative to continue progress in these technologies, because despite the progress we’ve made (and that’s a whole other issue: people think things are getting worse, but they’re actually getting better), there’s still a lot of human suffering to be overcome. It’s only continued progress, particularly in AI, that’s going to enable us to continue overcoming poverty, disease, and environmental degradation while we attend to the peril.
And there’s a good framework for doing that. Forty years ago, there were visionaries who saw both the promise and the peril of biotechnology, basically reprogramming biology away from disease and aging. So they held a conference called the Asilomar Conference at the conference center in Asilomar, and came up with ethical guidelines and strategies—how to keep these technologies safe. Now it’s 40 years later. We are getting clinical impact of biotechnology. It’s a trickle today, it’ll be a flood over the next decade. The number of people who have been harmed either accidentally or intentionally by abuse of biotechnology so far has been zero. It’s a good model for how to proceed.
We just had our first Asilomar conference on AI ethics. A lot of these ethical guidelines, particularly in the case of, say, biotechnology have been fashioned into law. So I think that’s the goal. It’s the first thing to understand. The extremes are all, "Let’s ban the technology," or "Let’s slow it down." That’s really not the right approach. Let’s guide it in a constructive manner. There are strategies to do that, that’s another complicated discussion.
NT: You can imagine some rules where Congress says that everyone working on a certain kind of technology has to make their data open, for example, or has to be willing to share their data sets, at least to allow competitive markets over these incredibly powerful tools. You can imagine the government saying, "Actually, there’s going to be a big government-funded option, kind of like OpenAI, but run by the government." You can imagine a huge national infrastructure movement to build out this technology so at least people with the public interest at heart have control over some of it. What would you recommend?
RK: I think open-source data and algorithms in general are a good idea. Google put all of its AI algorithms in the public domain with TensorFlow, which is open source. I think it’s really the combination of open source and the ongoing law of accelerating returns that will bring us closer and closer to the ideals. There are lots of issues, such as privacy, that are critical to maintain, and I think people in this field are generally concerned about these issues. It’s not clear what the right answers are. I think we want to continue the progress, but when you have so much power, even with good intentions there can be abuses.
NT: What worries you? Your view of the future is very optimistic. But what worries you?
RK: I’ve been accused of being an optimist, and you have to be an optimist to be an entrepreneur, because if you knew all the problems you’d encounter, you’d probably never start any project. But I have, as I say, been concerned about and written about the downsides, which are existential. These technologies are very powerful, and so I do worry about that, even though I’m an optimist. And I am optimistic that we’ll make it through. I’m not as optimistic that there won’t be difficult episodes. In World War II, 50 million people died, and that was certainly exacerbated by the power of technology at that time. I think it’s important, though, for people to recognize that we are making progress. There was a poll taken of 24,000 people in 26 countries recently. It asked, "Has poverty worldwide gotten better or worse?" Ninety percent said, incorrectly, that it’s gotten worse. Only one percent said, correctly, that it has fallen by 50 percent or more.
NT: What should the people in the audience do about their careers? They’re about to enter a world where career choices map onto completely different technology. So in your view, what advice would you give the people in this room?
RK: Well, it really is an old piece of advice, which is to follow your passion, because there’s really no area that’s not going to be affected or that isn’t a part of this story. We’re going to merge with simulated neocortex in the cloud, so again, we’ll be smarter. My view is not that AI is going to displace us. It’s going to enhance us. It does already. Who can do their work without these brain extenders we have today? And that’s going to continue to be the case. People say, "Well, only the wealthy are going to have these tools," and I say, "Yeah, like smartphones, of which there are three billion." I was saying two billion, but I just read the news, and it’s about three billion. It’ll be six billion in a couple of years. That’s because of the fantastic price-performance explosion. So find where you have a passion. Some people have complex passions that are not easily categorized, so find a way of contributing to the world where you think you can make a difference. Use the tools that are available. The reason I came up with the law of accelerating returns was literally to time my own technology projects, so I could start them a few years before they were feasible—to try to anticipate where technology is going. People forget where we’ve come from. Just a few years ago, we had little devices that looked like your smartphone, but they didn’t work very well. That revolution—and mobile apps, for example—hardly existed five years ago. The world will be comparably different in five years, so try to time your projects to meet the train at the station.
Audience Question: So much of the emphasis has been on the lovely side of human nature, on science and exploration, and I’m curious about the move more toward our robot partners. What about the dark side? What about war and war machines and violence?
RK: We’re learning a lot about how these platforms can be used to amplify all kinds of human inclinations and be manipulated, and a lot of this is fairly recent information that we’re learning. AI learns from examples. There’s a motto in the field that life begins at a billion examples, and the best way to get examples is to learn from people, so AI very often learns from people. Not always: AlphaGo Zero just learned from itself by playing Go games against itself, but that’s not always feasible, particularly when you’re trying to deal with more complex, real-world issues. There’s a major effort in the field, going on in all the major companies and in open-source research as well, to de-bias AI, because if it’s learning from people, and people have biases, it’s going to pick up those biases, such as gender bias and racial bias. Overcoming them can actually be a goal. As humans, we pick up biases from all the things we’ve seen, a lot of it subconscious. We then learn, as educated humans, to recognize bias and try to overcome it, and we can have conflicts within our own minds. There’s a whole area of research to de-bias AI and to overcome the biases it picks up from people. So that’s one type of research that can overcome problems with machine intelligence. In these ways, machine intelligence can actually be less biased than the humans it learned from. Overall, though, despite all the promise and peril intertwined in social media, it has been a very beneficial thing. I walk through airports and every child over the age of two is on their devices. It’s become a world community, and I think the generation now growing up, more so than any other generation, feels that they are citizens of the world, because they’re really in touch with all the cultures of the world.
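The de-biasing effort Kurzweil describes takes many forms. One standard technique from the fairness literature (not named in the interview) is reweighing: each training example gets a weight so that a sensitive attribute, such as gender, becomes statistically independent of the label the model learns from. A minimal sketch, with invented toy data:

```python
from collections import Counter

def reweigh(examples):
    """examples: list of (group, label) pairs.
    Returns one weight per example: P(group) * P(label) / P(group, label),
    which makes the weighted joint distribution independent."""
    n = len(examples)
    group_counts = Counter(g for g, _ in examples)
    label_counts = Counter(y for _, y in examples)
    pair_counts = Counter(examples)
    return [
        (group_counts[g] / n) * (label_counts[y] / n) / (pair_counts[(g, y)] / n)
        for g, y in examples
    ]

# Group A is over-represented in the positive label, group B in the negative.
data = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 1)]
print(reweigh(data))  # [0.75, 0.75, 1.5, 0.75, 0.75, 1.5]
```

After reweighing, the total weight on ("A", 1) equals the total weight on ("A", 0), so a learner trained on the weighted data no longer sees group membership as predictive of the label.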
NT: In the last year in this country, we have not grown closer to the rest of the world, and a lot of people would say our democracy has not gotten better. Is this a blip in the ongoing progress of mankind coming together, or are many people misinterpreting it?
RK: The polarization in politics in the United States, and in other places in the world, is unfortunate. I don’t think it is an issue for the kinds of things that we’ve been talking about today. I mean we’ve had major blips in the world. World War II was a pretty big blip and actually didn’t affect these trends at all. There may be things that we don’t like in certain government officials, or the government. But there’s a discussion. We’re not in a totalitarian era where you can’t voice your views. I’d be more concerned if we moved in that direction, but I don’t see that happening. So not to diminish the importance of government and who’s in power and so forth, but it’s at a different level. The kinds of issues we’re talking about are not really affected by these issues. There are existential risks that I worry about because technology is a double-edged sword.
Audience Question: My question is about inequality. There are a lot of phases through most of human history where economic inequality is quite high. I’m wondering whether you think the 20th century is an anomaly in that sense, and how the diffusion of technology is going to impact that inequality.
RK: Economic equality is getting better. Poverty in Asia has fallen by over 90 percent, according to the World Bank, in the last 20 years. They’ve gone from primitive agrarian economies to thriving information economies. Africa and South America have growth rates that are substantially higher than the developed world’s. Any snapshot you take shows inequality, but it’s dramatically moving in the right direction. Poverty worldwide has fallen 50 percent in the last 20 years, and there are many other measures of that. People say "the digital divide," but no: The internet and smartphones are very strong in Africa. That’s a change in just the past few years. So we’re moving in the right direction. At any one point in time there’s grave inequality and people are suffering, but the numbers are moving in the right direction.
Audience Question: I hear from your remarks that you’re making a prediction that artificial general intelligence is 12 years out, and you’ve mentioned a couple times that notwithstanding your optimism you are concerned somewhat about existential risks, so I was wondering if you could elaborate a little bit more about what you mean by that, and what is the most important thing you think technologists should be doing to reduce those risks?
RK: I mean existential risks are risks that threaten the survival of our civilization. The first existential risk humanity ever faced was nuclear proliferation; we have had the ability to destroy all of humanity some number of times over. With these new technologies, it’s not hard to come up with scenarios where they could be highly destructive and destroy all of humanity. Biotechnology, for example. We have the ability to reprogram biology away from disease. Immunotherapy, which is a very exciting breakthrough in cancer—I think it’s going to be quite revolutionary, and it’s just getting started—is reprogramming the immune system to go after cancer, which it normally doesn’t do. But bioterrorists could reprogram a virus to be more deadly, more communicable, and more stealthy, and create a superweapon. That was the specter that spawned the first Asilomar conference 40 years ago. And there have been recurring conferences to make these ethical guidelines, safety protocols, and strategies more sophisticated, and so far it’s worked. But we keep making the technology more sophisticated, so we have to reinvent them over and over again. We just had our first Asilomar conference on AI ethics. We came up with a set of ethics which we all signed off on. A lot of them are somewhat vague, but I think it’s an important issue to give a high priority to. We’re finding we have to build ethical values into software. A classic example is the self-driving car. The whole motive for self-driving cars is that they’ll eliminate 99 percent of the 2 million deaths from human drivers, but a car will get into a situation where it has to make an ethical decision: Should it drive toward the baby carriage, toward the elderly couple, or toward the wall and perhaps kill your passenger? Do you have an ethical guideline not to kill your passenger, who might own you? You can’t send an email to the software designers in that circumstance and say, "Gee, what do I do?"
It’s got to be built into the software. So those are practical issues, and there’s a whole area of AI ethics growing up around them.
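What "building ethical values into software" might look like in the self-driving example is an explicit, auditable cost over the available maneuvers, rather than an implicit choice buried in the control logic. The harm categories, probabilities, and weights below are invented purely for illustration; they are not a real vehicle policy:

```python
# Hypothetical sketch: score each available maneuver by expected harm,
# using explicit weights that can be inspected and debated in advance.
HARM_COST = {"pedestrian": 100, "passenger": 100, "property": 1}

def choose_maneuver(options):
    """options: dict mapping maneuver -> list of (harm_type, probability).
    Returns the maneuver with the lowest expected harm."""
    def expected_harm(harms):
        return sum(HARM_COST[kind] * p for kind, p in harms)
    return min(options, key=lambda m: expected_harm(options[m]))

options = {
    "swerve_left": [("pedestrian", 0.9)],                      # expected harm 90
    "brake_hard": [("passenger", 0.1), ("property", 1.0)],     # expected harm 11
}
print(choose_maneuver(options))  # brake_hard
```

The point of the sketch is not the particular numbers, which would be contested, but that the value judgment lives in data (the cost table) that can be reviewed before the car ever faces the situation.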
But how do we deal with the more existential risks, like the weaponization of AI? That’s not something in the future; defense departments all over the world have been applying AI. There was a document going around asking people to agree to ban autonomous weapons, which sounds like a good idea, and the example used is, "We banned chemical weapons, so why not autonomous AI weapons?" It’s a little more complicated, because we could get by without anthrax and without smallpox—it’s okay to just ban them—but an autonomous weapon is a dual-use technology. The Amazon drone that’s delivering your frozen waffles, or medicine to a hospital in Africa, could be delivering a weapon. It’s the same technology, and the horse is already out of the barn. Which is just to say that it’s a more complicated issue, how to deal with that. But the goal is to reap the promise and control the peril. There are no simple algorithms. There’s no little subroutine we can put into our AIs: "Okay, put this subroutine in. It’ll keep your AIs benign." Intelligence is inherently uncontrollable. My strategy, which is not foolproof, is to practice the kind of ethics, morality, and values we’d like to see in the world in our own human society. Because the future society is not some invasion from Mars of intelligent machines; it is emerging from our civilization today. It’s going to be an enhancement of who we are. So if we’re practicing the kind of values we cherish in our world today, that’s the best strategy to have a world in the future that embodies those values.
via Wired http://bit.ly/2tZdTlN
November 13, 2017 at 07:12AM