The Google driverless car is a project by Google to develop technology for self-driving vehicles. The project is led by Google engineer Sebastian Thrun, director of the Stanford Artificial Intelligence Laboratory and co-inventor of Google Street View. Thrun’s team at Stanford created the robotic vehicle Stanley, which won the 2005 DARPA Grand Challenge and its US$2 million prize from the United States Department of Defense.[2] The system was developed by a team of fifteen Google engineers, including Chris Urmson, Mike Montemerlo, and Anthony Levandowski, who had worked on the DARPA Grand and Urban Challenges.[3]
Google’s driverless cars are already street-legal in three states: California, Florida, and Nevada. Some day, similar devices may be not just possible but mandatory. Eventually (though not yet) automated vehicles will be able to drive better, and more safely, than you can: no drinking, no distraction, better reflexes, and better awareness (via networking) of other vehicles. Within two or three decades the difference between automated driving and human driving will be so great that you may not be legally allowed to drive your own car, and even if you are allowed, it would be immoral of you to drive, because the risk of your hurting yourself or another person would be far greater than if you allowed a machine to do the work.
That moment will be significant not just because it will signal the end of one more human niche, but because it will signal the beginning of another: the era in which it will no longer be optional for machines to have ethical systems. Your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk? If the decision must be made in milliseconds, the computer will have to make the call.
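To make the dilemma concrete, here is a deliberately simplified sketch of what a purely harm-minimizing call might look like in code. Everything in it, from the hypothetical `Maneuver` class to the probability estimates, is invented for illustration; deciding whose risk counts for how much is precisely the ethical question the software cannot dodge.

```python
# A minimal sketch (not any real driving system): choose the maneuver with the
# lowest expected harm, given rough, invented risk estimates.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    p_harm_occupant: float   # estimated chance of harming the car's occupant
    p_harm_others: float     # estimated chance of harming each person outside the car
    people_at_risk: int      # how many people outside the car are exposed

def expected_harm(m: Maneuver) -> float:
    """Crude score: occupant risk plus per-person risk times people exposed."""
    return m.p_harm_occupant + m.p_harm_others * m.people_at_risk

def choose(maneuvers: list) -> Maneuver:
    """Pick whichever maneuver minimizes the expected-harm score."""
    return min(maneuvers, key=expected_harm)

swerve = Maneuver("swerve off the road", p_harm_occupant=0.6,
                  p_harm_others=0.0, people_at_risk=0)
keep_going = Maneuver("keep going", p_harm_occupant=0.05,
                      p_harm_others=0.3, people_at_risk=40)
print(choose([swerve, keep_going]).name)  # prints "swerve off the road"
```

The sketch simply sacrifices whoever the arithmetic says to sacrifice; whether that is the right rule, and who gets to set the weights, is exactly what is in dispute.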
These issues may be even more pressing when it comes to military robots. When, if ever, might it be ethical to send robots into battle in place of soldiers? Robot soldiers might not only be faster, stronger, and more reliable than human beings; they would also be immune to panic and sleep deprivation, and never overcome by a desire for vengeance. Yet, as Human Rights Watch noted in a widely publicized report earlier this week, robot soldiers would also be utterly devoid of human compassion, and could easily wreak unprecedented devastation in the hands of a Stalin or a Pol Pot. Anyone who has seen the opening scenes of RoboCop knows why we have misgivings about robots serving as soldiers, or as cops.
But what should we do about it? The solution proposed by Human Rights Watch (an outright ban on “the development, production, and use of fully autonomous weapons”) seems wildly unrealistic. The Pentagon is likely to be loath to give up its enormous investment in robotic soldiers (in the words of Peter W. Singer, “Predator [drones] are merely the first generation”), and few parents would prefer to send their own sons (or daughters) into combat if robots were an alternative. What we need instead is a way to build ethics into the machines themselves. Many discussions of machine ethics begin with Isaac Asimov’s famous three laws of robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law.
- A robot must protect its own existence as long as such protection does not conflict with the first or second laws.
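At first glance the laws look almost programmable, as in the toy sketch below, which ranks candidate actions by a strict priority ordering. All of the predicates (`harms_human`, `obeys_order`, and so on) are hypothetical placeholders, not anything a real robot can currently compute.

```python
# Toy illustration of Asimov's three laws as a strict priority ordering.
# The boolean predicates are placeholders; evaluating them reliably is the
# genuinely hard, unsolved part.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool        # would this action injure a human? (first law)
    allows_human_harm: bool  # would it let a human come to harm through inaction? (first law)
    obeys_order: bool        # does it follow a human's order? (second law)
    preserves_robot: bool    # does it protect the robot itself? (third law)

def asimov_rank(a: Action) -> tuple:
    """Lower tuples are better: the first law dominates the second, the second the third."""
    return (a.harms_human or a.allows_human_harm,
            not a.obeys_order,
            not a.preserves_robot)

def choose(actions: list) -> Action:
    return min(actions, key=asimov_rank)

candidates = [
    Action("stand down", harms_human=False, allows_human_harm=True,
           obeys_order=True, preserves_robot=True),
    Action("disable the attacker", harms_human=True, allows_human_harm=False,
           obeys_order=True, preserves_robot=True),
]
# Both options violate the first law, one by action and one by inaction, so the
# ranking ties and the laws offer no way to weigh one harm against the other.
print(choose(candidates).name)
```

Even this toy version hints at the trouble: the first law acts as an absolute constraint, with no vocabulary for saying that a small harm might be worth preventing a far larger one.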
Asimov’s laws present at least two problems. First, nobody yet knows how to translate open-ended notions such as “injure” and “harm” into code that a machine can reliably follow. Second, even if we could figure out how to do the programming, the rules might be too restrictive. The first and second laws, for example, preclude robots from ever harming other humans, but most people would make exceptions for robots that could eliminate potential human targets that were a clear and present danger to others. Only a true ideologue would want to stop a robotic sniper from taking down a hostage-taker or a Columbine killer.
Meanwhile, Asimov’s laws themselves might not be fair—to robots. As the computer scientist Kevin Korb has pointed out, Asimov’s laws effectively treat robots like slaves. Perhaps that is acceptable for now, but it could become morally questionable (and more difficult to enforce) as machines become smarter and possibly more self-aware.
The laws of Asimov are hardly the only approach to machine ethics, but many others are equally fraught. An all-powerful computer programmed to maximize human pleasure, for example, might consign us all to an intravenous dopamine drip; an automated car that aimed only to minimize harm would never leave the driveway. Almost any easy solution one might imagine leads to some variation or another on the Sorcerer’s Apprentice: a genie that has given us what we asked for, rather than what we truly desire. A tiny cadre of brave-hearted souls at Oxford, Yale, and the Singularity Institute in Berkeley, California, are working on these problems, but the annual amount of money being spent on developing machine morality is tiny.
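The car that never leaves the driveway is easy to reproduce in miniature. In the sketch below, with made-up risk numbers, an objective that counts nothing but expected harm always selects the degenerate option of doing nothing at all.

```python
# A deliberately naive objective: minimize expected harm and nothing else.
# The risk estimates are invented; the point is the shape of the failure.
actions = {
    "stay parked":       {"risk_of_harm": 0.0,  "reaches_destination": False},
    "drive to work":     {"risk_of_harm": 1e-4, "reaches_destination": True},
    "drive very slowly": {"risk_of_harm": 5e-5, "reaches_destination": True},
}

def naive_objective(name: str) -> float:
    # Only harm counts; the value of actually getting anywhere is ignored.
    return actions[name]["risk_of_harm"]

best = min(actions, key=naive_objective)
print(best)  # "stay parked": a harm-only objective never trades risk for benefit
```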
Building machines with a conscience is a big job, and one that will require the coordinated efforts of philosophers, computer scientists, legislators, and lawyers. And, as Colin Allen, a pioneer in machine ethics, put it, “We don’t want to get to the point where we should have had this discussion twenty years ago.” As machines become faster, more intelligent, and more powerful, the need to endow them with a sense of morality becomes more and more urgent.
“Ethical subroutines” may sound like science fiction, but once upon a time, so did self-driving cars.
Gary Marcus, Professor of Psychology at N.Y.U., is the author of “Guitar Zero: The Science of Becoming Musical at Any Age” and “Kluge: The Haphazard Evolution of the Human Mind.”