PROVIDENCE, R.I. [Brown University] — From drones to driverless cars, robotic technologies are sure to become ever more present in our daily lives. What’s less sure is the impact that those technologies might have on society. Predictions run the gamut from paradise (robots will do our drudgery) to perdition (robots will take all our jobs and take over the world).
Last week, scholars from around the world gathered to discuss those and other scenarios at Brown’s first Societal Implications of Robotics Symposium. Two keynote talks, by Illah Nourbakhsh of the Robotics Institute at Carnegie Mellon University and Bill Smart of Oregon State University, bookended three panel discussions with small groups of leading robotics researchers, economists, philosophers, psychologists, legal scholars, and even representatives of funding agencies.
The symposium was sponsored by Brown’s Humanity-Centered Robotics Initiative and organized by two of the initiative’s co-leaders, Michael Littman, professor of computer science, and Bertram Malle, professor of cognitive, linguistic, and psychological sciences.
The purpose wasn’t to answer all the questions surrounding our robotic future, Littman and Malle said. Rather, the idea was to seriously discuss those questions among thinkers from diverse fields and perspectives. Ultimately, Littman and Malle hope that these kinds of discussions might help guide robotics toward a future more beneficial to society.
Littman and Malle shared some of their thoughts following the symposium. More of their writing on the future of robotics and artificial intelligence is available at Livescience.com (“‘Rise of the Machines’ is not a likely future” and “How to raise a moral robot”) and at Footnote1.com (“Teaching robots to behave ethically”).
Michael Littman
Department of Computer Science
“The morning keynote emphasized that advances in robotics are likely to accelerate and exacerbate the problem of income inequality. I found that idea very unsettling — the more we make it possible for capital (things that can be owned) to play the role of labor (work that people can do), the less economic power individual workers have. The topic came up again in each of the panel sessions — the first dealt with economics, the second with ethics, and the third with threats or negative outcomes.
“I believe the problem is real and that it isn’t an issue with robotics per se. More than anything, it comes down to our values as a society. If we decide that it is important for all people to benefit from gains in productivity, we can do that. If we decide instead that the benefits of those gains should belong solely to the people who create them, we run the risk of creating an insurmountable gap between rich and poor. That gap will widen over time, and the number of people in the top category will shrink.
“None of the speakers argued explicitly in favor of stopping robotics research. One idea that was floated, though, was slowing things down a bit to give society a chance to catch up and to put in place the policies that will allow us to share the wealth.
“This issue, and several others that were discussed, strike me as really important. These topics are most productively addressed by a wide variety of people — policymakers, technologists, humanists, and scholars of many different stripes. Measured against the goal of getting a diverse group of thinkers together to discuss these topics and find common ground, I feel the symposium was a success — a strong first step.”
Bertram Malle
Department of Cognitive, Linguistic and Psychological Sciences
“Robotics may become a magnifying glass through which all the good and all the bad of human society will become clearly visible — and it will be up to us to decide which prospect we favor: large benefits to a few at the expense of many, or large benefits to many at only a small cost to a few.
“Most, if not all, of our participants considered it extremely unlikely that machines will ever become a threat by surpassing human intelligence. Even so, the mere possibility of such a threat justifies careful study, deliberation, and full awareness of the consequences of our decisions. But there are far more pressing, immediate problems we need to worry about: unequal access to evolving technology; detrimental impacts on autonomy, privacy, and the right to earn a living; and a possible impoverishment of social relations in a world flooded with technology.
“Several people pointed to the importance of robots being transparent — so that people will know when they interact with a robot or a human; so that people will recognize the robot’s limitations; so that people can rely on and trust robots (and not learn to mistrust both robots and humans). This demand for transparency can get tricky when humans all too readily infer emotions and other mental states even in machines that have none of those states. Is it preferable for a robot to appear cold and emotionless so it can be ‘honest’ about its lack of emotions (but then disappoint people because they prefer emotional beings)? Or is it preferable for a robot to give the appearance of emotions, which humans want, but through such appearance be deceptive? One solution would be to actually create internal states in the robot that function very similarly to caring or other emotions and could then be expressed without deception.
“A final thought, which a few people raised in the sessions and others discussed informally: protection against ‘superintelligent beings with crazy values’ has to come from the evolution and selection of robots in the context of communities. We can choose to design and ‘teach’ robots community-appropriate norms and values, and it is therefore our responsibility to do it right. The smaller the domain of application (e.g., taking care of one elderly person in that person’s apartment), the easier it is to design socially adapted robots, whereas universal robots are, for me at least, an unrealistic goal.”