The more robots are able to do, the more likely they are to face decisions that demand a moral perspective. A new grant from the Office of Naval Research supports work at Tufts University, Brown University, and Rensselaer Polytechnic Institute aimed at giving robots “moral competence.” At Brown, Bertram Malle is developing a model of moral competence in humans — no small task, but an essential first step. Malle spoke recently with Kevin Stacey.

Bertram Malle: “If we build robots that interact with humans and that have increasing decision capacity, ... keeping robots amoral would simply be unethical.”
Can a machine have morals?

That’s the question researchers from Tufts University, Brown University, and Rensselaer Polytechnic Institute will explore under a new grant awarded by the Office of Naval Research. The Multidisciplinary University Research Initiative (MURI) grant supports work on the challenges of infusing autonomous robots with moral competence — a sense of right and wrong.

That’s no easy task for many reasons. Foremost among them is the fact that scientists have yet to come to a consensus on what constitutes morality in humans. Bertram Malle, professor of cognitive, linguistic, and psychological sciences at Brown and a co-principal investigator on the project, will work toward solving that problem. Through theoretical and experimental research, Malle’s lab will help to isolate the essential elements of moral competence in humans. From there the researchers will develop a framework for modeling those elements, and ultimately a computational architecture to instill moral competence in robotic systems.

Matthias Scheutz, director of the Human-Robot Interaction Laboratory at Tufts, is the project’s principal investigator. Co-PIs are Selmer Bringsjord, a professor at RPI, and Malle, a co-leader of Brown’s Humanity Centered Robotics Initiative. Malle spoke with Kevin Stacey about the new project and his role in it.

What does it mean for a robot to have “moral competence”?

Normally people think of moral competence as “doing the right thing.” But a morally competent human needs several additional capacities, and a morally competent robot would need at least some of them as well (see the sketch after this list):

  • knowledge of a system of norms appropriate for the community one resides in;
  • the ability to guide one’s behavior in light of these norms;
  • the ability to perceive and evaluate other people’s behavior when it violates those norms;
  • a “vocabulary” that allows one to communicate about one’s own and others’ norm-violating behaviors — such as to justify a behavior, or when appropriate, apologize for it.
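
In software terms, one can picture these four capacities as distinct pieces of a single interface. The purely illustrative Python sketch below does that; the class name, the “violates” attribute, and the toy norm weights are assumptions made for this example, not part of the project’s design.

```python
# Illustrative only: one way the four capacities above might be organized.
# All names and the toy scoring rule are assumptions, not the MURI project's design.

class MoralCompetence:
    def __init__(self, norms):
        # 1. Knowledge of a community's system of norms (name -> importance weight).
        self.norms = norms

    def violation_severity(self, action):
        # Toy scoring: sum the weights of the norms an action is marked as violating.
        return sum(self.norms.get(n, 0.0) for n in getattr(action, "violates", []))

    def choose_action(self, candidates):
        # 2. Guide one's own behavior in light of those norms.
        return min(candidates, key=self.violation_severity)

    def evaluate(self, observed_action):
        # 3. Perceive and evaluate another agent's norm-violating behavior.
        return self.violation_severity(observed_action)

    def explain(self, action):
        # 4. Communicate about norm-relevant behavior: justify or apologize.
        severity = self.violation_severity(action)
        if severity == 0:
            return "No norm was violated."
        return f"A norm was violated (severity {severity:.2f}); a justification or apology is owed."
```

A real system would need far richer norm representations; the sketch only underlines that the four capacities are separate components that must work together.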

Can you describe a situation in which a moral robot might be used?

Even just partly autonomous robots will quickly get into situations that require moral considerations. For example, which faintly crying voice from the earthquake rubble should the rescue robot follow: the child’s or the older adult’s? What should a medical robot do when a cancer patient begs for more morphine but the supervisory doctor is not reachable to approve the request? Should a self-driving car prevent its owner from taking over manual driving when the owner is drunk but needs to get his child, who is having a seizure, to the hospital?

As a professor of cognitive, linguistic, and psychological sciences, what’s your role in this project?

In our research lab we study first how moral competence operates in humans. What is the core system of moral vocabulary? How are norms acquired and represented? How does moral communication build on moral judgment and decision-making? Then we ask, in collaboration with computer scientists, how these mechanisms could be implemented in computational architectures. Finally, when the computer scientists and roboticists have developed a prototype of a morally competent robot, we subject it to rigorous experimental tests in interactions with humans: Do people find the robot’s judgments trustworthy? Do they find its explanations convincing? Would they be willing to put their loved ones in the robot’s care?

What kinds of studies do you anticipate will get at these questions?

In the work on moral vocabulary, for example, we have mined multiple text sources (from thesauruses to systems of basic English to everyday conversation) to build an initial moral vocabulary. Now we are working on recovering its hierarchical structure so as to represent it efficiently in a computational system. In the work on moral judgment, we are beginning to formalize a cognitive theory of blame that we have developed, so as to make it amenable to computational implementation. And recently we have started to take this cognitive theory of blame into the social domain — asking when, how, and for what purposes people express moral criticism (e.g., blaming your friend for stealing a shirt from the department store).
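
As a loose illustration of these two strands, a hierarchical moral vocabulary can be stored as a small tree and a graded blame judgment written as a function of a few factors often discussed in the blame literature. Everything in the sketch below (the terms, categories, numbers, and decision steps) is invented for illustration; it is not the lab’s actual lexicon or its formal theory of blame.

```python
# Illustrative only: a toy hierarchical moral vocabulary and a skeletal
# graded blame judgment. Terms, categories, and numbers are invented.

MORAL_VOCABULARY = {
    "norm violation": {
        "harm":       ["hurt", "injure", "abuse"],
        "dishonesty": ["lie", "cheat", "deceive"],
        "injustice":  ["discriminate", "exploit"],
    },
    "moral criticism": {
        "blame":       ["blame", "condemn", "reproach"],
        "forgiveness": ["forgive", "excuse", "pardon"],
    },
}

def blame_degree(norm_violated, agent_caused, intentional,
                 had_justifying_reason, could_have_prevented):
    """Toy graded blame over factors often discussed in the blame literature:
    violation, causality, intentionality, reasons, preventability."""
    if not (norm_violated and agent_caused):
        return 0.0
    if intentional:
        return 0.3 if had_justifying_reason else 1.0
    return 0.6 if could_have_prevented else 0.2

# The shirt example from above: an intentional theft with no justifying reason.
print(blame_degree(True, True, True,
                   had_justifying_reason=False, could_have_prevented=True))  # -> 1.0
```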

At the end of this project, do you expect to have an algorithm that simulates human moral reasoning?

I don’t want to call it an “algorithm,” because that suggests there is going to be some mathematical formula that captures it all. It’s almost certainly going to be a distributed network of processes that integrate perception, norm representation, learning, decision-making, and communication. It’s a grave mistake to think of human moral competence as one thing — a “moral module” or “moral area” of the brain. It would be just as grave a mistake to hope to build a single moral module in a robot. Things are beautifully complex when we deal with the human mind; we should expect no less of the robot mind.
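
To make the contrast with a single “moral module” concrete, here is a hypothetical sketch in which moral competence arises from several cooperating components. The module boundaries, names, and return values are assumptions for illustration, not the architecture the project will build.

```python
# Hypothetical sketch: morality as an integration of processes, not one module.
# Module boundaries, names, and return values are invented for illustration.

class Perception:
    def observe(self, situation):
        return {"event": situation}                 # what happened, who was involved

class NormStore:
    def relevant_norms(self, event):
        return ["do not harm", "respect autonomy"]  # norms the event activates

class Learner:
    def update(self, event, feedback):
        pass                                        # revise norm knowledge from experience

class DecisionMaker:
    def decide(self, event, norms):
        return "action that violates the fewest important norms"

class Communicator:
    def explain(self, decision, norms):
        return f"I chose '{decision}' in light of: {', '.join(norms)}"

class MoralAgent:
    """The competence lives in the integration, not in any single part."""
    def __init__(self):
        self.perception, self.norm_store = Perception(), NormStore()
        self.learner, self.decider, self.comms = Learner(), DecisionMaker(), Communicator()

    def act(self, situation):
        event = self.perception.observe(situation)
        norms = self.norm_store.relevant_norms(event)
        decision = self.decider.decide(event, norms)
        return decision, self.comms.explain(decision, norms)

agent = MoralAgent()
print(agent.act("patient requests extra pain medication")[1])
```

No single class here carries the morality; the competence sits in how the pieces are wired together, which is the point of calling it a distributed network of processes.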

What would you say to people who might be a bit unnerved by the idea of moral robots?

Consider the alternative: A robot that takes care of your ailing mother and has no idea about basic norms of politeness, respect, and autonomy, and no capacity to make a difficult decision — such as in my example of dispensing urgently needed pain medication even though the doctor in charge is not reachable. “Morality” has long been considered unique to humans. But if we build robots that interact with humans and that have increasing decision capacity, impact, and duties of care, there is no alternative to creating “moral” robots. Keeping robots amoral would simply be unethical.