Ethicist sketches 4 models of ‘ethics by design’ for AI decision-making, calling for education and collaboration before programming

As we progress further into a world with decision-making machines, pioneering AI ethicist Dr. Paul Root Wolpe urged XPONENTIAL attendees in his keynote Wednesday not to let ethics be an afterthought. 

“It has to be ethics by design,” said Wolpe, director of Emory University’s Center for Ethics.

Wolpe, whose career has included 17 years as NASA’s first senior bioethicist, emphasized that humans have a lot of collaborative decisions to make about the values they want machines to consider and how those values should be weighted when they inevitably come into conflict.

He then offered four models, each assigning progressively more decision-making autonomy to machines, to get the conversation started.   

‘Right vs. right’

Wolpe explained that humans look at ethical choices through an incredibly complex lens that layers personal ideals – such as honesty, compassion, individuality and solidarity – over cultural priorities, relational context, and irrational quirks.

The process varies from person to person, community to community, nation to nation, and it could vary greatly from manufacturer to manufacturer, machine to machine. Recognizing the variation and its underpinnings is important because it underscores the range of choices society must make when it comes to AI.

“What’s the point of all this? It’s that ethical decisions are not about right vs. wrong,” Wolpe said. “Ninety-nine percent of the time, they’re about two rights in conflict, two values that cannot both be honored, that cannot both be satisfied. A right vs. a right.”

Wolpe also emphasized that machines can never be truly autonomous or accountable, in the philosophical sense, because they lack free will. The parameters and purposes of their choices are set by humans.

“We can’t hold AI morally culpable for its ethical decisions,” he said. “We have to hold some human agency behind that AI (accountable), … whether it’s the programmers, the inventors, the owners, whatever it might be.”

Machines can’t be morally accountable, but they can shape morality – as key components of “sociotechnical systems,” Wolpe said. For example, the plow brought about the evolution of farming communities, and the airplane dramatically increased human mobility.

“The technologies we create,” he said, “reciprocally imbue us with moral dilemmas.” 

Four sketches

Wolpe didn’t recommend a particular set of parameters for ethical AI, but he offered four models to consider.

  1. Ethically naïve AI. This model would keep AI in a category similar to service animals – programmed to handle specific situations in a predetermined way.

  2. Permission-seeking AI. This model would direct AI to make relatively basic, simple ethical decisions and to alert a human supervisor if more complex problems arise (a minimal sketch of this flow follows the list).

  3. Ethically programmed AI. This model would call for humans to program machines with ethical algorithms that would guide decisions.

  4. Case-based AI. This model would enable AI to examine millions of decisions and teach itself how to respond, in much the way that language is learned by listening. 
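To make the second model a little more concrete, here is a minimal, hypothetical sketch of what a permission-seeking decision loop might look like in software. The function names, the complexity score, and the escalation threshold are illustrative assumptions for this article, not anything Wolpe prescribed in the keynote.

```python
# Hypothetical sketch of a "permission-seeking AI": the system handles
# simple, low-stakes ethical choices on its own and escalates anything
# more complex to a human supervisor. All names, thresholds, and scoring
# logic are illustrative assumptions, not part of Wolpe's keynote.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Decision:
    description: str
    ethical_complexity: float  # assumed scale: 0.0 (routine) to 1.0 (hard value conflict)


def permission_seeking_ai(
    decision: Decision,
    act: Callable[[Decision], None],
    ask_human: Callable[[Decision], bool],
    threshold: float = 0.3,  # assumed cutoff between "simple" and "complex"
) -> None:
    """Act autonomously on simple cases; defer complex ones to a human."""
    if decision.ethical_complexity <= threshold:
        act(decision)  # basic ethical decision handled by the machine
    else:
        # Alert the human supervisor and only proceed with explicit approval.
        if ask_human(decision):
            act(decision)


if __name__ == "__main__":
    # Stand-in callbacks for demonstration only.
    routine = Decision("yield right of way to another drone", 0.1)
    hard = Decision("divert delivery route over a crowded event", 0.8)
    act = lambda d: print(f"acting: {d.description}")
    ask_human = lambda d: (print(f"escalating to supervisor: {d.description}") or False)

    permission_seeking_ai(routine, act, ask_human)  # acts on its own
    permission_seeking_ai(hard, act, ask_human)     # escalates, then waits
```

The design question each model leaves open is where that threshold sits and who gets to set it, which is exactly the kind of collaborative decision Wolpe argued must happen before programming begins.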

Each model would have pros and cons, and none should be considered a complete solution. The answer will come through proactive collaboration, Wolpe said.

“First of all, we need to negotiate these standards together. Second, we have to do the hard work of ethical preparation before programming. We can’t try to retrospectively fix the ethical dilemmas, and that takes some real insight into the implications of decision-making from an ethical perspective.”
