IBM's Rossi on AI ethics: 'Start small and widen the scope'

Anyone who has been following the rapid development of artificial intelligence systems by the military-industrial-academic complex would probably agree that coming up with a code of AI ethics is a prudent move by us humans. But who gets to decide what constitutes ethical AI, and how do those tenets actually get embedded in AI systems?

“I prefer to start small and widen the scope,” said Francesca Rossi, a professor at the University of Padova in Italy and research scientist at the IBM Thomas J. Watson Research Center.

Rossi was among the technology leaders who met recently in Asilomar, Calif., to hammer out guidelines for the development of safe AI systems that are of general benefit to humankind. The work actually began two years prior when scientists and thinkers from other disciplines gathered in Puerto Rico to take on the topic of AI ethics, she explained in a phone call after the Asilomar event.

“After two years we thought we were ready to spell out a little bit more what it means to develop AI in the most beneficial way,” Rossi said.

The upshot was the Asilomar AI Principles, 23 tenets in all, focused on three areas: AI research issues, AI ethics and values, and longer-term issues pertaining to the impact of advanced AI systems on the history of life on Earth.

AI ethics, one scenario at a time

Of particular interest to Rossi is Principle 10: AI systems should be designed and operated so that their goals and behaviors can be assured to align with human values throughout their operation.

Living up to that laudable principle is not without its own ethical challenges, she conceded.

“There is no one set of ethical ideas for everybody, of course. Behaving according to some ethical principles may mean many things depending on the culture, professional codes, social norms,” Rossi said.

The way forward for AI ethics, she believes, is grounded in specific scenarios, where the human values that we want to see embedded in machines are well understood.

An example, she said, is the work IBM and other companies, together with domain experts, are doing on decision-making support systems in healthcare. The approach is to go from medical specialty to medical specialty, scenario by scenario. “We understand what we expect from that doctor, and we expect the AI system to behave at least as ethically as a doctor, if not better,” Rossi said.

“Once you understand what values you have to put in, in those specific scenarios, the second thing then is you have to figure out how to build the AI systems so they support value alignment. And then you have to make sure that that system behaves according to those values,” she said.  “There is a lot of research and work that has and is being done by scientists who understand how to do that,” including work by Rossi.
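To make that sequence a bit more concrete, here is a minimal sketch, in Python, of how scenario-specific values can be written down as explicit constraints that filter a decision-support system's recommendations before they are ranked. It is not drawn from IBM's systems or from Rossi's work; every name and field below is hypothetical, standing in for constraints that domain experts would actually specify for one narrow medical scenario.

    # Hypothetical illustration only: these names are not from IBM's systems
    # or Rossi's research; they stand in for constraints that domain experts
    # would specify for a single, well-understood scenario.
    from dataclasses import dataclass

    @dataclass
    class Recommendation:
        treatment: str
        expected_benefit: float   # model's estimate of clinical benefit
        patient_consented: bool   # scenario-specific ethical requirement
        within_guidelines: bool   # e.g., the specialty's professional code

    def value_aligned(rec: Recommendation) -> bool:
        # Encode what "behaving at least as ethically as a doctor" means here.
        return rec.patient_consented and rec.within_guidelines

    def recommend(candidates: list[Recommendation]) -> Recommendation | None:
        # Drop options that violate the scenario's values, then rank the rest.
        aligned = [r for r in candidates if value_aligned(r)]
        return max(aligned, key=lambda r: r.expected_benefit, default=None)

The point of the toy version is the ordering Rossi describes: the values are identified and made explicit first, and only then does the system optimize within them.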

‘Real human moral judgment uses the whole brain’

In “Embedding Ethical Principles in Collective Decision Support Systems,” a paper published last year in the proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, Rossi and colleagues make it clear that building safety constraints, ethical principles and moral values into AI systems is far from settled science.

Rossi et al. lay out three approaches — each corresponding to one of the three major schools of Western moral thought — that have served as “the starting point for nearly all discussions of machine ethics”:

  • Deontological (associated with Immanuel Kant). This approach regards morality as a system of rights and duties; it focuses on categories of actions — deeming them “permissible, impermissible or obligatory based on a set of explicit rules.”
  • Consequentialist (associated with Jeremy Bentham and John Stuart Mill). This approach aims to “produce the best aggregate consequences (minimizing costs and maximizing benefits) according to a pre-specified value function.”
  • Virtue- or character-based (associated with Aristotle). This approach “regards ethical behavior as the product of an acquired set of behavioral dispositions.” These dispositions, the authors state, cannot be adequately summarized as either “a set of deontological rules” or as a “commitment to maximizing good consequences.”

Indeed, each of the approaches has well-known limitations, the authors note. Deontological principles are easy to implement “but may be rigid.” Consequentialist principles call for complex calculations “that may be faulty.” Virtue is opaque and entails extensive training “with an unknown teaching criterion.”
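A toy sketch helps show how differently the first two approaches operate — and how they can disagree. The rules, actions and utility numbers below are invented for illustration and do not come from the paper; the virtue-based approach is harder to reduce to a few lines precisely because it rests on learned dispositions rather than explicit rules or values.

    # Toy contrast of the first two approaches; nothing here comes from the
    # paper. Actions and utilities are invented for illustration.
    FORBIDDEN = {"tell_white_lie"}        # deontological: explicit rule

    def deontologically_permissible(action: str) -> bool:
        return action not in FORBIDDEN

    UTILITY = {                           # consequentialist: value function
        "tell_truth": 0.6,
        "tell_white_lie": 0.9,
        "stay_silent": 0.2,
    }

    def consequentialist_choice(actions: list[str]) -> str:
        return max(actions, key=lambda a: UTILITY[a])

    # The two schools can disagree: the highest-utility action is forbidden.
    best = consequentialist_choice(list(UTILITY))    # -> "tell_white_lie"
    print(best, deontologically_permissible(best))   # -> tell_white_lie False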

But there’s a bigger problem with all three approaches, namely: “… implementing them may depend on solving daunting, general computation problems that have not been solved and may not be solved for some time.”

The authors illustrate just how daunting the task is by analyzing what it would take to program a machine to understand the moral precepts of don’t lie and don’t kill. Inculcating moral judgment in a machine, they argue, will likely require a hierarchical decision system implemented within an agent or perhaps across agents that incorporates all three approaches. “Real human moral judgment uses the whole brain,” the authors state.
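What such a hierarchical combination might look like, in very rough outline, is sketched below. The layering, the function names and the cutoff are my own placeholders, not a design from the paper; they only illustrate the idea of rules, dispositions and consequences each getting a say.

    # Hypothetical sketch of hierarchical arbitration across the three
    # approaches; the paper argues for combining them but does not specify
    # this design. All parameters below are placeholders.
    from typing import Callable

    def hierarchical_decision(actions: list[str],
                              permissible: Callable[[str], bool],   # deontological rules
                              disposition: Callable[[str], float],  # learned "virtue" score
                              utility: Callable[[str], float],      # consequentialist value
                              disposition_cutoff: float = 0.5) -> str | None:
        # 1. Deontological layer: hard constraints veto impermissible actions.
        candidates = [a for a in actions if permissible(a)]
        # 2. Virtue layer: prefer actions a well-trained agent is disposed toward.
        habitual = [a for a in candidates if disposition(a) >= disposition_cutoff]
        # 3. Consequentialist layer: among what remains, maximize expected value.
        return max(habitual or candidates, key=utility, default=None)

Even this toy version hints at the difficulty the authors describe: every layer needs knowledge — explicit rules, learned dispositions, reliable value estimates — that is itself hard to specify or acquire.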

My opinion? Without knowing what the whole brain actually does — and we don’t — it will be very difficult to embed AI ethics in a machine, even if we know what those ethics should be.

The paper was co-authored by Rossi with Joshua Greene of Harvard University, John Tasioulas of King’s College London, Kristen Brent Venable of Tulane University and Brian Williams of MIT.

Email Linda Tucci, senior executive editor, or find her on Twitter @ltucci.
