
Investigating the Moral Machine

Edmond Awad and Sohan Dsouza of the MIT Media Lab talk about their latest project and how they're hoping it sparks a conversation about morals and autonomous machines.


What should a self-driving vehicle do when faced with a life-or-death scenario? That's loosely the question behind a new crowdsourcing platform created by the Scalable Cooperation Group at the MIT Media Lab. Researchers want to get people talking about morality and machine intelligence.

The Moral Machine project does this by asking visitors to participate in the "judging" process. In each scenario, participants are presented with two options and asked to determine which is more acceptable. Should, for example, the self-driving car kill a pedestrian crossing the street and save the passenger, or should it kill the passenger and save the pedestrian? Is there a lesser of two evils? Edmond Awad and Sohan Dsouza, the two research assistants responsible for developing the site, want humans to weigh in.
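As a rough illustration of the kind of forced-choice data the site gathers, a single dilemma can be modeled as two outcomes plus a recorded judgment. The structures below are a hypothetical sketch for illustration, not the project's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Outcome:
    """One of the two options in a dilemma, e.g. 'swerve and kill the passenger'."""
    description: str
    characters_killed: list[str]   # e.g. ["passenger"] or ["pedestrian"]
    characters_saved: list[str]

@dataclass
class Dilemma:
    dilemma_id: str
    option_a: Outcome
    option_b: Outcome

@dataclass
class Judgment:
    """A participant's choice of the 'more acceptable' outcome."""
    dilemma_id: str
    chosen_option: str             # "a" or "b"
    answered_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example: the pedestrian-versus-passenger scenario described in the article.
dilemma = Dilemma(
    dilemma_id="ped-vs-passenger-001",
    option_a=Outcome("Stay on course", ["pedestrian"], ["passenger"]),
    option_b=Outcome("Swerve into barrier", ["passenger"], ["pedestrian"]),
)
judgment = Judgment(dilemma_id=dilemma.dilemma_id, chosen_option="b")
```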

Awad and Dsouza sat down with SearchCIO recently to talk about the site, the problem with the data they're collecting and the findings they've uncovered since the site launched in June.

What is the Moral Machine?


Edmond Awad: The Moral Machine is a platform to gather the human perspective on moral decisions made by machines. Basically, the two goals of this platform are: first, to understand what the public thinks machines, such as self-driving cars, should do. Self-driving cars are, today, the best example of autonomous machines making autonomous decisions. Second, [the goal is] to promote a discussion about this topic, because it's being neglected by car manufacturers. It's kind of a tricky point for them to talk about, which is, 'How can we implement ethical decisions into machines?'

Sohan Dsouza: While self-driving car technology has been growing very fast, the ethical questions have not been resolved. And these questions have to be resolved one way or another. That's why we have the site.

The Moral Machine has attracted more than 2 million participants. What kind of information are you collecting?

Awad: Until recently, we were only collecting people's responses to the scenarios.

Dsouza: And we are able to geolocate those.

Awad: Recently, we added a demographic survey at the end of a session, which helps us know more about the demographics of people visiting.
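In other words, each stored response ties a judgment to a coarse location and, since the survey was added, to optional demographics. Here is a minimal sketch, with field names that are assumptions rather than the project's real format:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Demographics:
    """Optional survey answers collected at the end of a session."""
    age: Optional[int] = None
    gender: Optional[str] = None
    education: Optional[str] = None

@dataclass
class ResponseRecord:
    session_id: str                 # anonymous session, no account required
    dilemma_id: str
    chosen_option: str              # "a" or "b"
    country_code: Optional[str]     # derived by geolocating the request, e.g. "US"
    demographics: Optional[Demographics] = None
```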

Have you had a chance to analyze the data?

Awad: We haven't yet dug into the details of the data. So far, the data is a bit noisy, and we're trying to refine the collection process. Each time we go over it, we realize we need to do more to clean the data. But the general trend is that we're seeing some broad cultural differences. For example, Western countries prefer utilitarian decisions more than Eastern countries, and Eastern countries prefer to save passengers over pedestrians more than Western countries do.
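The cross-cultural comparison Awad describes comes down to grouping responses by country and comparing choice rates across regions. A simplified sketch, using made-up data and an illustrative region mapping rather than the project's actual analysis:

```python
from collections import defaultdict

# Hypothetical rows: (country_code, chose_utilitarian_option)
responses = [
    ("US", True), ("DE", True), ("JP", False), ("CN", False), ("FR", True),
]

# Illustrative grouping only; a real analysis would use a proper cultural taxonomy.
REGION = {"US": "western", "DE": "western", "FR": "western",
          "JP": "eastern", "CN": "eastern"}

counts = defaultdict(lambda: [0, 0])   # region -> [utilitarian choices, total responses]
for country, utilitarian in responses:
    region = REGION.get(country, "other")
    counts[region][0] += int(utilitarian)
    counts[region][1] += 1

for region, (util, total) in counts.items():
    print(f"{region}: {util / total:.0%} utilitarian choices ({total} responses)")
```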

What is making the data noisy?

Awad: When we built this website, we could have collected more data on people, but we realized it would require more tasks from users. For example, we could have asked users to sign up and make an account, which would have helped us get clean data from the beginning. But, of course, we thought that would be too much work for the user, so we wanted to make this easier.

We wanted everyone to be able to play, so we spared them the effort of getting into all of the details of answering questions up front. Now we know we have a lot of people coming in, and we can ask [those questions] of those who are interested at the end. That's how we can identify different users.
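One lightweight way to identify different users without requiring accounts, which is a common pattern rather than necessarily how the site works, is to issue an anonymous session token and attach the optional end-of-session survey to it:

```python
import uuid

def new_session_id() -> str:
    """Issue an anonymous identifier, e.g. stored in a cookie, instead of an account."""
    return uuid.uuid4().hex

sessions: dict[str, dict] = {}

def record_judgment(session_id: str, dilemma_id: str, chosen_option: str) -> None:
    """Store one scenario judgment under the anonymous session."""
    sessions.setdefault(session_id, {"judgments": [], "survey": None})
    sessions[session_id]["judgments"].append((dilemma_id, chosen_option))

def record_survey(session_id: str, answers: dict) -> None:
    """Only participants who opt in at the end of a session ever reach this step."""
    sessions.setdefault(session_id, {"judgments": [], "survey": None})
    sessions[session_id]["survey"] = answers

# Usage: a visitor judges scenarios, then optionally answers the demographic survey.
sid = new_session_id()
record_judgment(sid, "ped-vs-passenger-001", "b")
record_survey(sid, {"age": 34, "gender": "female"})
```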

How long do you plan to keep the platform operating?


Dsouza: We'll continue upgrading it. We already added the survey recently, and we're going to internationalize it to make it multilingual. We'll probably add more features in the future. The platform itself, as a concept and a level of abstraction, is clearly something that works. So we plan to make use of that and gather more data.

What are the next steps?

Awad: Trying to refine the data, trying to get clearer data. We're planning to do internationalization, which is translating the website. This is important on many different levels. First of all, we want this to reach more people and not be limited to those who speak English in other countries. Already, we have participants from more than 110 countries, but we know that most of these people are English speakers. So, we want to reach more people.

Another thing: we want to collect data about the participants. So far, our data is kind of biased because those who have answered our questions don't represent the country [they're in]. We want answers that represent [individual] countries, which we could get by having people answer in their native language. Third, answering questions in your native language is different from answering in another language. There was already an article about how people's answers change when they're talking about something in their own language. So, this is a major step for us.



