Value sensitive design helps CIOs tackle social, ethical tech issues

Don't do what Google did.

Not advice CIOs hear every day, but they'd be wise to pay attention to the missteps the search giant made when rolling out the beta test for Google Glass. That's according to Mary "Missy" Cummings, director of the Humans and Autonomy Lab at Duke University and a former military pilot. The problem was not with the technology itself, said Cummings, but rather with the use of the technology in society and the implications for personal privacy.

In this SearchCIO video interview from the MIT Sloan CIO Symposium, Cummings explains why the social and ethical implications of technology should be part of the CIO agenda and how an area of research called value sensitive design could help.

One of your areas of expertise is the social and ethical impacts of technology. Do CIOs need to be cognizant of how smart machines or artificial intelligence may have an impact on ethics in the business?

Mary "Missy" Cummings: Absolutely. Certainly at any management level, everyone at least needs to be aware of the social and ethical implications of any technology they're dealing with.

A great case study that's evolving right now is Google Glass. So, Google Glass has some great applications in my world, like for maintenance of aircraft, maintenance around cars -- anytime you might need to call up a set of procedures or something to be able to see in your peripheral vision while you're working on a task. However, Google took this technology and started running with it, thinking it was just a very cool toy, or some new computation device that everyone would love.

And I genuinely think Google was shocked to find out that there were so many issues [with the technology]. Not only did people refuse to wear the device, but I've been to many places where there are signs on the door saying people aren't allowed in the venue if they're wearing Google Glass, because it violates people's privacy.

This was something that, for Google, just came out of nowhere. But there is research going on in areas of design, for example, value sensitive design, where, theoretically, when you come up with a new design, you'd sit down and come up with all of the peripheral issues -- the social and ethical implications that your technology may touch on -- so that you're at least aware of this. Had Google had this on the radar early on, it would have put them in a better position.

I've never heard that term 'value sensitive design.' Where do you get started with a process like that?

Cummings: As a CIO, it's critical that you take this into account in the early design stages. Even if you've already got a product out there that's been designed, get value sensitive design in there as soon as possible. Better to start addressing these issues early on, before they come at you through litigation or through bad press.

Can you code ethics into a system? Is that possible?

Cummings: There are some researchers who believe you can code a computer to be more ethical than a human -- that a human relies on emotional responses and, under stress, we can make wrong decisions. There are others who argue back that's what it means to be human -- that you make decisions based on your humanness and not necessarily through some cold rationality. I fall into the camp sort of in between [the two], but I lean more toward not coding ethics because we cannot know all external circumstances in the time that we code some device. For example, a killer robot: We cannot possibly code in every single set of conditions that a killer robot might face. And so, unless you can guarantee certainty of outcomes, I am not a fan of coding in ethical decisions.

Does that mean you would never remove the human from a decision-making system?

Cummings: It depends on the decision-making system. I think certainly for the military, where there is a critical loss of life potential, or surgery -- people need to be in the loop for those kinds of decisions. Now, there are other kinds of decisions -- routine decisions at my gas pump, for example -- no, you don't need a human in the loop. But when we're talking about decisions that affect one or more lives or could have a very disastrous outcome if the decision was wrong, this is where you need human oversight.
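
Cummings' routine-versus-critical distinction maps naturally onto a simple design rule. The sketch below is a hypothetical illustration in Python (the class, function names and risk criteria are assumptions made for the example, not anything Cummings or a specific product prescribes): decisions that could affect lives or can't easily be undone are queued for a human reviewer, while routine, low-risk decisions are automated.

    # Hypothetical sketch: route high-stakes decisions to a human, automate the rest.
    from dataclasses import dataclass

    @dataclass
    class Decision:
        description: str
        affects_lives: bool   # could a wrong outcome harm one or more people?
        reversible: bool      # can a wrong call be easily undone?

    def requires_human_oversight(d: Decision) -> bool:
        """Flag decisions whose wrong outcome would be disastrous or irreversible."""
        return d.affects_lives or not d.reversible

    def route(d: Decision) -> str:
        # Keep a person in the loop for critical calls; automate routine ones.
        return "queue_for_human_review" if requires_human_oversight(d) else "automate"

    print(route(Decision("authorize weapons release", affects_lives=True, reversible=False)))
    # -> queue_for_human_review
    print(route(Decision("approve routine fuel purchase", affects_lives=False, reversible=True)))
    # -> automate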

Let us know what you think of the story; email Nicole Laskowski, senior news writer, or find her on Twitter @TT_Nicole.

Join the conversation

1 comment

We do discuss these issues among ourselves on our engineering team, but not as a fundamental business process or standardized methodology. For the most part, we talk about how to make our tools integrate together as effectively as possible while minimizing security implications. Outside of those areas, fortunately, our product isn't one that would put others in jeopardy or compromise ethical behavior. Any technology could be used that way, of course.