
AI models could help companies overcome human bias

Human bias can be coded into AI models, but can AI models help remove human bias? Two companies make the case.

Machine learning algorithms can reinforce human bias, but representatives from HireVue Inc. and Kantar Millward Brown recently argued that they can also help remove human bias from business processes.

HireVue CTO Loren Larsen said the AI models developed by the on-demand video interview platform company not only make the search for strong candidates more efficient, but also make it fairer for applicants.

The technology enables companies to scale the search process, meaning they can "take more chances and just let someone take [an interview] slot," Larsen said at the recent Emotion AI Summit in Boston. By adding machine learning, HireVue is hoping to take things a step further and reduce human bias in the hiring process.

Take the example of how a candidate's looks affect the job search. A HireVue data scientist developed an AI model to determine how much attractiveness might factor into hiring decisions. The model was trained on a public database of images and then was used to score attractiveness on a scale from one to 10.

"It turns out that if you got a seven or higher, you're twice as likely to get hired than if you were a three," Larsen said. That figure might be palatable if attractiveness equated to job performance, but, HireVue's study couldn't find a correlation between the two.

To that end, HireVue has striven to build AI models that can predict a job applicant's potential performance -- without a human in the loop. According to Larsen, the models look for "traditional competencies," such as emotional awareness, negotiation skills, and the ability to collaborate, work on a team and learn.
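A minimal sketch of that design idea, with entirely hypothetical feature names and weights (the article gives no detail on HireVue's actual model): if the feature set is restricted to job-relevant competencies, a signal like appearance simply has no path into the score.

```python
# Hypothetical competency-only scoring sketch -- weights and feature
# names are invented. The design point: appearance is not an input,
# so it cannot influence the prediction.

COMPETENCY_WEIGHTS = {
    "emotional_awareness": 0.25,
    "negotiation": 0.20,
    "collaboration": 0.25,
    "teamwork": 0.15,
    "learning_ability": 0.15,
}

def predicted_performance(features: dict) -> float:
    # Any feature not on the competency whitelist (e.g. an
    # "attractiveness" field) is silently ignored.
    return sum(weight * features.get(name, 0.0)
               for name, weight in COMPETENCY_WEIGHTS.items())

candidate = {
    "emotional_awareness": 0.9,
    "negotiation": 0.6,
    "collaboration": 0.8,
    "attractiveness": 1.0,   # present in the data, but never used
    "teamwork": 0.7,
    "learning_ability": 0.85,
}
score = predicted_performance(candidate)
```

Whether a real model achieves this depends on its training data, not just its feature list -- competency scores derived from video could still encode appearance indirectly, which is why the correlation check above matters.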

HireVue's AI models consider not only what's being said by job candidates, but how it's being said. They're trained to factor in facial expressions and emotion -- technology powered by Affectiva, a software company spun out of the MIT Media Lab and the host of the conference.

AI models in advertising

At Kantar Millward Brown, a market research company based in London, Affectiva's software is helping make the case for more inclusive commercials. The company specializes in "advertising development work": it helps clients understand how their ads are likely to be received by viewers and then finds ways to improve them.

"Some of that is done in what this audience may think of as a relatively old-school way: We show people the ads and ask them questions," said Graham Page, executive vice president and head of global research solutions, at the summit.

Some of the work is done in a decidedly modern way. The firm films participants in a focus group as they watch an advertisement, and then it analyzes facial expressions and other physiological data using Affectiva's software "to understand the emotional response to the ad as it plays and what the key moments are that really resonated with people," Page said.

For example, an analysis of advertisements done for Unilever, one of Kantar Millward Brown's biggest clients, found that the ads categorized as "more progressive," or more diverse, were 25% more effective than advertisements categorized as "less progressive," or more stereotypical. And ads categorized as the least progressive were twice as likely to achieve the lowest scores on effectiveness, according to Page.

He described this study and others with similar findings as "instructive" in that they help build a case for other businesses that "things like progressive advertisements are not only ethically the right thing to do, they're also good for business."

'IT departments suck'

IT's reputation is still dubious, at least according to the VC panelists at the conference. When the moderator asked what advice the VCs could provide startups on how to sell to corporations, Krishna Gupta from Romulus Capital didn't mince words: "IT departments suck." He described integration as a rate limiter for many companies.

Janet Bannister, partner at Real Ventures in Montreal, suggested startups fret less about selling against other startups and more about selling against incumbents. She said large companies might understand that a startup can solve a problem better than the technology they're currently using, but see the startup's future as uncertain. "Having a strong use case, other customers using the product and great investors that will speak on the company's behalf" may help assuage a large company's concerns, she said.

Say what?!?

"Humans are unique. We're awesome. Let's get beyond that point and look at the attributes that we need in an artificial intelligence system that would enable us to trust it with more and more functionality. I think it's a continuum. Just like ethics is a continuum. Morality is a continuum. ... And I think we need to invite our machines into that continuum, that struggle, that wrestle that we're in." -- Babak Hodjat, founder and chief scientist, Sentient Technology

"It's kind of a tough time to think about how we encourage people to trust AI. And that's particularly true given that some of the biggest businesses that use AI, particularly in the social sharing space, are at the absolute center of a massive crisis of trust." -- Graham Page, executive vice president and head of global research solutions, Kantar Millward Brown
