
Beware of unwitting bias in AI algorithms, experts say

Companies need to know how bias can unwittingly creep into recommendations from AI algorithms, according to panelists at the recent AI and the Future of Work event at MIT.

Sophie Vandebroek, vice president of emerging technology partnerships at IBM, suggested companies start by scrutinizing their training data. “AI algorithms are only as good as the data used to train the system,” she said.

For example, an AI algorithm trained on data from a company that has more men than women software engineers might conclude that men are better software engineers than women, “which of course we know has nothing to do with the job and is irrelevant to the hiring decision,” Vandebroek said.
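To see how that can happen, consider a minimal, hypothetical Python sketch (not drawn from the panel). The data is synthetic and the variable names, skew level and scikit-learn model are illustrative assumptions; the point is only that a model fit to historically skewed hiring records can pick up gender as a "predictive" signal.

# Hypothetical illustration: a classifier trained on skewed historical hiring
# data can learn gender as a signal, even though it is irrelevant to the job.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic workforce history (assumed): 80% men, and past hiring decisions
# favored men, so the "hired" label correlates with gender for historical
# reasons only.
is_male = rng.random(n) < 0.8
skill = rng.normal(0, 1, n)                        # the genuinely relevant feature
hired = (skill + 0.8 * is_male + rng.normal(0, 0.5, n)) > 0.5

X = np.column_stack([skill, is_male.astype(float)])
model = LogisticRegression().fit(X, hired)

print("learned weights [skill, is_male]:", model.coef_[0])
# A clearly nonzero weight on is_male shows the model has absorbed the
# historical skew: it rewards being male, which says nothing about ability.

Nothing in the job changed between the two groups in this toy setup; only the historical labels did, which is exactly the kind of skew Vandebroek urged companies to look for before training.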

Checking the training data for bias is necessary but not sufficient for ridding AI algorithms of bias, said Gabi Zijderveld, chief marketing officer and head of product strategy at Affectiva.
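One common complementary step, not specific to the panel, is to audit the model's predictions as well as its data, for example by comparing selection rates across groups. The sketch below is a generic demographic-parity-style check; the predictions and group labels are made up for illustration.

# Hypothetical sketch: audit model predictions, not just training data, by
# comparing the rate of positive outcomes for each group.
import numpy as np

def selection_rates(predictions: np.ndarray, group: np.ndarray) -> dict:
    """Fraction of positive predictions for each group value."""
    return {g: float(predictions[group == g].mean()) for g in np.unique(group)}

# Illustrative predictions and group labels (assumed, not real data).
preds = np.array([1, 1, 1, 1, 0, 0, 1, 0, 1, 0])
group = np.array(["men", "men", "men", "men", "men",
                  "women", "women", "women", "women", "women"])

print(selection_rates(preds, group))
# {'men': 0.8, 'women': 0.4} -> a gap like this is a prompt to investigate,
# though a clean metric alone does not prove the system is unbiased.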

Bias can also be subconsciously coded into an algorithm by developers. “We build what we know,” Zijderveld said. She recommended that companies strive for diversity when putting together their teams, as they can collectively act as an anti-bias failsafe. Besides, she said, diversity also “fosters creativity and innovation.”

CIOs can also help set the tone by investing in a strong culture, according to Nichole Jordan, managing partner of markets, clients and industry at advisory and accounting firm Grant Thornton LLP. “You’re bringing together a lot of individuals with different backgrounds — behavioral scientists, data scientists, sociologists — to work together,” she said. “You’ve got to be clear on your culture.” It’s important that a company’s ethics, values and acceptable behaviors are spelled out to employees.
