Marc Vael wonders whether the artificial intelligence running in his company is doing what it's intended to do.
"In the back of my mind, I'm always asking, 'Are we sure about what it's saying?'"
So Vael, CISO at graphics arts company Esko, directed his teams to develop and implement safeguards that range from testing procedures to procurement guidelines to prove that the AI is, in fact, delivering valid outcomes.
"We always take that step back to make sure we're confident [in the AI systems]," said Vael, a former board director with IT governance association ISACA.
Vael has teams test the AI systems by inputting fake data and studying the results. The teams also review algorithms to ensure quality, manage risk and reduce the chance of biases being built into the equations. They've adjusted and extended traditional controls to guarantee that the data being fed into the AI systems, as well as the intelligence being produced, has adequate security and privacy protections. Vael also asks vendors about the AI capabilities -- and AI controls -- baked into their products.
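The fake-data testing Vael describes can be pictured as a small harness: label synthetic records with the outcome the team expects, run them through the model and collect any mismatches. This is a minimal sketch, not Esko's actual procedure; the `credit_score_model` stand-in and its threshold are hypothetical.

```python
def credit_score_model(record):
    """Hypothetical model under test: approves applicants with
    income at or above a threshold, ignoring other fields."""
    return "approve" if record["income"] >= 40_000 else "deny"

def run_synthetic_checks(model, cases):
    """Feed labeled fake records through the model and collect
    (record, expected, actual) tuples for every mismatch."""
    failures = []
    for record, expected in cases:
        actual = model(record)
        if actual != expected:
            failures.append((record, expected, actual))
    return failures

# Fabricated records paired with the outcome the team expects.
synthetic_cases = [
    ({"income": 90_000, "zip": "10001"}, "approve"),
    ({"income": 20_000, "zip": "10001"}, "deny"),
    # Identical incomes, different zip codes: outcomes should match,
    # otherwise location may be acting as a proxy for something else.
    ({"income": 55_000, "zip": "60601"}, "approve"),
    ({"income": 55_000, "zip": "94110"}, "approve"),
]

failures = run_synthetic_checks(credit_score_model, synthetic_cases)
print(f"{len(failures)} mismatches out of {len(synthetic_cases)} cases")
```

The value of such a harness is that it turns "are we sure about what it's saying?" into a repeatable check that can run after every model or data change.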
Esko's precautions are completely on point, according to experts. As organizations adopt more artificial intelligence technologies, they will need to simultaneously develop strong programs to govern them. An AI governance program should address how AI should be used, the data it uses and how to validate its algorithms and results, as well as how and when to take corrective measures.
"Organizations need this due diligence so the algorithms don't have any unintended consequences," said Ritu Jyoti, program vice president of artificial intelligence strategies with IDC's software market research and advisory practice.
Jyoti and other experts said a good AI governance program prevents biases from being built into and perpetuated by the algorithms. It can also prevent inaccurate outcomes due to the input of faulty data and inappropriate or unethical uses of AI-produced insights -- all of which can harm an organization's economic and reputational value.
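One concrete check a governance program can run against such biases is a group-outcome comparison, such as demographic parity. The sketch below is an illustrative assumption, not a method prescribed by the experts quoted here; the group data is fabricated, and the 0.8 cutoff reflects the conventional "four-fifths rule" used in fairness reviews.

```python
def approval_rate(decisions):
    """Fraction of decisions that are approvals."""
    return sum(1 for d in decisions if d == "approve") / len(decisions)

def parity_ratio(group_a, group_b):
    """Ratio of the lower approval rate to the higher one.
    Under the common four-fifths rule, values below 0.8 are
    treated as a signal to review the algorithm for bias."""
    rates = sorted((approval_rate(group_a), approval_rate(group_b)))
    return rates[0] / rates[1] if rates[1] else 1.0

# Illustrative decision logs for two groups of applicants.
group_a = ["approve", "approve", "deny", "approve"]  # 75% approved
group_b = ["approve", "deny", "deny", "approve"]     # 50% approved

ratio = parity_ratio(group_a, group_b)
print(f"parity ratio: {ratio:.2f}")  # 0.50 / 0.75, below the 0.8 flag
```

A governance program would run a check like this on real decision logs on a schedule, so a drifting model is caught before it damages the organization's reputation.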
Still, Jyoti said research shows some 90% of organizations don't even have an AI strategy, let alone an AI governance program.
That is expected to change, however. In its FutureScape report, "Worldwide CIO Agenda 2019 Predictions," IDC concluded that "by 2022, 65% of enterprises will task CIOs to transform and modernize governance policies to seize the opportunities and confront new risks posed by AI, ML [machine learning], and data privacy and ethics."
AI governance extends beyond the CIO
Jyoti said organizations should not see this as an IT issue and should not assign ownership of AI governance solely to the CIO.
"It should be driven by the business and driven by the business requirements of the company," she said.
Others offered similar opinions, adding that executives should not develop AI governance as a stand-alone program.
"Governance for AI is likely to fit into a broader governance structure for the enterprise, especially with respect to data," said Geoffrey Parker, a professor of engineering at Dartmouth College, where he also serves as director of the master of engineering management program. Who owns data, how widely it can be shared and what rights data providers have are all questions that apply to AI, he said, but AI also has unique issues that need to be addressed separately.
"Because of the training data used and the assumptions under which AI is deployed, there can be unintended consequences. For example, AI at a major technology firm was discovered to systematically discriminate against women in the hiring process, which was never the intent," said Parker, who is additionally a research fellow at MIT's Initiative on the Digital Economy and co-author of the book Platform Revolution: How Networked Markets Are Transforming the Economy -- and How to Make Them Work for You.
Indeed, to address the unique governance issues raised by AI, Parker said the IT industry should develop "standards-based organizations to provide templates for governance principles at the enterprise level." From there, he said, each enterprise can adopt and adapt those industry-level templates to fit its own specific circumstances.
"At a national level, some broad principles that lay out guiding values could be of significant value," he added.
Building trust, AI standards
There is movement on this front. Professional services firm KPMG, for example, recently announced its "AI In Control" framework, with methods, tools and assessments meant to help organizations maximize value from their AI technologies, while also ensuring algorithm integrity, fairness and agility.
"The word here is 'trust.' You need to be able to trust the outcome of the algorithms. But, moreover, your partners, other organizations, [and the end users and customers] need to trust the outcome of your algorithms," said Sander Klous, data and analytics leader with KPMG and co-author of Building Trust in a Smart Society: Managing in a Modular, Agile and Decentralized Way.
Organizations don't need to explain everything behind the AI curtain in their governance program to build that trust, Klous said. In other words, AI governance doesn't have to create full transparency into the algorithms. But it does have to create accountability, authority and approval processes using controls and audits, as well as peer and independent reviews of algorithms.
"It's about having an expert look into it, that you have all the controls in place," Klous said, adding that he defines governance as "a set of well-defined responsibilities to ensure a proper outcome."
Klous said an effective AI governance program should include an examination of the algorithms used within an organization and the algorithms' impact on decisions and outcomes, as well as appropriate controls that vary based on the level of those impacts. Organizations should establish these checks and reviews both for homegrown AI and for the AI built into the products they buy from vendors.
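Klous' point that controls should vary with an algorithm's impact can be sketched as a tiered policy table: assess each algorithm's impact, then check which required controls it still lacks. The tier names and control lists below are illustrative assumptions, not KPMG's AI In Control framework.

```python
# Tiered controls: higher-impact algorithms require more checks.
# Tier names and the controls listed are illustrative only.
REQUIRED_CONTROLS = {
    "low": ["named owner", "annual review"],
    "medium": ["named owner", "annual review",
               "peer review of the algorithm", "input data quality checks"],
    "high": ["named owner", "quarterly review",
             "peer review of the algorithm", "input data quality checks",
             "independent audit", "executive approval sign-off"],
}

def missing_controls(impact, controls_in_place):
    """Return the required controls an algorithm still lacks
    for its assessed impact tier."""
    in_place = set(controls_in_place)
    return [c for c in REQUIRED_CONTROLS[impact] if c not in in_place]

# Example: a vendor-supplied model assessed as high impact,
# with only two of the required controls in place so far.
gaps = missing_controls("high", ["named owner", "independent audit"])
print(gaps)
```

The same table works for purchased products: the vendor questions Vael asks map directly to whether each required control exists for the AI baked into the product.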
C-suite shapes governance as AI activities evolve
Data Conversion Laboratory Inc. (DCL), which organizes, converts and moves content for its client organizations, has adopted a broad array of AI capabilities, including natural language processing, in the past several years, said CIO Tammy Bilitzky. At the same time, DCL "has been evolving its policies and procedures to ensure appropriate governance of all AI activities, directed by a formal governance board that includes senior management leadership."
As CIO, Bilitzky said, she coordinates the AI governance program with participation from the CEO, COO and AI development managers, and in collaboration with the heads of key business units.
The objective, she said, "is to oversee all AI activities in the firm and ensure standard oversight of the AI program as a technology effort, with the added focus on aspects unique to the thriving expansion of AI and its potential impact on other facets within our organization."
DCL's rigorous AI governance program touches on vision and roadmaps, prioritization of activities, resourcing and training, intellectual property, return on investment, accountability, privacy and security, and adaptability.
Challenges remain to strong AI governance
Parker said enterprise leaders will face challenges -- at multiple levels -- as they move to develop and implement AI governance.
"At the initial implementation, there must be a process that first establishes governance," he said. The firm's business strategy must be reflected in the AI governance guidelines, and the guidelines must be there to mitigate AI risks and maximize business benefits.
One big concern is that implementation will be confined to the IT organization.
"Once established, the guidelines risk going obsolete as either technology changes or the business conditions change. If AI governance is seen as simply a compliance effort, then the risk of irrelevance increases," he said.
Others agreed, and Jyoti said organizations can counteract that risk by making AI governance an ongoing exercise.
"We recommend that the policy doesn't just exist," she said, "but is one that's constantly refined and adjusted based on learning."