Onus is on CIOs to address limitations of artificial intelligence

Recognizing the limitations of artificial intelligence is step No. 1 for CIOs aiming to reap its benefits, according to AI luminaries at the recent EmTech Digital conference.

It's easy to get caught up in the hype around how advances in high-performance computing and new techniques for implementing deep neural networks will lead to better artificial intelligence for enterprise use cases. These advances will certainly help. But CIOs also need to keep an eye on other limitations of artificial intelligence and the underlying challenges in implementing AI in the enterprise.

At MIT Technology Review's EmTech Digital conference in San Francisco, experts weighed in on some of these fundamental challenges and some early research for addressing them. The limitations of artificial intelligence technology at present include the large amounts of expensive human labor required to train it, AI's lack of common sense, the limited ability to generalize learning and the different ways biases get baked into algorithms.

Autonomy is an AI limitation

Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, acknowledged that engineers have made tremendous advances in artificial intelligence, citing AI's recent triumphs over humans in chess, Jeopardy! and the game of Go. But IT leaders must keep in mind that these advances were achieved against unambiguous rule sets and game boards that, in the case of chess, are literally black and white. These triumphs don't mean artificial intelligence has surpassed human intelligence.

Etzioni said a good framework for thinking about the differences between human and artificial intelligence plots autonomy and intelligence along two separate dimensions. Teenagers drinking on a Friday night are autonomous, but not very intelligent. Google's AlphaGo is intelligent, but not very autonomous: a human has to push a button to start a new match and end it, because the system is not even aware that it won.

In an enterprise setting, humans will need to step in to manage processes requiring autonomy. This could be critical when artificial intelligence is asked to respond to unexpected conditions.

High labor cost

AI systems today require a considerable amount of human labor to create categories, organize input data and label that data. For example, building a better algorithm to distinguish cats from dogs requires someone to create a taxonomy of terms for different species of cats and dogs. Another team must gather and organize a library of pictures showing different kinds of cats and dogs. Then, other people must label these pictures before the process of machine learning can begin.
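As a rough sketch of that manual pipeline, the snippet below shows how a hand-built taxonomy, gathered images and human-supplied labels might be organized before training starts. Every name and label here is an illustrative assumption, not part of any real system.

```python
# Hypothetical sketch of the manual pipeline described above: a taxonomy is
# defined first, then human annotators attach labels to gathered images
# before any model training can begin. All names are illustrative.
from dataclasses import dataclass

# Taxonomy of terms, created by hand before any data is labeled
TAXONOMY = {
    "cat": ["siamese", "persian", "tabby"],
    "dog": ["beagle", "labrador", "poodle"],
}

@dataclass
class LabeledImage:
    path: str     # where the gathered image lives
    species: str  # "cat" or "dog", chosen by a human annotator
    breed: str    # finer-grained label from the taxonomy

def validate(example: LabeledImage) -> bool:
    """Reject labels that fall outside the agreed taxonomy."""
    return example.breed in TAXONOMY.get(example.species, [])

dataset = [
    LabeledImage("img_001.jpg", "cat", "tabby"),
    LabeledImage("img_002.jpg", "dog", "beagle"),
]
assert all(validate(ex) for ex in dataset)  # only now can training begin
```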

"Even today, 99% of machine learning is still human work because of these inputs," Etzioni said. The success of Google's AlphaGo in beating one human was due to the effort of hundreds of highly skilled experts working for years. In the short run, CIOs can help to reduce these bottlenecks by adopting tools to automate some of the human processes.

In the long run, artificial intelligence will require developing new approaches to learning common sense, which is a key aspect of autonomy. For example, a human nurse might think twice when a medical record states a 5-year-old's weight as 500 pounds. In contrast, an AI system might mindlessly recommend a prescription based on this weight that could be lethal.
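A minimal sketch of the plausibility check a nurse applies instinctively might look like the following. The weight ranges are made-up assumptions for illustration, not clinical guidance.

```python
# Minimal sketch of a common-sense plausibility check on a medical record.
# The bounds here are invented for illustration, not clinical guidance.
PLAUSIBLE_WEIGHT_LBS = {  # assumed rough ranges by age in years
    5: (25, 90),
}

def weight_is_plausible(age_years: int, weight_lbs: float) -> bool:
    low, high = PLAUSIBLE_WEIGHT_LBS.get(age_years, (1, 1000))
    return low <= weight_lbs <= high

# A 500 lb entry for a 5-year-old should halt automated dosing, not feed it.
if not weight_is_plausible(age_years=5, weight_lbs=500):
    print("Flag record for human review before prescribing")
```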

Etzioni is working on a basic framework for implementing common sense that merges machine learning, natural language processing and crowdsourcing. This research is still in its early days, and considerably more work will be required to approach the common sense of children.

New techniques required to generalize knowledge

One way to address the limitations of artificial intelligence would be to develop new approaches for generalizing AI knowledge, said Brenden Lake, assistant professor of psychology and data science at New York University. Current approaches to implementing AI focus on solving a particular optimization problem, but the resulting algorithms are brittle and break easily when the environment changes.

For example, a system that learns how to play a computer game to maximize points could not apply the same model to completing the game with the minimum number of points. Building a more general framework will require finding a way to combine cognitive science about how people learn with traditional AI research.
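The toy snippet below, my illustration rather than Lake's, shows that brittleness in miniature: a policy that has only ever learned to pick the highest-scoring action keeps doing so even when the stated goal is reversed.

```python
# Toy illustration of objective brittleness (an assumption for this article,
# not an example from the talk): the policy only ever learned argmax, so
# reversing the goal changes nothing about its behavior.
q_values = {"jump": 10.0, "duck": 2.0, "wait": 0.5}  # learned to MAXIMIZE points

def act(goal: str) -> str:
    # The stated goal is ignored: "prefer the highest value" is baked in.
    return max(q_values, key=q_values.get)

print(act("maximize points"))  # jump -- the objective it was trained on
print(act("minimize points"))  # still jump -- it cannot invert its objective
```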

Brenden Lake, assistant professor of psychology and data science, NYUBrenden Lake

"In many cases, people can construct more knowledge from less data," Lake said. Humans are good at seeing a pattern of components in something like a Segway scooter, and then envisioning how these might be put together in new ways. Lake has been doing some preliminary research on compositional learning to teach machines to see underlying patterns by understanding components. For example, this approach looks at showing machines how characters are drawn to identify a pattern, rather than the finished image. This allows these systems recognize variations in a set of letters in the same way as humans.

Going forward, generalized learning will also require advances in intuitive physics, intuitive psychology and intrinsic motivation. Intuitive physics relates to how babies understand the basic principles of how the world works just by moving around. Intuitive psychology is the ability to identify what is motivating another person, like recognizing that kids playing ball might carelessly run in front of a car. Intrinsic motivation is the desire to learn just for fun, not because a human programmed the system to solve a problem.

Creating data sheets for algorithms

CIOs also need to think through some of the governance issues around using machine learning algorithms to make decisions. Timnit Gebru, a postdoctoral researcher at Microsoft Research, said that bias can be accidentally woven into algorithms used for decisions like hiring, loan approval and even criminal sentencing guidelines. Various laws prohibit the use of race, gender and other factors in making these kinds of decisions. But bias can creep in even when this kind of data is not explicitly considered.

For example, many natural language processing techniques are built up from fundamental rule sets that encode associations like "man is to doctor as woman is to nurse." More sophisticated systems might correlate race or gender with attendance at a particular school that has a high proportion of whites, African Americans or women, which could indirectly bias the algorithms.
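As a hedged illustration of how such associations surface, the snippet below runs the classic analogy arithmetic over tiny hand-made vectors. Real embeddings are learned from large text corpora, but the mechanics are the same.

```python
# Toy demonstration of analogy arithmetic in word embeddings and the bias it
# can encode. These 3-dimensional vectors are hand-made for illustration.
import numpy as np

emb = {
    "man":    np.array([1.0, 0.0, 0.2]),
    "woman":  np.array([0.0, 1.0, 0.2]),
    "doctor": np.array([1.0, 0.1, 0.9]),
    "nurse":  np.array([0.1, 1.0, 0.9]),
    "bread":  np.array([0.5, 0.5, 0.0]),
}

def analogy(a: str, b: str, c: str) -> str:
    """Solve 'a is to b as c is to ?' by nearest neighbor to b - a + c."""
    target = emb[b] - emb[a] + emb[c]
    candidates = {w: v for w, v in emb.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: float(
        candidates[w] @ target
        / (np.linalg.norm(candidates[w]) * np.linalg.norm(target))))

print(analogy("man", "doctor", "woman"))  # nurse -- the gendered association
```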

Another of the limitations of artificial intelligence relates to the data sets used to train AI systems. For example, data sets used to train machine vision systems to recognize gender might include far more lighter-skinned than darker-skinned people. The resulting algorithms would then be deficient in classifying the gender of darker-skinned people.
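One way such a skew shows up is in a per-group accuracy breakdown, sketched below with invented results. An aggregate accuracy number would hide exactly the gap this kind of audit exposes.

```python
# Hedged sketch of a per-group accuracy audit. The results and group names
# are invented for illustration, not drawn from any real evaluation.
from collections import defaultdict

predictions = [  # (skin_tone_group, prediction_correct?) on a hypothetical test set
    ("lighter", True), ("lighter", True), ("lighter", True), ("lighter", False),
    ("darker", True), ("darker", False), ("darker", False),
]

totals, correct = defaultdict(int), defaultdict(int)
for group, ok in predictions:
    totals[group] += 1
    correct[group] += ok

for group in totals:
    print(f"{group}: {correct[group] / totals[group]:.0%} accuracy")
# Reporting only overall accuracy would mask the disparity printed here.
```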

"My worry is that, as AI researchers, we tend to sit in a corner, work on a data set and call it a day," Gebru said. "We cannot ignore the social and structural problems that exist."

One thing that might help is the development of technical data sheets describing the characteristics of data sets and the algorithms built on them. This kind of detailed information has long helped electrical engineers understand the limitations of their component choices in various real-world scenarios. Data sheets for data sets would similarly describe how a particular data set or algorithm could lead to bias or inaccuracy.
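A data sheet for a data set might record fields like the ones below. This structure is an assumption loosely modeled on the idea Gebru described, not a published specification.

```python
# Illustrative sketch of what a "data sheet for a data set" might record.
# All field names and values are assumptions made for this example.
from dataclasses import dataclass, field

@dataclass
class Datasheet:
    name: str
    collection_method: str                             # how the data was gathered
    demographics: dict = field(default_factory=dict)   # composition by group
    known_gaps: list = field(default_factory=list)     # documented blind spots
    intended_use: str = ""
    not_suitable_for: list = field(default_factory=list)

sheet = Datasheet(
    name="face-attributes-v1",
    collection_method="scraped from public photo sites",
    demographics={"lighter-skinned": 0.8, "darker-skinned": 0.2},
    known_gaps=["underrepresents darker skin tones"],
    intended_use="research benchmarking",
    not_suitable_for=["production gender classification"],
)
print(sheet.known_gaps)  # consumers of the data see its limits up front
```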
