
AI systems: The top three business implementation challenges

Although 2017 has proved to be the year of artificial intelligence, the path to implementing AI systems in the enterprise isn't devoid of challenges, according to Ruchir Puri, chief architect at IBM Watson and an IBM Fellow. Puri spoke with SearchCIO at the recent Platform Strategy Summit hosted by the MIT Initiative on the Digital Economy. In this video, he explains the top three challenges associated with implementing AI systems and delineates why it is imperative to train AI algorithms to answer "why" questions and not just the "what" ones. He also explains why AI systems should be trained to learn from smaller sets of data and emphasizes the importance of keeping humans in the loop as we move to an AI-powered world.

Read excerpts of the interview below, or click on the player to hear the interview in its entirety.

What are the top three challenges associated with implementing AI systems in the enterprise?


Ruchir Puri: AI has made a lot of progress in the last five years, especially with respect to deep learning at the platform level. But as we move forward, the first big challenge facing AI I will illustrate with an example: Let's say my wife and I decide to buy a house, and we stumble upon a house that we love. I then go to a banker for a mortgage application. Two weeks later the banker calls me and tells me, "Mr. Puri, your mortgage has been denied." What will be my first response? "Why?" And his response back to me, which the machine will give back to him, will be, "Because the machine learning vector lambda was less than 0.5."

Now, it's funny, because my immediate reaction will be, "Why?" This is where machine learning and deep learning techniques are lacking: They tell you "that's a cat, that's a dog, that's a chair," but they don't tell you the "why." Interestingly, that may be OK for me as a consumer, but enterprises, on the other hand, absolutely want to know "why."

With any action taken in the enterprise, it is important to know what is backing up that action, because if something goes wrong, they have to trace it back to who made that decision. This is one of the biggest challenges we face as AI solution providers and as a community as we move forward: interpretability of the solutions and explainability of AI. This is what I will call the "why" engine. You've got to have "why" engines in the enterprise. Otherwise, the penetration will be only so much, because enterprises do not tolerate solutions which don't tell them "why."
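Puri's "why" engine can be sketched in miniature: rather than returning only a verdict (or an opaque score like "lambda was less than 0.5"), an interpretable model also reports which inputs drove the decision. The feature names, weights, and 0.5 threshold below are entirely hypothetical, invented to echo the mortgage example.

```python
# Minimal sketch of a "why" engine: a linear credit-scoring model that
# reports the features behind its decision, not just the verdict.
# All feature names, weights, and the 0.5 threshold are illustrative.

WEIGHTS = {"income": 0.4, "credit_history": 0.35, "debt_ratio": -0.45}

def decide_with_reasons(applicant, threshold=0.5):
    # Each feature's contribution is its weight times its normalized value.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    approved = score >= threshold
    # The "why": features ranked from most harmful to most helpful.
    reasons = sorted(contributions, key=contributions.get)
    return approved, score, reasons

applicant = {"income": 0.6, "credit_history": 0.5, "debt_ratio": 0.9}
approved, score, reasons = decide_with_reasons(applicant)
# A denial now comes with its leading cause, e.g. a high debt ratio,
# instead of an unexplained score.
print(approved, round(score, 3), reasons[0])
```

A linear model is the simplest case; for deep models the same idea motivates post hoc explanation techniques, but the principle is identical: surface the contributing factors alongside the decision.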

Secondly, enterprises have an enormous amount of data, but they don't have enough labeled data to actually guide and train the AI. For example, things like ImageNet take 750 images per class to really learn that class: 750 pictures of a cat to say it's a cat. When I was teaching my daughter that's a cat, at best I had two books. If I had ever shown her 750 images of cats, to this day she wouldn't be able to recognize a cat, because it would have confused her.

We need to progress to a point where we are not just brute-forcing the problem with massive amounts of training data, but learning from smaller amounts of data, because as humans we certainly learn from smaller amounts of data.
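One simple way to see what "learning from a smaller amount of data" can look like is a nearest-centroid classifier, which generalizes from just two labeled examples per class instead of hundreds. The 2-D feature vectors and class names below are invented purely for this sketch.

```python
# Toy sketch of learning from a handful of examples: a nearest-centroid
# classifier trained on two samples per class rather than 750.
# The 2-D feature vectors are invented for illustration only.

def centroid(points):
    # Mean of a list of equal-length vectors, component by component.
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(examples):
    # examples: {label: [feature_vector, ...]} with very few vectors each.
    return {label: centroid(vecs) for label, vecs in examples.items()}

def predict(model, x):
    # Assign the label whose centroid is closest in squared distance.
    def dist2(label):
        return sum((a - b) ** 2 for a, b in zip(model[label], x))
    return min(model, key=dist2)

examples = {"cat": [(0.9, 0.1), (0.8, 0.2)],
            "dog": [(0.1, 0.9), (0.2, 0.8)]}
model = train(examples)
print(predict(model, (0.85, 0.15)))  # near the "cat" centroid
```

Real few-shot learning relies on far richer techniques (transfer learning, metric learning), but the contrast with data-hungry training is the point Puri is making.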

The third one, I would say, is keeping humans in the loop, which is incredibly important for enterprises. 


Join the conversation



What are some of the challenges that you have encountered when implementing AI systems in your organization?
Bias in algorithms is the biggest challenge in AI, especially confirmation bias.  Bias can play a different role in a small population than in a big data population. 

If an AI conclusion is challenged, as it often should be, there are two places to look: the data and the algorithms. But who selects the data that is included and the data that is excluded? Is it not the algorithms that decide what data the algorithms will use?

P&C insurance has a long history of actuarial algorithms, and of actuarial successes and errors, that can provide many lessons. Whether in or outside AI, those algorithms need transparency so they can be checked. One wonders if the lack of transparency in algorithms is really due to proprietary secrets or due to the owners lacking confidence in the quality of their own algorithms.

Cathy O'Neil has a very good book on that topic, "Weapons of Math Destruction," that I love. I agree bias is a very big problem, given that data is what drives AI and there are inherent biases in data collection and even sampling.

This year we will implement an AI system in a manufacturing operation, and the obstacle we face is that the employer asks to see, before a disruptive item occurs in the production sequence, the reason for the decision in his own language and not in technical language. This will undoubtedly delay the implementation of the system, which had already passed the simulation and learning stage.