Although 2017 has proved to be the year of artificial intelligence, the path to implementing AI systems in the enterprise isn't devoid of challenges, according to Ruchir Puri, chief architect at IBM Watson and an IBM Fellow. Puri spoke with SearchCIO at the recent Platform Strategy Summit hosted by the MIT Initiative on the Digital Economy. In this video, he explains the top three challenges associated with implementing AI systems and delineates why it is imperative to train AI algorithms to answer "why" questions and not just the "what" ones. He also explains why AI systems should be trained to learn from smaller sets of data and emphasizes the importance of keeping humans in the loop as we move to an AI-powered world.
Read excerpts of the interview below, or click on the player to hear the interview in its entirety.
What are the top three challenges associated with implementing AI systems in the enterprise?
Ruchir Puri: AI has made a lot of progress in the last five years, especially with respect to deep learning at the platform level. But as we move forward, I'll illustrate the first big challenge facing AI with an example: Let's say my wife and I decide to buy a house, and we stumble upon a house that we love. I then go to a banker to apply for a mortgage. Two weeks later the banker calls me and tells me, "Mr. Puri, your mortgage has been denied." What will be my first response? "Why?" And his response back to me, which the machine gave back to him, will be, "Because the machine learning vector lambda was less than 0.5."
Now, it's funny because my immediate reaction will be, "Why?" And this is where machine learning and deep learning techniques are lacking: They tell you "that's a cat, that's a dog, that's a chair," but they don't tell you the "why." Interestingly, that may be okay for me as a consumer, but enterprises, on the other hand, absolutely want to know "why."
With any action taken in the enterprise, it is important to know what is backing up that action, because if something goes wrong, they have to trace it back to who made that decision. This is one of the biggest challenges we face as AI solution providers and as a community as we move forward: interpretability of the solutions and explainability of AI. This is what I will call the "why" engine. You've got to have "why" engines in the enterprise. Otherwise, the penetration will be only so much, because enterprises do not tolerate solutions that don't tell them "why."
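To make the "why engine" idea concrete, here is a minimal sketch in Python. It assumes a simple linear scoring model with hypothetical feature names and weights (not IBM Watson's actual method): because the score is a weighted sum, each feature's signed contribution can be reported alongside the decision, turning "lambda was less than 0.5" into reasons a banker could relay.

```python
# Hypothetical weights for a toy mortgage-scoring model (illustration only).
WEIGHTS = {"credit_score": 0.6, "debt_to_income": -0.9, "years_employed": 0.3}
THRESHOLD = 0.5  # the "lambda less than 0.5" cutoff from the example

def score(applicant):
    """Weighted sum of (normalized) applicant features."""
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant):
    """Return the decision plus each feature's signed contribution,
    ranked from most negative to most positive -- a toy 'why' engine
    for a linear model."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    decision = "approved" if score(applicant) >= THRESHOLD else "denied"
    ranked = sorted(contributions.items(), key=lambda kv: kv[1])
    return decision, ranked

applicant = {"credit_score": 0.7, "debt_to_income": 0.8, "years_employed": 0.2}
decision, reasons = explain(applicant)
print(decision)  # "denied": the score of -0.24 falls below the threshold
for feature, impact in reasons:
    print(f"{feature}: {impact:+.2f}")  # debt_to_income is the main reason
```

For nonlinear models the contributions are harder to extract, which is exactly the gap Puri describes; but the output shape, a decision plus ranked reasons, is what an enterprise "why" engine needs to produce.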
Secondly, enterprises have vast amounts of data, but they don't have enough labeled data to actually guide and train the AI. For example, things like ImageNet take 750 images per classification to really learn that classification: 750 pictures of a cat to say it's a cat. When I was teaching my daughter "that's a cat," at best I had two books. If I had shown her 750 images of cats, she wouldn't be able to recognize a cat to this day, because it would have confused her.
We need to progress to a point where we are not just brute-forcing the problem with massive amounts of training data, but learning from a smaller amount of data, because as humans we certainly learn from a smaller amount of data.
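One common way to learn new categories from a handful of examples, rather than 750 per class, is to reuse representations from a previously trained model and classify by nearest class centroid. The sketch below is a toy illustration of that idea: the two-dimensional vectors stand in for embeddings from a hypothetical pretrained network, and each class is learned from just two labeled examples.

```python
import math

def centroid(vectors):
    """Average a list of equal-length vectors component-wise."""
    return [sum(dim) / len(vectors) for dim in zip(*vectors)]

# Pretend these 2-D vectors are embeddings produced by a pretrained
# network; only two labeled examples per class are available.
support = {
    "cat": [[0.9, 0.1], [0.8, 0.2]],
    "dog": [[0.1, 0.9], [0.2, 0.8]],
}
centroids = {label: centroid(vecs) for label, vecs in support.items()}

def classify(vector):
    """Assign the label whose class centroid is closest (Euclidean)."""
    return min(centroids, key=lambda label: math.dist(vector, centroids[label]))

print(classify([0.85, 0.15]))  # nearest centroid is "cat"
```

The heavy lifting happens in the pretrained embedding; once examples of the same class cluster together, a few labels per class can be enough, which is the kind of data efficiency Puri is calling for.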
The third one, I would say, is keeping humans in the loop, which is incredibly important for enterprises.