Deval Pandya, lead data scientist at Shell, said he believes working with subject-matter experts is among the most important criteria for success in AI. People with in-depth knowledge of the subject matter provide the context and nuances that are hard for deep learning tools to tease apart on their own. But he said there are also big benefits to looking outside one's own field.
Pandya was part of a panel of AI engineers and data scientists that I moderated at the Re•Work Applied AI Summit in San Francisco last month. The group, which also included IT leaders from Uber and supply chain vendor Fellow Robots, shared their experiences applying cutting-edge AI to solve business problems.
Here is a recap of the criteria for success in AI that came up in our discussion.
Data aggregation at Shell's edge
Pandya's team has a big job at Shell: It is tasked with figuring out how to use the data coming from hundreds of thousands of sensors to optimize how Shell operates its plants, produces hydrocarbons, generates electricity and maintains its refineries.
Predictive maintenance, in particular, is a challenging problem, Pandya said, as a typical refinery might have 40,000 valves. His team is working on creating a framework to build, run, maintain, retrain and evaluate -- and, if necessary, retire -- the predictive maintenance models that make sense of sensor data collected on the flow of chemicals through these valves. Shell selected C3.ai's platform to reduce the burden on its data scientists and engineers as they deploy these models at scale.
One challenge lies in aggregating and processing data closer to where the action is happening in order to spare the load on the cloud, reduce latency and cut data consumption. The energy industry has been grappling with this problem since long before the cloud. Pandya said the first project he worked on involved processing seismic images; a 100-square-kilometer block could require 10 TB to 15 TB of data. Pandya estimated that the data Shell has collected from all its wells is probably 10 to 15 times larger than all the YouTube video data combined.
Although Shell is used to processing this kind of data, Pandya's team is working on finding ways of doing more processing at the edge. He estimated that about 30% to 40% of the workloads in the data center could be pushed to the edge. The aim is to parse the data to only the most relevant information for a particular project.
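The approach of parsing data down to only the most relevant information before it leaves the edge can be sketched in a few lines. This is an illustrative example, not Shell's actual pipeline: the flow values, the acceptable range and the digest format are all invented for the sketch. An edge node summarizes a window of raw readings and forwards only a compact digest plus the out-of-range samples, rather than streaming everything to the data center.

```python
# Illustrative sketch of edge-side data reduction (not Shell's pipeline):
# summarize a window of raw sensor readings and forward only a compact
# digest plus any out-of-range samples, instead of streaming every value.
from statistics import mean

NORMAL_RANGE = (9.0, 11.0)  # assumed acceptable flow band for one valve


def digest_window(readings):
    """Reduce a window of raw readings to what the cloud actually needs."""
    outliers = [v for v in readings
                if not NORMAL_RANGE[0] <= v <= NORMAL_RANGE[1]]
    return {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "outliers": outliers,  # forward raw values only for abnormal samples
    }


window = [10.1, 10.0, 9.9, 12.7, 10.2, 10.0]
print(digest_window(window))
# → {'count': 6, 'mean': 10.48, 'outliers': [12.7]}
```

Six readings collapse into a three-field digest, which is the kind of reduction that could move a meaningful share of workloads out of the data center.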
In pursuing this new edge strategy for Shell, Pandya said he stumbled across insights from other industries that helped simplify this development process. For example, he found a paper on fraud detection that described a novel way of detecting anomalies. As it turned out, this was applicable to the advanced analysis his team was doing on equipment for predictive maintenance.
The discovery convinced him it's important to look at other industries that might be trying to solve similar problems.
"In our case, it happened to be anomaly detection. There are various industries that have already done a lot of work in the anomaly detection -- and banking and fraud may be more mature in the way they have been doing it," Pandya said.
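A minimal illustration of the kind of anomaly detection Pandya describes, assuming a simple z-score test over a stream of valve-flow readings. The data and the threshold are invented for the example; the fraud-detection systems he credits as more mature use far more sophisticated models.

```python
# Hypothetical sketch: flag anomalous sensor readings with a z-score
# test, one of the most basic anomaly detection techniques. The readings
# and threshold are illustrative, not Shell's actual method or data.
from statistics import mean, stdev


def find_anomalies(readings, threshold=2.5):
    """Return (index, value) pairs whose z-score exceeds the threshold."""
    mu = mean(readings)
    sigma = stdev(readings)
    if sigma == 0:
        return []  # perfectly flat signal: nothing to flag
    return [(i, v) for i, v in enumerate(readings)
            if abs(v - mu) / sigma > threshold]


# A mostly steady valve-flow signal with one spike:
flows = [10.1, 10.0, 9.9, 10.2, 10.0, 55.0, 10.1, 9.8, 10.0, 10.1]
print(find_anomalies(flows))
# → [(5, 55.0)]
```

The same test applies whether the numbers are valve flows or card transactions, which is why techniques transfer so readily between the two industries.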
Nvidia's new Jetson processor key to edge AI success
Fellow Robots has been developing a variety of in-store robots and data collection devices to improve inventory checking, provide customer assistance and map out store layouts. Although Fellow does not have as many sensors as Shell, it also needs to find the most efficient ways to collect and analyze the data its devices generate remotely. This includes data used to determine the optimum speed at which the robots should move, which affects image gathering and safety; data on store lighting, which affects image recognition; and data on the kinds of people the robots are likely to interact with, which affects user experience design.
"We spent a lot of time planning how to optimize the data acquisition systems so that the algorithms can generalize for different problems using the type of data we are reading," said Jagadish Mahendran, AI engineer at Fellow Robots.
When Fellow first started deploying the robots in stores, Mahendran said his team was sending so much data to the cloud that the processing time slowed. At about that time, Mahendran heard about other companies using Nvidia's new Jetson processor for moving AI processing to edge devices on low-end GPUs.
The team bought a few developer boards, and Mahendran started exploring the limits of the algorithms they could run on them. After a couple of months, he had developed a new algorithm that cut local processing time by a factor of about 25.
"That's exactly what we were looking for at the time, so it was just beautiful," he said.
Mahendran said one of the most important criteria for success in AI projects like these is having the data scientists work with the experts who really understand how the robots will be used in the business setting -- in particular, the impact robots could have on customers. The collaboration makes it easier to decide what kind of data to collect and the types of algorithms for processing the data that can be pushed to the edge.
Human vetting a requirement for AI success
Shell's Pandya seconded the best practice of data scientists working with subject-matter experts to figure out what data is needed. Experts at Shell include geophysicists, as well as maintenance engineers who help data scientists figure out which signals to prioritize from the sensors at a plant.
Pandya said his team of data scientists, for example, might identify the top 200 machine learning model features created from various combinations of sensor input data. But then laboratory and process engineers sometimes decide the algorithms should focus on another list of things.
"You try and combine both of those approaches together," Pandya said.
And while predictive maintenance algorithms can save money by optimizing repair schedules, their recommendations need to be vetted by humans, Pandya stressed.
"What's really difficult is that the cost of false positives is humongous," he said.
If a credit card company mistakenly flags a legitimate transaction as fraudulent (a false positive), it creates an inconvenience for the customer. If Shell shuts down a refinery for a repair that is not required, it can cost millions of dollars in lost productivity.
"But the other alternative of not repairing machinery in a timely manner could be even worse," he added, making it essential that all parties are communicating on when and how to apply AI.
Communicating across teams
Indeed, in applying AI, the people side can be trickier to navigate than the technology. Pandya said he has generally found the technology mature enough for starting practical work, but bringing people together requires a lot of emotional intelligence, too.
"How do you bring a refinery equipment engineer, who has been doing this for 30-plus years, around to this idea of adopting this technology?" Pandya asked.
Some of his team's applied AI efforts, for example, are seeing pushback from frontline engineers. At the same time, the company understands that developing better algorithms can fundamentally change the way it operates.
"But we have to be careful about how we go on implementing that," he said.
On the AI side, Pandya said he also has to figure out how to bring in AI talent that not only has machine learning skills, but also some subject-matter experience, as well, so his team can communicate with the subject-matter experts.
Shell's teams typically include a product owner who works with the machine learning engineers. The subject-matter experts who come from the product teams include process engineers, laboratory engineers, drilling equipment engineers and economists, any of whom might be involved in a project to craft better AI for predictive maintenance or optimal operations.
This kind of measured -- and inclusive -- approach has paid off in building enthusiasm across business units throughout the company, with the result that his team now gets more unsolicited requests for new projects than pushback.
Uber's new networkers key to AI success
At Uber, AI researchers actually play a role in improving communication across the teams and processes that span the company, said Huaixiu Zheng, senior data scientist for applied machine learning at Uber.
"Successful AI people are typically able to communicate with different parties," he said.
This is a big transition from traditional software engineering, where designers or project managers would create a concept and software engineers would execute their orders. Now, another player brings the AI algorithms into the software engineering system and works with all parties as a kind of bridge.
"This is a huge challenge and opportunity for all of us working in AI," Zheng said.
For example, Uber had an unwieldy process for resolving customer complaints. Customers would communicate when they were not happy with the driver, were charged more than expected or lost an item.
"It was all humans making the decisions; it was very tedious [for the customer] and not that efficient for business," Zheng said.
So, the AI development team worked with the customer resolution experts to figure out what types of requests could be solved automatically. This required finding ways to automatically discern what customers were specifically complaining about and what were the key factors for resolving the problem at hand.
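The triage step Zheng describes can be sketched with a toy rule-based classifier. A production system at Uber would presumably use a trained text model learned from historical tickets; the categories and keywords below are hypothetical stand-ins invented for the sketch.

```python
# Hypothetical sketch: route a customer complaint to a category that can
# be handled automatically, falling back to human review on no match.
# Categories and keywords are invented, not Uber's actual taxonomy.
RULES = {
    "fare_dispute": ["charged", "overcharged", "fare", "price"],
    "lost_item": ["lost", "left", "forgot"],
    "driver_issue": ["driver", "rude", "unsafe"],
}


def triage(complaint):
    """Return the first matching category, or None for human review."""
    text = complaint.lower()
    for category, keywords in RULES.items():
        if any(k in text for k in keywords):
            return category
    return None


print(triage("I was charged more than the quoted fare"))  # → fare_dispute
print(triage("I left my phone in the car"))               # → lost_item
```

Anything that falls through the rules returns None and stays with a human agent, which keeps automation from overreaching on ambiguous complaints.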
AI developers then had to work with the software development engineers to automate the results of these new customer resolution algorithms.
"Having these conversations early in the project is super powerful," Zheng said.