
Deep neural networks, speed limit signs and training on the fly

Because of data and compute power requirements, training a deep learning algorithm doesn’t typically happen on so-called edge devices such as smartphones or drones. But Neurala Inc., a deep learning startup based in Boston, is looking to change that with its lifelong deep neural networks, or LDNNs.

The fledgling firm claims its deep neural networks get around the traditional data and compute constraints by adding to the data set at the edge, thus enabling mobile devices like drones to learn literally on the fly. Its initial project was funded by NASA and focused on space exploration, but when I mentioned Neurala to Forrester Research analyst Mike Gualtieri, he put his finger on a more terrestrial use case for its brand of deep neural networks.
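The article doesn’t detail how Neurala’s LDNNs work internally, but a common way to let a deep network keep learning on a constrained device is to freeze a pretrained feature extractor and attach a lightweight head that absorbs new labeled examples one at a time, with no gradient-based retraining on the device. The Python sketch below is a minimal illustration under that assumption; IncrementalHead, extract_features and the running-mean scheme are hypothetical stand-ins, not Neurala’s actual design.

```python
# Minimal sketch of edge-side incremental learning (an assumption, not
# Neurala's documented method): a frozen feature extractor feeds a
# nearest-class-mean head that absorbs new labeled examples in O(1)
# per example, so no full retraining happens on the device.
import numpy as np

class IncrementalHead:
    """Keeps a running mean embedding per class; updatable one example at a time."""
    def __init__(self):
        self.means = {}   # label -> mean embedding
        self.counts = {}  # label -> number of examples seen

    def learn(self, embedding: np.ndarray, label: str) -> None:
        n = self.counts.get(label, 0)
        mean = self.means.get(label, np.zeros_like(embedding))
        self.means[label] = (mean * n + embedding) / (n + 1)
        self.counts[label] = n + 1

    def predict(self, embedding: np.ndarray) -> str:
        # Classify by the nearest class mean (Euclidean distance).
        return min(self.means, key=lambda c: np.linalg.norm(embedding - self.means[c]))

def extract_features(image: np.ndarray) -> np.ndarray:
    """Stand-in for a frozen, pretrained feature extractor on the device."""
    return image.flatten().astype(np.float32)

# A drone in the field learns a new object from a single labeled frame.
rng = np.random.default_rng(0)
frame = rng.random((8, 8))
head = IncrementalHead()
head.learn(extract_features(frame), "landing_pad")
print(head.predict(extract_features(frame)))  # -> landing_pad
```

The appeal of a scheme like this on edge hardware is that learning costs a couple of vector operations per example rather than a backpropagation pass over the whole network.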

“Imagine a self-driving car,” he said. “And imagine that one of its models is used to recognize speed limit signs.” And in fact, you don’t have to imagine too hard because a German auto manufacturer did just that, Gualtieri said.

The manufacturer trained a model to interpret speed limit signs by feeding it lots of examples, then tested its self-driving car on the Autobahn, where it encountered a curious problem: The car was slowing down automatically for reasons that weren’t obvious.

“They finally figured it out,” Gualtieri said. “It was because sometimes the cameras on the self-driving car saw what it thought was a speed limit sign but was actually a decal on the back of a truck that said, ‘I don’t go more than 60 kilometers per hour.'”

The model’s training wasn’t accurate enough for it to differentiate between a decal and a speed limit sign. In cases like this, the company testing the car brings the model back to the lab to be retrained to distinguish between speed limit signs and decals.
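The article doesn’t say what that lab retraining looked like, but the usual fix is to fold the misread examples back into the training set as their own class and refit. The toy Python sketch below illustrates the idea with synthetic features and scikit-learn; the data, class names and model choice are all invented for illustration.

```python
# Hypothetical sketch of the lab-side fix: treat the misread truck decals
# as their own class, add them to the training data and retrain, so the
# model learns "decal" as distinct from "speed limit sign".
# Features here are synthetic; a real pipeline would use image embeddings.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
signs = rng.normal(loc=0.0, scale=1.0, size=(200, 16))   # real-sign features
decals = rng.normal(loc=2.0, scale=1.0, size=(50, 16))   # decal features

X = np.vstack([signs, decals])
y = np.array(["speed_limit_sign"] * len(signs) + ["truck_decal"] * len(decals))

model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.predict(decals[:3]))  # now comes back as "truck_decal"
```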

“So the question I would have for this company is, alright, given that scenario, how would you train that model?” Gualtieri said. “You’d have to have some sort of user feedback or some other feedback for it to learn.”

So I asked Neurala. “It actually does require a human to be in the loop,” said Heather Ames, COO and co-founder of the startup.

Neurala’s lifelong deep neural networks don’t self-correct autonomously. In other words, the learning that takes place on the edge is supervised learning. A human helps decide what the lifelong deep neural networks should learn and helps to correct a system that isn’t performing accurately.

“The human operator would have some sort of user interface to modify learning,” Ames said. “So to correct it, either through reinforcement: ‘Yes, we want to slow down when we see this sign.’ Or: ‘No, we don’t want to slow down when we see this sign. When we see the decal sign, we want to just follow at a safe distance rather than slow down.'”
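To make that reinforcement-style correction concrete, here is a small hypothetical Python sketch of such a feedback loop: the operator confirms or relabels a detection, the human-chosen label is queued for the supervised on-device learning described above, and the label maps to a driving behavior. The interface, labels and action table are invented; the article doesn’t describe Neurala’s actual UI.

```python
# Hypothetical human-in-the-loop correction flow: the operator either
# confirms the model's call or supplies the right label, and the chosen
# label both drives the car's behavior and feeds on-device learning.
from dataclasses import dataclass
from typing import Optional

# Driving behavior each label maps to; invented for illustration.
ACTIONS = {
    "speed_limit_sign": "slow_to_posted_limit",
    "truck_decal": "follow_at_safe_distance",
}

@dataclass
class Feedback:
    frame_id: str                     # which camera frame the operator reviewed
    predicted: str                    # what the model said it saw
    confirmed: bool                   # operator: "yes, correct" / "no, wrong"
    corrected: Optional[str] = None   # operator-supplied label when wrong

def apply_feedback(fb: Feedback, training_queue: list) -> str:
    """Record the human-chosen label for on-device learning; return the action."""
    label = fb.predicted if fb.confirmed else fb.corrected
    training_queue.append((fb.frame_id, label))  # supervised: the human picks labels
    return ACTIONS.get(label, "maintain_course")

queue: list = []
# The operator rejects the "sign" call and relabels the detection as a decal.
fb = Feedback("frame_0042", "speed_limit_sign", confirmed=False, corrected="truck_decal")
print(apply_feedback(fb, queue))  # -> follow_at_safe_distance
print(queue)                      # -> [('frame_0042', 'truck_decal')]
```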

In the case of a driverless car with a driver behind the wheel, Ames said Neurala would never advise a driver to go hands-free to train the software and create unsafe driving conditions. Indeed, an application that solves this problem would have to involve a lot of user experience design work to figure out the best way for an operator in a car to supervise a deep neural network safely.

But the point is: “We’re a long way from full autonomy, particularly with the systems we build,” she said.
