
Driverless cars not a 'solved problem,' says MIT professor

The autonomous-car guys at Google have got to love the 2015 cult comedy sequel Hot Tub Time Machine 2. Not because they thought it was a good film — I think they’re probably too smart for that — but because in the sequel, the future is bright for driverless cars. In director Steve Pink’s 2025 — the year most of the cast of Hot Tub Time Machine gets transported to — cars take directions from people and cart them about, expertly take highway curves and stop on a dime. (They also try to run down humans who are rude to them, which may scare people who yell at their laptops.)

As for the real 2025, a lot can happen in nine years — and a lot will have to happen before cars can motor their way out of human control, according to MIT’s John Leonard.

The mechanical engineering professor and artificial intelligence researcher told an audience at the recent Brain + Machines symposium at Harvard University that Tesla Motors CEO Elon Musk’s declaration of self-driving as “a solved problem” was more than a little optimistic.

“Just very respectfully, I disagree,” Leonard said. “And I think that driving exposes fundamental issues in intelligence, fundamental issues in how the brain works. And we might be a very long way away.”

Though he called the Google car “an amazing project that might one day transform mobility,” Leonard said the technology today is overhyped and misunderstood.

Bumpy road ahead

Leonard studies simultaneous localization and mapping, or SLAM, a technique by which a vehicle builds a map of an unfamiliar environment while simultaneously tracking its own position within that map. He led the team behind MIT’s autonomous car in the 2007 DARPA Urban Challenge, a competition for driverless cars held at a former Air Force base in California.
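
Leonard’s research systems are far more sophisticated, but a toy example gives the flavor of the problem. Below is a minimal, hypothetical one-dimensional sketch (assuming NumPy; the noise values and the single stationary landmark are invented for illustration) in which one Kalman filter estimates the robot’s position and a landmark’s position at once: localization and mapping, simultaneously.

```python
import numpy as np

# Toy 1-D "SLAM": a robot with noisy odometry ranges a landmark whose
# position is unknown. A Kalman filter over the joint state estimates
# the robot's location and the map (here, one landmark) at the same time.
x = np.array([0.0, 0.0])      # state: [robot position, landmark position]
P = np.diag([0.0, 100.0])     # the start is known; the landmark is not
Q = np.diag([0.1**2, 0.0])    # odometry noise (the landmark never moves)
H = np.array([[-1.0, 1.0]])   # measured range = landmark - robot
R = np.array([[0.05**2]])     # range-sensor noise

true_robot, true_landmark = 0.0, 10.0
rng = np.random.default_rng(0)

for _ in range(50):
    # Predict: command 1 m forward; the real robot slips a little.
    true_robot += 1.0 + rng.normal(0.0, 0.1)
    x[0] += 1.0
    P += Q

    # Update: fuse a noisy range measurement to the landmark.
    z = true_landmark - true_robot + rng.normal(0.0, 0.05)
    y = z - H @ x                                   # innovation
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

print(f"robot ~ {x[0]:.2f} (true {true_robot:.2f}); "
      f"landmark ~ {x[1]:.2f} (true {true_landmark:.2f})")
```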

A video of a collision at the event between cars from MIT and Cornell University illustrates his point. MIT’s car tried to pass Cornell’s and hit it instead. The problem, Leonard said, was one of spatial reasoning: the computers running the cars were missing the “semantic understanding of the world” that people have.

Leonard discussed other unsolved challenges facing autonomous driving. One involves the human interaction that often enables car travel. He showed a video taken by a camera fixed to his car’s dashboard as he drove in his hometown of Newton, Mass., a Boston suburb. At a busy intersection with no traffic light, he could make a left turn only by waving at an oncoming driver to signal his intention.

In another video, a police officer is standing in the street waving people through a red light at an intersection.

“So if anyone here is a programmer, how do you write the code that says, ‘Always stop at red lights, unless there’s a man on the side of the road waving at you’?” Leonard said.
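
The trouble is not writing the rule; it is everything around it. A hypothetical sketch (the officer_waving_through flag is invented here) makes the point:

```python
# A naive, rule-based policy of the kind Leonard is poking at. The
# if-statement is trivial; the hard, unsolved part is the perception that
# would have to set officer_waving_through reliably -- telling a police
# officer directing traffic apart from, say, a pedestrian hailing a cab.
def should_stop(light_is_red: bool, officer_waving_through: bool) -> bool:
    if officer_waving_through:   # the exception overrides the rule...
        return False
    return light_is_red          # ...otherwise obey the light

# ...and every new exception (a crossing guard? a detour flagger?)
# means another hand-written clause.
```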

Unexpected changes to things like road surfaces can also throw off automated cars. Google cars, for example, use precise maps that tell them where they are at any given point on a journey. But if Mother Nature drops a foot of snow, or if a road gets repaved, a driverless car may easily get confused, Leonard said.
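
To see why, consider a hypothetical sketch, not Google’s actual pipeline: a localizer scores how well the features stored in the map line up with what the sensors currently see. When weather or construction erases those features, every candidate pose scores poorly and the car no longer knows where it is. The feature names below are invented for the example.

```python
# Prior-map localization in miniature: score a pose by the fraction of
# expected map features the sensors actually observe. Snow or fresh
# pavement hides those features, and the score (confidence) collapses.
def match_score(observed: set[str], expected: set[str]) -> float:
    """Fraction of expected map features the sensors actually found."""
    return len(observed & expected) / len(expected) if expected else 0.0

stored_map = {"lane_line_L", "lane_line_R", "curb", "stop_bar"}

print(match_score({"lane_line_L", "lane_line_R", "curb", "stop_bar"},
                  stored_map))                  # 1.0: clear day, high confidence
print(match_score({"curb"}, stored_map))        # 0.25: a foot of snow hides the paint
```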

Then there’s what’s called the “handoff problem.” When a car can’t figure out what to do on the road, how does it get a human to suddenly pay attention and take the wheel?

“Humans are actually pretty bad at that,” Leonard said.

Until cars can be 100% autonomous — the goal Google is pursuing — Leonard advocates what he calls a “guardian angel system.” In it, a human driver has to pay attention for the entire trip; the automation kicks in only when the driver makes a mistake or an accident looks imminent.
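
As a rough illustration of the idea, here is a hypothetical sketch; the time-to-collision trigger, threshold and braking value are all invented for the example. The human’s command passes straight through unless a crash looks imminent.

```python
from dataclasses import dataclass

@dataclass
class WorldState:
    gap_m: float              # distance to the car ahead
    closing_speed_mps: float  # positive when we're gaining on it

TTC_THRESHOLD_S = 2.0  # hypothetical intervention threshold

def guardian_angel(human_accel: float, state: WorldState) -> float:
    """Pass the human's command through unless a crash looks imminent."""
    if state.closing_speed_mps > 0:
        ttc = state.gap_m / state.closing_speed_mps  # time to collision
        if ttc < TTC_THRESHOLD_S:
            return -8.0  # hard braking overrides the human (m/s^2)
    return human_accel   # otherwise the human stays in control

# The human keeps accelerating toward a slower car; the system steps in.
print(guardian_angel(1.0, WorldState(gap_m=30.0, closing_speed_mps=20.0)))   # -8.0
print(guardian_angel(1.0, WorldState(gap_m=100.0, closing_speed_mps=5.0)))   # 1.0
```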

Thinking about thinking

Leonard is looking at neuroscience to help solve the problems driverless cars face — specifically at how the brain of a person or an animal gathers information and creates an inner vision, or “representation,” of where it is in physical space. He is experimenting with visualized maps that can help a robot negotiate its way through a room or the MIT campus — without crashing into a chair or colliding with a moving object, like a car.

“My dream is to achieve persistent autonomy of lifelong map learning. Say, a robot car that, as it drove around Boston, it would get better and better,” Leonard said. “It would learn about the world.”

Join the conversation

2 comments


It seems to me that Leonard is focusing on perfection, and thereby missing a key point: The driverless systems are FAR better than human drivers at the other 99.99% of driving, and so from the perspective of saving lives, Elon Musk was right - it IS a solved problem. The key of course is to make sure that the driverless car identifies when a situation falls outside its scope and holds until it can get instructions, either from the car's passenger or from an internet-based "human driver on demand" who could remotely resolve the issue!

Having been hit by bad drivers numerous times in the past, I'm all for the new technology. No more idiots running red lights, drifting out of their lane, driving on the wrong side of the road or going too fast for road conditions. We just need to make sure there are enough safeguards to handle the rare exception to the rules set in place. Things like when a stretch of road has an unexpected speed reduction: Where would this data come from? Who maintains the infrastructure? What are the costs?
