The autonomous-car guys at Google have got to love the 2015 cult comedy sequel Hot Tub Time Machine 2. Not because they thought it was a good film — I think they’re probably too smart for that — but because in the sequel, the future is a bright one for driverless cars. In director Steve Pink’s 2025 — the year most of the cast from Hot Tub Time Machine get transported to — cars take directions from people and cart them about, expertly handle highway curves, and stop on a dime. (They also try to run down humans who are rude to them, which may scare people who yell at their laptops.)
As for the real 2025, a lot can happen in nine years — and a lot will have to happen before cars can motor their way out of human control, according to MIT’s John Leonard.
The mechanical engineering professor and artificial intelligence researcher told an audience at the recent Brain + Machines symposium at Harvard University that Tesla Motors founder Elon Musk’s declaration of self-driving cars as “a solved problem” was more than a little optimistic.
“Just very respectfully, I disagree,” Leonard said. “And I think that driving exposes fundamental issues in intelligence, fundamental issues in how the brain works. And we might be a very long way away.”
Though he thinks the Google car is “an amazing project that might one day transform mobility,” he said the technology today is overhyped and misunderstood.
Bumpy road ahead
Leonard studies simultaneous localization and mapping, or SLAM, a technique by which a vehicle builds a map of its surroundings while tracking its own position within that map, and uses both to direct itself. He led the team behind MIT’s autonomous car in the 2007 DARPA Urban Challenge, a competition for driverless cars held at a former Air Force base in California.
A video taken at the event of a collision between cars from MIT and Cornell University illustrates his point. MIT’s car tried to pass Cornell’s and hit it instead. The problem, Leonard said, was one of spatial reasoning. The computers and algorithms running the cars were missing the “semantic understanding of the world” that people have.
Leonard discussed other unsolved challenges facing autonomous driving. One involves the human interaction that often enables car travel. He showed a video taken by a camera fixed to his car’s dashboard as he drove in his hometown of Newton, Mass., a Boston suburb. At a busy intersection with no traffic light, making a left turn would have been nearly impossible without waving at an oncoming car to signal his intention.
In another video, a police officer is standing in the street waving people through a red light at an intersection.
“So if anyone here is a programmer, how do you write the code that says, ‘Always stop at red lights, unless there’s a man on the side of the road waving at you’?” Leonard said.
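The rub, as Leonard’s question implies, is not the rule itself. A toy sketch (my illustration, not anything from the talk) shows that the exception is a one-line branch; the unsolved part is reliably producing the “officer is waving you through” signal from camera data, which is exactly the semantic understanding the cars lack:

```python
# Toy illustration of Leonard's red-light question. The names and
# inputs here are hypothetical; real systems have no trustworthy
# "officer_waving_you_through" flag -- computing it from raw sensor
# data is the hard, open problem.

def should_stop(light_is_red: bool, officer_waving_you_through: bool) -> bool:
    """Decide whether the car should stop at the intersection."""
    if officer_waving_you_through:
        # A human directing traffic overrides the signal.
        return False
    return light_is_red

# Once the inputs exist, the rule is trivial:
print(should_stop(light_is_red=True, officer_waving_you_through=False))  # True
print(should_stop(light_is_red=True, officer_waving_you_through=True))   # False
```

The branch took seconds to write; everything interesting is hidden inside the second argument.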
Unexpected changes to things like road surfaces can also throw off automated cars. Google cars, for example, use precise maps that tell them where they are at any given point on a journey. But if Mother Nature drops a foot of snow, or if a road gets repaved, a driverless car may easily get confused, Leonard said.
Then there’s what’s called the “handoff problem.” When a car can’t figure out what to do on the road, how does it get a human to suddenly pay attention and take the wheel?
“Humans are actually pretty bad at that,” Leonard said.
Until cars can be 100% autonomous — the goal Google is pursuing — Leonard advocates what he calls a “guardian angel system.” In it, the human driver has to pay attention for the entire trip; auto-driving kicks in only when the driver makes a mistake or an accident looks imminent.
Thinking about thinking
Leonard is looking at neuroscience to help solve the problems driverless cars face — specifically at how the brain of a person or an animal gathers information and creates an inner vision, or “representation,” of where it is in physical space. He is experimenting with visualized maps that can help a robot negotiate its way through a room or the MIT campus — without crashing into a chair or colliding with a moving object, like a car.
“My dream is to achieve persistent autonomy of lifelong map learning. Say, a robot car that, as it drove around Boston, it would get better and better,” Leonard said. “It would learn about the world.”