Much of the current discussion on the evolution of artificial intelligence — an umbrella term covering everything from speech recognition software to robotics — centers on speed. Machines can consume and analyze more data at a faster rate than ever before, but does that make a machine intelligent?
It’s a hard question to answer (even the experts don’t quite agree), and that’s what made Dileep George’s EmTech presentation so interesting. George, who works for the AI research project Vicarious, titled his talk “Artificial Intelligence at Work” and opened with an observation: Hardware, motors and plastic are cheap, “but we still don’t have a Rosie the Robot.”
Why isn’t a Rosie the Robot, the beloved family housekeeper in the animated series The Jetsons, here to tidy up your dirty dishes and sweep your floors? Because of a software problem — what George (that’s Dileep George, not George Jetson) called “the common sense problem.”
He illustrated what he meant in two sentences: John pounded a nail on the wall; Sally pounded a nail on the floor. If George were to ask in which case the nail was horizontal, EmTech’s attendees could easily answer, he said. We instantly imagine the scenario — John, the wall, maybe even a hammer.
“You have a pretty rich model of the world in your head, and I was poking that knowledge using a few sentences. And you ran a rich simulation in your head to answer the question,” he said.
For machines, not so much. They can’t imagine the situation because they lack what George called “a model of the world.” “To build systems that have common sense, you have to build systems that build models of the world,” he said. “In fact, I would say building a model of the world and being able to act on it is the crux of intelligence.”
But here’s where it gets confusing: There are systems that can successfully function in the world without having a model of it. “Our world is filled with creatures that can do this,” he said. Dinosaurs, fish, amphibians and reptiles are all examples. On the other hand, the Roomba, a robot vacuum, can get stuck cleaning one corner of the room because it doesn’t have a model of its surroundings. That’s true even of DeepMind Technologies’ artificial intelligence that has learned to play Atari. Change the brightness of the screen, and DeepMind’s AI gets tripped up.
These systems are all using what George called “old brain.” For AI, that translates into using a lot of training data and labels to create a desired response. “Once you train the system, and these are humans training the system, they do produce the response given the test data,” he said. But it’s still no rich model.
So, how can researchers help AI evolve from old brain to new brain, an intelligence that can imagine and reason about the world? George said there are two steps to take:
- Take neuroscience and cognitive science seriously. “We need to understand what is happening in the new brain rather than building very specific circuits for very specific applications,” he said.
- Focus on data efficiency and task generality. Rather than throw a lot of training data at machines and test the machine on limited, narrow data sets, George argues that needs to be flipped. “You have to keep the training data small, much like a child would experience, and we have to test it on a huge data set,” he said. “We have to test for robustness.”
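The flipped evaluation protocol George describes can be sketched in code. The toy model below is purely illustrative — a nearest-prototype classifier on synthetic data, not anything from Vicarious’ actual system — but it shows the shape of the test: train on a handful of examples per class, then evaluate on a much larger set that includes a perturbation (here, a “brightness” shift, echoing the Atari example) never seen in training.

```python
import random

random.seed(0)

# Hypothetical toy "characters": each class is a prototype vector, and
# observed samples are noisy copies of it. This stands in for real image
# data only to illustrate the small-train / large-test protocol.
PROTOTYPES = {label: [random.gauss(0, 1) for _ in range(16)]
              for label in "ABC"}

def sample(label, noise=0.3, brightness=0.0):
    """A noisy observation of `label`, optionally shifted in 'brightness'."""
    return [x + random.gauss(0, noise) + brightness
            for x in PROTOTYPES[label]]

def nearest_prototype(example, train):
    """Classify by 1-nearest-neighbor against the few training examples."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda item: dist(item[1], example))[0]

# Small training set: a few examples per class, as a child might see.
train = [(label, sample(label)) for label in PROTOTYPES for _ in range(3)]

# Large test set with a brightness shift absent from training --
# the robustness test George calls for.
test = [(label, sample(label, brightness=0.5))
        for label in PROTOTYPES for _ in range(200)]

correct = sum(nearest_prototype(x, train) == label for label, x in test)
print(f"accuracy on {len(test)} perturbed examples: {correct / len(test):.2f}")
```

A system with a good model of its classes should hold up on the large, perturbed test set despite the tiny training set; a system that merely memorized its training data would not.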
“If you execute on those principles, you can [do things like] break captcha,” he said, referring to the test made up of numbers and letters that’s used to ensure the user of a site is human. In fact, George and his colleagues have done just that — with a small amount of training data and a large test set. “It’s an amazing feat of generalization by humans,” he said. “You did not get training data on any of these captchas, and you can solve these things. A child would be able to solve many of these without training examples.”
The system George and his colleagues are building was trained on 260 single-character images — not even full strings from captchas. A deep neural network requires half a million actual captcha sequences to achieve comparable performance.