The artificial intelligence system from London-based DeepMind, which Google acquired last year for a reported $400 million, represents a major step toward a future of smart machines.
Computers running the deep Q-network (DQN) algorithm were set loose on 49 retro games on the Atari 2600 with no game-specific guidance from researchers. Using the same network architecture and tuning parameters across every title, the machines were given only raw screen pixels, the set of available actions, and the game score as input.
Every increase in the game score served as the system's reward signal, the digital equivalent of a treat for good behavior.
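The learning loop described above can be sketched in miniature. The toy below uses tabular Q-learning on a five-state chain rather than a deep network playing Atari, and all names (`train_q_table`, the chain environment, the hyperparameter values) are illustrative choices, not DeepMind's code; but the core idea is the same one DQN scales up: an agent picks actions, observes only a score change as reward, and nudges its value estimates toward reward plus discounted future value.

```python
import random

def train_q_table(n_states=5, episodes=500, alpha=0.5, gamma=0.9,
                  epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy chain: the agent moves left or right,
    and the only reward is a score bump for reaching the final state."""
    rng = random.Random(seed)
    # Q[state][action]; actions: 0 = left, 1 = right
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy exploration: mostly exploit, occasionally try
            # a random action (DQN uses the same exploration strategy)
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0  # score change is the only feedback
            # Q-learning update: move estimate toward reward + discounted
            # best future value (DQN replaces this table with a neural net)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train_q_table()
# With training, moving right (toward the reward) is valued more highly
# than moving left in every non-terminal state
assert all(q[s][1] > q[s][0] for s in range(4))
```

DQN's contribution was making this update stable when the table is replaced by a deep convolutional network reading raw pixels, which is what let one architecture handle all 49 games.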
"Strikingly, DQN was able to work straight 'out of the box' across all these games," DeepMind's Dharshan Kumaran and Demis Hassabis wrote in a blog post. The executives cited classic titles like Breakout, River Raid, Boxing, and Enduro.
The AI outperformed even expert human players at 29 of the games, sometimes devising what its creators called "surprisingly far-sighted strategies" to maximize its score. It also beat previous machine-learning methods in 43 of the 49 games.
"This work offers the first demonstration of a general purpose learning agent that can be trained end-to-end to handle a wide variety of challenging tasks," the researchers said. "This kind of technology should help us build more useful products."
Imagine asking the Google app to complete a complex task, such as planning a backpacking trip through Europe.
Google's DeepMind also hopes its technology will give researchers new ways to make sense of large-scale data, opening the door to discoveries in fields like climate science, physics, medicine, and genomics.
"And it may even help scientists better understand the process by which humans learn," Kumaran and Hassabis said, citing physicist Richard Feynman, who famously said, "What I cannot create, I do not understand."