Google’s DeepMind Artificial Intelligence Aces Atari Gaming Challenge
DeepMind has published a paper detailing how its AI tech not only learnt how to play a host of Atari games, but went on to succeed in a number of them.
Google’s DeepMind artificial intelligence unit has shown that, given little more than a few pixels to play with, its algorithm can not only learn how to play computer games from scratch – but go on to ace them after a few hours of practice.
DeepMind released a paper in scientific journal Nature this week detailing its deep Q-network (DQN) algorithm’s ability to play 49 computer games originally designed for the Atari 2600 – including a Pong-like game called Breakout, River Raid, Boxing, and Enduro – and do as well as a human player on half of them.
The Nature paper builds on previous work from DeepMind detailing how the algorithm performed on seven Atari 2600 games. While the system fared well compared to a human player, it lagged flesh-and-blood gamers when taking on the classic Space Invaders because the algorithm had to work out a longer-term strategy to succeed.
What is hidden behind the Deep Mind? Illustration by Elena
A video of DeepMind founder Demis Hassabis demonstrating DQN playing Breakout was posted on YouTube in April last year. At first, the algorithm struggles to return the ball but, after a few hundred plays, it eventually learns the best strategy to beat the game: break a tunnel into the side of the brick wall and then aim the ball behind the wall.
The system now excels at a number of games including Video Pinball, Boxing, Breakout, and Star Gunner, while its performance lags humans on Ms Pac-Man, Asteroids, and Seaquest.
“Strikingly, DQN was able to work straight ‘out of the box’ across all these games – using the same network architecture and tuning parameters throughout and provided only with the raw screen pixels, set of available actions, and game score as input,” Hassabis and co-author of the paper Dharshan Kumaran said in a blog post on Wednesday.
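To make that input interface concrete, here is a minimal sketch (not DeepMind’s published code) of how an agent could consume exactly those three things: raw screen pixels, the set of legal actions, and the game score. The preprocessing choices below (grayscale conversion, downscaling, four-frame stacking, reward clipping) are illustrative assumptions, not the paper’s exact pipeline.

```python
import numpy as np

# Illustrative sketch only: the agent sees nothing but pixels, legal actions,
# and score changes. All constants here are assumptions for illustration.

FRAME_HEIGHT, FRAME_WIDTH = 84, 84   # assumed downscaled resolution
STACK_SIZE = 4                       # assumed number of stacked frames


def preprocess(rgb_frame: np.ndarray) -> np.ndarray:
    """Convert a raw RGB Atari frame to a small grayscale image in [0, 1]."""
    gray = rgb_frame.mean(axis=2)  # crude grayscale
    # Naive nearest-neighbour downscale to FRAME_HEIGHT x FRAME_WIDTH.
    rows = np.linspace(0, gray.shape[0] - 1, FRAME_HEIGHT).astype(int)
    cols = np.linspace(0, gray.shape[1] - 1, FRAME_WIDTH).astype(int)
    return gray[np.ix_(rows, cols)] / 255.0


class AtariInterface:
    """Wraps an emulator so the agent only ever sees pixels, actions, score."""

    def __init__(self, legal_actions):
        self.legal_actions = legal_actions  # e.g. joystick directions + fire
        self.frames = [np.zeros((FRAME_HEIGHT, FRAME_WIDTH))] * STACK_SIZE

    def observe(self, rgb_frame, score_delta):
        """Return the stacked-frame state and a reward derived from the score."""
        self.frames = self.frames[1:] + [preprocess(rgb_frame)]
        state = np.stack(self.frames, axis=0)   # shape (4, 84, 84)
        reward = float(np.sign(score_delta))    # clip reward to -1, 0 or +1
        return state, reward
```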
The pair added that DQN combines deep neural networks with reinforcement learning.
“Foremost among these was a neurobiologically inspired mechanism, termed ‘experience replay’, whereby during the learning phase DQN was trained on samples drawn from a pool of stored episodes – a process physically realized in a brain structure called the hippocampus through the ultra-fast reactivation of recent experiences during rest periods (eg sleep),” they said.
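For readers curious how that replay idea looks in practice, the following is a minimal sketch, assuming a simple ring-buffer memory and a standard Q-learning target; it is not the authors’ implementation, and the buffer capacity, batch size, and discount factor gamma are placeholder values.

```python
import random
from collections import deque

import numpy as np

# Sketch of the "experience replay" mechanism described above: store past
# transitions and train on random samples drawn from that pool.


class ReplayBuffer:
    """Stores past (state, action, reward, next_state, done) transitions."""

    def __init__(self, capacity=100_000):
        self.memory = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.memory.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        """Draw a random minibatch, breaking up the correlations between
        consecutive frames -- the 'replay' of stored experience."""
        return random.sample(self.memory, batch_size)


def q_learning_targets(batch, q_next, gamma=0.99):
    """Compute Q-learning targets r + gamma * max_a' Q(s', a') for a minibatch.

    `q_next` is an array of the network's Q-value estimates for each
    next_state in the batch (shape: batch_size x num_actions).
    """
    targets = []
    for (_, _, reward, _, done), q_vals in zip(batch, q_next):
        bootstrap = 0.0 if done else gamma * float(np.max(q_vals))
        targets.append(reward + bootstrap)
    return np.array(targets)
```

The network is then trained to push its predicted Q-values for the sampled actions towards these targets, which is the reinforcement-learning half of the system the quote above refers to.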
Google acquired DeepMind last year for a reported $400m and has since teamed up with Oxford University for joint research into AI. DeepMind was initially developed with financial backing from Tesla Motors’ CEO Elon Musk, who last year said he took a stake in the business in the hope of steering AI away from a Terminator-like future.
DeepMind said that its AI tech could end up helping to improve products like Google Now. “Imagine if you could ask the Google app to complete any kind of complex task: ‘OK Google, plan me a great backpacking trip through Europe!’”