06 June 2016
In March this year, AlphaGo, a machine created by Google’s artificial intelligence (AI) arm, DeepMind, trounced Mr Lee Sedol, a grandmaster at Go, the ancient Chinese game. AlphaGo used cutting-edge AI to beat a player acknowledged to be one of the greatest ever.
For Go aficionados, the game will never be the same again, just as chess was changed when IBM’s Deep Blue beat then world champion Garry Kasparov in 1997. That year, it was widely thought that while machines could master chess, beating the world’s best at Go – a far more complex game with near-infinite variations of play – was still several decades away.
Deep Blue used brute-force calculation and sheer computing power to beat the reigning world champion. Not so with AlphaGo, which used deep neural networks and reinforcement learning, largely independent of human input. The machine learnt on its own as it progressed and got stronger as it played (it seems, too, that it may have learnt most from the single game it lost). Seasoned Go players marvel at the complexities of AlphaGo’s play – it baffles experts and has the potential to change Go (even human-human play) for good. None of these subtleties was present in the Deep Blue-Kasparov match.
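AlphaGo’s full training pipeline combined supervised learning on human games with deep reinforcement learning and tree search, which is well beyond a short sketch. But the core self-play idea described above – a program that gets stronger simply by playing against itself and updating its own value estimates – can be illustrated on a toy game. The sketch below is a minimal, hypothetical example (all names and parameters are invented for illustration): tabular Q-learning via self-play on a miniature Nim game, where players alternately take 1 or 2 stones and whoever takes the last stone wins.

```python
import random

def train_selfplay(episodes=5000, alpha=0.5, epsilon=0.2, start_pile=10, seed=0):
    """Self-play Q-learning on miniature Nim (take 1 or 2; last stone wins).

    Both players share one Q-table, so every game improves the same policy:
    the program literally learns by playing against itself.
    """
    rng = random.Random(seed)
    Q = {}  # (pile, action) -> estimated value for the player about to move

    def greedy(pile):
        moves = [a for a in (1, 2) if a <= pile]
        return max(moves, key=lambda a: Q.get((pile, a), 0.0))

    for _ in range(episodes):
        pile = start_pile
        while pile > 0:
            moves = [a for a in (1, 2) if a <= pile]
            # Explore occasionally; otherwise play the current best move.
            a = rng.choice(moves) if rng.random() < epsilon else greedy(pile)
            nxt = pile - a
            if nxt == 0:
                target = 1.0  # taking the last stone wins
            else:
                # The opponent moves next, so our value is the negative
                # of their best achievable value (zero-sum backup).
                target = -max(Q.get((nxt, b), 0.0) for b in (1, 2) if b <= nxt)
            old = Q.get((pile, a), 0.0)
            Q[(pile, a)] = old + alpha * (target - old)
            pile = nxt
    return Q, greedy
```

After a few thousand self-play games the greedy policy discovers the optimal strategy for this game (always leave the opponent a multiple of three stones), with no human examples involved – the same principle, vastly scaled up, that let AlphaGo improve beyond its human training data.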
What AlphaGo has shown is that advances in AI once thought to require several decades can be compressed into a few years.
Change is happening at a very fast rate and policymakers may not have the luxury of time to adjust and to make decisions.
It is time to start thinking about what exactly this all means for us as individuals and for humanity as a whole.
… Dr Shashi Jayakumar is head of the Centre of Excellence for National Security at the S. Rajaratnam School of International Studies, Nanyang Technological University.