Artificial Intelligence is Go Go Go
18 Mar 2016 2:26 am by Megan Burnside
Google Artificial Intelligence (AI) start-up DeepMind has been making headlines for the significant progress it has made in machine learning, after its 'AlphaGo' program beat an 18-time world champion Go player four games to one.
Bringing AI to the masses
The Google 'DeepMind Challenge' – a series of five games between a computer and a world champion at the ancient game of Go – has captured media attention and marks a major milestone in artificial intelligence. The AlphaGo program lost just one of the five games against its human competitor, and the match has prompted a media debate about the future of machine learning.
Background to a very public challenge
Founded by three London entrepreneurs in 2010, DeepMind was purchased by Google in 2014 – its largest European acquisition to date. DeepMind creates artificially intelligent algorithms capable of learning directly from raw experiences or data.
AlphaGo's victory was the first time an AI computer program had defeated a professional Go player. The success of Google DeepMind's program demonstrates the power of AI programming; the rules of Go are simple, but the number of possible positions in the game exceeds the number of atoms in the universe.
Computers versus humans
This is not the first time an AI computer program has beaten a professional human player. In 1956, Los Alamos Scientific Laboratory's 'MANIAC' program became the first computer to defeat a human in a chess-like game. Since then, computers have regularly beaten top human players in well-publicised events. In 1997, IBM's 'Deep Blue' supercomputer defeated reigning world chess champion Garry Kasparov in a six-game match.
Until recently, AI programs were focused on very specific tasks. They performed incredibly well at these tasks, but little else. What is significant about Google DeepMind is that its AI is general – it can perform well across a wide variety of tasks. This allows it to mimic human-like skills, such as analysis and intuition, and to be effectively applied to technology like facial and speech recognition.
Bringing AI to the mass media
For many people, the technicalities of AI – and consequently how it affects our daily lives – are a mystery. 'Specific' AI programs, however, are all around us. Software that detects credit card fraud, predicts shopping behaviour, or recommends music or films to you is fundamentally based on specific AI algorithms. American computer scientist John McCarthy, one of the founders of AI and the person who coined the term, said: "as soon as it works, no one calls it AI anymore".
AI in the media
Whether intentionally or not, the Go challenge has done more than simply demonstrate what AI can achieve. It has created a global debate over what machines can do and how they will impact human life. Analysis of the media profile of artificial intelligence in the UK national press, using the Nexis media monitoring tool, shows clearly that the Go challenge has sparked a significant debate in the national media.
Charts produced using Nexis Analyser show that historically, fewer than one hundred articles were published on AI each month from March 2014 to March 2015.
2015 – 2016 has been a different story: AI has taken on a life of its own, with media interest steadily increasing. The lead-up to the Google DeepMind Challenge has sparked the imagination of the UK media, and national press references to AI have almost tripled. In October 2015, AI's media profile reached 'Go' proportions when Professor Stephen Hawking, during an 'Ask Me Anything' session on Reddit, warned that artificially intelligent machines could wipe out the human race if they became too clever. But even this peak has been surpassed by the recent interest in the Go project.
Whilst the Go competition was undoubtedly a success for Google's technical team, its marketing people must also be celebrating: the competition demonstrates that a well-managed publicity event can deliver significant media interest and facilitate debate, even around issues as complex as machine learning and the future of computing.