Wednesday, January 27, 2016

Google beats Facebook to Go-playing software – SPIEGEL ONLINE

Short on time? There is a summary at the end.


It could of course be a coincidence that Mark Zuckerberg published this Facebook post on the very morning of 27 January. But it very likely is not. Zuckerberg’s post says, in short: our artificial intelligence experts have made tremendous progress in teaching computers to play Go. Then follows what amounts to public praise from the Facebook CEO before the eyes of the world: “The researcher working on this, Yuandong Tian, sits about six feet from my desk. I think it’s great that our AI team is so close to me, so I can learn from what they are working on.”

Yuandong Tian no doubt finds that great. He is likely to be less enthusiastic, however, about the probable real reason for Zuckerberg’s public praise: the competition, Google’s AI researchers, more precisely those of DeepMind, the company Google bought, have just won something in the matter of Go.

Not a single game, not even a tournament, but the race for the first program capable of actually beating professional human players.

“A professional human player defeated”

The success also comes with a prestigious scientific publication. The researchers around David Silver and DeepMind founder Demis Hassabis are now publishing an article in “Nature” entitled “Mastering the game of Go with deep neural networks and tree search”. (Background on the keyword neural networks can be found in the box below.)

The most triumphant sentence, and the most important one for laymen who find the research report hard to follow, comes at the end of the abstract: “This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.”

Yuandong Tian and Yan Zhu, who work on AI problems not for Google but for Facebook, have also created a Go program with the help of a neural network, one that is said to be superior to commercially available programs. A clear 5:0 win against the Go professional Fan Hui, however, is not something they have to offer. Fan, who was born in China and now lives in France, won the title of European Go champion every year from 2013 to 2015. He lost the first of the five games against the software christened AlphaGo outright; the other four he resigned at some point during play.

Facebook’s paper: brought up to date once more

The Facebook team has so far presented its Go engine only in the form of a discussion paper. Yuandong Tian updated that paper once more yesterday, Tuesday, when it was already known in the field that the “Nature” paper was about to be released.

Go is one of the current supreme disciplines of AI research, as chess once was. Until now, Go programs have reached at most the level of good amateurs, simply because the game is far more complex and varied than chess: the Go board has 19 x 19 intersections, while chess, as is well known, has only 8 x 8 squares. The total number of possible positions in Go is larger by many orders of magnitude.
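The gap can be made concrete with a back-of-the-envelope calculation. The counts below are deliberate overestimates, since they ignore which configurations are actually legal in either game, but they show the scale difference:

```python
# Crude upper bounds on the number of board configurations, counting each
# point independently (an overestimate: legality rules are ignored).
go_upper_bound = 3 ** (19 * 19)    # each of 361 intersections: empty/black/white
chess_upper_bound = 13 ** (8 * 8)  # each of 64 squares: empty or one of 12 pieces

print(len(str(go_upper_bound)))    # → 173 (digits, i.e. roughly 10^172)
print(len(str(chess_upper_bound))) # → 72 (digits, i.e. roughly 10^71)
```

Even these rough bounds put Go about a hundred orders of magnitude beyond chess.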

Go is played by two players who alternately place round, flat stones on the intersections of the lines on the board. The aim of the game is to occupy as large a territory as possible. Whoever completely surrounds a group of enemy stones with stones of their own color may remove it from the board.
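The capture rule lends itself to a mechanical check: a group is captured exactly when it has no “liberties”, that is, no empty intersection adjacent to any of its stones. A minimal sketch in Python (the function name and board encoding are illustrative, not taken from any real Go engine):

```python
def has_liberties(board, row, col):
    """Return True if the group containing the stone at (row, col) has at
    least one liberty. `board` is a list of lists of '.', 'B' or 'W'."""
    color = board[row][col]
    size = len(board)
    seen = set()
    stack = [(row, col)]
    while stack:                      # flood-fill over the connected group
        r, c = stack.pop()
        if (r, c) in seen:
            continue
        seen.add((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < size and 0 <= nc < size:
                if board[nr][nc] == '.':
                    return True       # an empty neighbour: a liberty
                if board[nr][nc] == color:
                    stack.append((nr, nc))
    return False                      # fully surrounded: the group is captured

# A lone white stone surrounded on all four sides by black has no liberties:
board = [list(".....") for _ in range(5)]
board[2][2] = 'W'
for r, c in ((1, 2), (3, 2), (2, 1), (2, 3)):
    board[r][c] = 'B'
print(has_liberties(board, 2, 2))    # → False
```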

Go originated in ancient China, and to this day the game is extremely popular, especially in Asia. Because of the enormous variety of possible moves and constellations, it has been particularly difficult for computers to master with previous methods.

1,200 CPUs, 176 graphics cards

Google’s DeepMind team cracked the problem by combining several neural networks with a technique already known from other Go programs. One of the networks was trained by being confronted with real moves from a large database of professional games. Another played against itself over and over again. The result is a “value network”, which evaluates the current constellation on the board, and a “policy network”, which chooses the right moves. These two networks were combined with a classical decision-tree method called Monte Carlo tree search.
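How such a search can combine the two networks is easiest to see in code. The following is a deliberately simplified illustration of the general scheme, not DeepMind’s actual algorithm: the “networks” are trivial stand-ins, the game is a toy counting game (take 1 or 2 from a pile; whoever takes the last piece wins), and all names and constants are invented for illustration:

```python
import math

def legal_moves(n):                       # toy game: take 1 or 2 from a pile of n
    return [m for m in (1, 2) if m <= n]

def policy_network(n):                    # stand-in: uniform move probabilities
    moves = legal_moves(n)
    return {m: 1.0 / len(moves) for m in moves}

def value_network(n):                     # stand-in: neutral position estimate
    return 0.0

class Node:
    def __init__(self, state, prior):
        self.state, self.prior = state, prior
        self.children = {}                # move -> child Node
        self.visits, self.value_sum = 0, 0.0

    def q(self):                          # mean value, from this node's player
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node):
    # Prefer moves that look good for us (the child's value is from the
    # opponent's view, hence the minus sign) and moves the policy likes
    # but the search has rarely tried.
    def score(child):
        u = 1.5 * child.prior * math.sqrt(node.visits) / (1 + child.visits)
        return -child.q() + u
    return max(node.children.items(), key=lambda mc: score(mc[1]))

def search(root_state, simulations=200):
    root = Node(root_state, prior=1.0)
    for _ in range(simulations):
        node, path = root, [root]
        while node.children:              # 1. selection: walk down the tree
            _, node = select_child(node)
            path.append(node)
        if legal_moves(node.state):       # 2. expand and evaluate via the nets
            for move, p in policy_network(node.state).items():
                node.children[move] = Node(node.state - move, p)
            value = value_network(node.state)
        else:
            value = -1.0                  # no moves left: player to move lost
        for n in reversed(path):          # 3. backup, flipping sign per ply
            n.visits += 1
            n.value_sum += value
            value = -value
    best_move, _ = max(root.children.items(), key=lambda mc: mc[1].visits)
    return best_move                      # play the most-visited move

print(search(2))                          # → 2 (taking 2 wins immediately)
```

In AlphaGo itself, the uniform prior is replaced by the trained policy network and the neutral estimate by the value network (plus rollouts), which is what focuses the search on a tiny fraction of the possible continuations.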

With this combination of elements, the researchers ensured that AlphaGo “evaluated thousands of times fewer positions than Deep Blue did in its chess match against Kasparov”, they write in “Nature”. The positions are selected more intelligently, thanks to the policy network, and evaluated more precisely, thanks to the value network. This approach is “perhaps closer to how humans play”. The computing power it relies on, however, is thoroughly superhuman: in its strongest version, AlphaGo runs on more than 1,200 central processing units (CPUs) and a further 176 graphics processing units (GPUs), which are particularly good at parallel operations.

The next test for the digital Go prodigy is already planned for March: AlphaGo will compete in Seoul against the South Korean Lee Se-dol, born in 1983, a player who holds the highest attainable Go rank. He is nicknamed “the unbeatable boy”. Lee Se-dol is considered one of the best, if not the best, living players in the world.

And eventually there may be yet another great duel: between Google’s software and the artificial Go player of Yuandong Tian. Facebook’s AI researchers could then still earn a little fame, or lose for a second time.


To summarize: DeepMind celebrates a breakthrough in the field of artificial intelligence. The software of the company, which was acquired by Google, has managed to beat a Chinese-born professional at the game of Go. Go is one of the current supreme disciplines of AI research, as chess once was. With this success, Google beats a Facebook team that is also working in the field to the punch.

Neural networks: the history

Neural network: nodes and weighted links (David E. Rumelhart / James L. McClelland)

The idea that one could replicate the morphology of the human nervous system in order to teach machines something like thinking, learning or perception dates back to the 1940s. For a long time, however, the so-called neural network models remained rather rudimentary, a field for specialists with a particular fondness for abstraction. Then, in the first half of the eighties, this changed, thanks mainly to a single study. The psychologists David Rumelhart and James McClelland showed that even an extremely rudimentary pseudo-brain can learn to form the past tense of English verbs correctly, and that in the course of the learning process it temporarily makes the same mistakes as a human child does at the same task. Instead of “went”, the network temporarily produced “goed” as its answer: it applied the rule correctly, just to an irregular verb.

So the network learned rules, and then the exceptions to those rules, without a single rule ever having been formulated explicitly. The study sparked a small boom in cognitive science; suddenly neural network models were being applied to all kinds of problems, and the term “connectionism” arose for the new science. Then came the internet, the digital revolution took its course, and suddenly there was computing power and suitable hardware galore. Today neural networks are no longer just models for psychologists: they have become powerful tools in the hands of those who want computers to see, think and interpret.

The basic principle of such neural networks is always the same. They consist of two or more layers of nodes, the simulated neurons. The layers are connected by many links; typically, each node of one layer is connected to all nodes of the next. The input nodes stand for elementary features: they could, for example, represent the pixels of a given image.

When an input node is activated, it passes this activation on via its connections to the nodes of the next layer. The connections are weighted, which one can picture as their being of different thicknesses: the thicker the connection, the stronger the activation that arrives at the next node. A network is trained, in a sense, in reverse: if the output layer does not produce the desired result, the weights of the connections can be adjusted layer by layer with a mathematical mechanism, so that next time the result fits the actual input better. Over many passes, such networks can learn to link inputs to the correct outputs.
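That loop of forward activation and layer-by-layer weight adjustment fits in a few lines. The sketch below is a minimal illustration, not any particular library’s API; the network size, learning rate and the XOR task are chosen only for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR: needs a hidden layer

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)    # input -> hidden weights
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)    # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)             # forward: activation flows to hidden
    out = sigmoid(h @ W2 + b2)           # forward: hidden to output
    d_out = out - y                      # error at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)   # error passed back to the hidden layer
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))          # trained outputs for 00, 01, 10, 11
```

After training, the four outputs sit close to the target pattern 0, 1, 1, 0: the network has learned XOR without that rule ever being stated explicitly.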

