Friday, March 4, 2016

Google PlaNet: software detects where a photo was taken – Mobilegeeks

Robotics, artificial intelligence and clever software algorithms: amazing progress is being made in many areas of technology, and it both excites us about the new possibilities and, at the same time, gives us a slight case of the jitters about what will soon be technically feasible.

We don't even have to reach for the gloomiest science fiction fantasies that paint a picture of a world in which machines surpass humans in intelligence and learning ability and could therefore turn against us. It is enough to keep spinning the thread of what is happening with our data, based on what is possible today.

Just look at the comments on social media whenever a platform like WhatsApp updates its terms and conditions or Facebook introduces a new feature. Alongside the joy over a new feature – most recently the newly introduced emoji alternatives to the like-thumb – there is always the fear that such a feature also serves Facebook by harvesting more analyzable data about us.

Deep Learning: algorithm recognizes places in photos

If we want to discuss this double-edged sword – excitement about a new technology and fear of the options that come with it – using a concrete example, then a new algorithm arrives at just the right time, courtesy of scientists at Google: the algorithm, called PlaNet, uses neural networks for geolocation, that is, to determine the place where a photograph was taken – without needing any metadata supplied with the image.

Tobias Weyand and James Philbin of Google, together with Ilya Kostrikov (Rheinisch-Westfälische Technische Hochschule Aachen), have summarized their findings on PlaNet in a scientific paper that you can view here as a PDF document.

So what has been done here? First of all, our Earth was divided into 26,000 squares of varying size: the more images are available for a square, the smaller it is. This means that an area like the Ruhr, with its many major cities and the correspondingly higher number of photos snapped there, tends to be covered by small squares, whereas a less populated area gets a larger square.
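
To make this adaptive grid a little more concrete, here is a minimal sketch in Python. It is an illustration only: the thresholds max_photos and min_photos are invented for this example, and Google's implementation works on a hierarchy of map cells rather than the plain latitude/longitude boxes used here.

    # Toy sketch of adaptive partitioning: split a lat/lon box whenever it
    # contains more photos than a threshold, so photo-dense regions end up
    # with many small cells and sparse regions with a few large ones.
    # (Illustration only -- not the partitioning code from the paper.)

    def partition(photos, box=(-90.0, 90.0, -180.0, 180.0),
                  max_photos=10_000, min_photos=50):
        lat_lo, lat_hi, lon_lo, lon_hi = box
        inside = [(lat, lon) for lat, lon in photos
                  if lat_lo <= lat < lat_hi and lon_lo <= lon < lon_hi]
        if len(inside) < min_photos:        # too little data: drop the cell
            return []
        if len(inside) <= max_photos:       # small enough: keep this cell
            return [box]
        lat_mid = (lat_lo + lat_hi) / 2
        lon_mid = (lon_lo + lon_hi) / 2
        cells = []
        for sub in [(lat_lo, lat_mid, lon_lo, lon_mid),
                    (lat_lo, lat_mid, lon_mid, lon_hi),
                    (lat_mid, lat_hi, lon_lo, lon_mid),
                    (lat_mid, lat_hi, lon_mid, lon_hi)]:
            cells += partition(inside, sub, max_photos, min_photos)
        return cells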



The system was then fed with 29.7 million photo albums publicly available on Google+, comprising roughly 490 million photos. In addition, the algorithm was trained on 91 million of a total of 126 million individually geotagged photographs, with the remainder used to validate the results. If PlaNet is now presented with an image, it can assign the image content to one of these squares – and thus to a place – without access to any supplied geodata.
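
That assignment can be pictured as a classification problem: a network scores every grid cell and picks the most probable one as the predicted location. The paper itself uses a much larger convolutional network; the tiny PyTorch model below is only a stand-in to illustrate that framing, with all layer sizes chosen arbitrarily.

    # Minimal sketch: geolocation framed as classification over grid cells.
    # A small CNN produces an image embedding, and a final linear layer
    # scores each of the ~26,000 cells; the highest-scoring cell is the
    # predicted location. (Toy model, not PlaNet's actual architecture.)
    import torch
    import torch.nn as nn

    NUM_CELLS = 26_000   # number of grid cells from the partitioning step

    class TinyGeoClassifier(nn.Module):
        def __init__(self, num_cells=NUM_CELLS):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(64, num_cells)

        def forward(self, images):                 # images: (N, 3, H, W)
            x = self.features(images).flatten(1)   # (N, 64)
            return self.classifier(x)              # cell scores: (N, num_cells)

    model = TinyGeoClassifier()
    logits = model(torch.randn(1, 3, 224, 224))
    predicted_cell = logits.argmax(dim=1)          # index of the most likely cell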

If I snap a photo in front of Big Ben or at the Grand Canyon, it is easy for anyone viewing the picture to tell where it was taken. But now we take a giant step further: in future, even images that do not immediately reveal their location to us will be given a geolocation.

For this to work, the algorithm considers different cues in the photo: natural attractions and distinctive landmarks are identified and pinpointed precisely, but landscapes or particular neighborhoods are also recognized, taking into account the architecture of buildings and typical local objects such as red telephone boxes, as well as certain plant and animal species.


How well does the PlaNet algorithm already work? To determine its accuracy, the researchers fed the system 2.3 million Flickr photos that come with geotags (a small sketch of how such accuracy figures can be computed follows below the list). The results are already quite impressive:

  • For 48 percent of the photos, the correct continent was determined
  • 28.4 percent of the photos could be assigned to the right country
  • For 10.1 percent of the photos, even the correct city was named
  • For 3.6 percent, finally, the location was pinned down to the exact street
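
The sketch below shows roughly how such accuracy figures can be computed: measure the great-circle distance between each predicted and true location and count how many predictions fall under a per-level threshold. The kilometer thresholds used here are assumptions chosen for illustration (roughly street, city, country and continent scales), not values taken from the paper.

    # Sketch: accuracy at several geographic levels from predicted vs. true
    # coordinates, using the haversine great-circle distance.
    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two lat/lon points in kilometers."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 \
            + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371.0 * asin(sqrt(a))

    # Assumed thresholds for this illustration
    LEVELS_KM = {"street": 1, "city": 25, "country": 750, "continent": 2500}

    def accuracy_by_level(predictions, ground_truth):
        """predictions / ground_truth: equal-length lists of (lat, lon) tuples."""
        errors = [haversine_km(*p, *t) for p, t in zip(predictions, ground_truth)]
        return {level: sum(e <= km for e in errors) / len(errors)
                for level, km in LEVELS_KM.items()}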

It was also tested how the system holds up when it competes against well-traveled people. Ten globetrotters with extensive knowledge of our planet dueled the algorithm in a game called Geoguessr: you are shown a random image from the Street View database and then place a pin anywhere on the world map where you believe the photo was taken.


50 rounds were played, and the algorithm already beats humans! 28 of the 50 rounds went to PlaNet; on average the system was off by 1,131.7 kilometers, while the human contestants' guesses landed an average of 2,320.75 kilometers away from where the photo was taken. With such a handful of testers that may not be representative, but try Geoguessr yourself and you will get a better idea of how strong the algorithm is when it misses the mark by "only" just over 1,100 kilometers on average.

Incidentally, the technical requirements are kept so low that the processors and memory in our smartphones are easily sufficient to put the possibilities of this neural network – with its superhuman abilities – at our disposal in the future.

"Our model uses only 377 MB, which even fits into the memory of a smartphone." – Tobias Weyand
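
As a quick sanity check on that number: assuming standard 32-bit floating point weights (an assumption on our part, not a detail from the paper), 377 MB corresponds to roughly 99 million parameters, which comfortably fits into the RAM of a 2016-era smartphone.

    # Back-of-the-envelope estimate for the quoted model size.
    model_size_mb = 377
    bytes_per_weight = 4                     # assuming float32 weights
    params = model_size_mb * 1024 * 1024 / bytes_per_weight
    print(f"~{params / 1e6:.0f} million parameters")   # ~99 million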

Where is the journey headed?

Software such as Google Photos or Facebook's facial recognition already shows how reliably people or objects can be detected. That is basically very welcome, offers many opportunities and can often make our lives easier. At the same time, the sword of Damocles engraved with "What are they doing with our data?" keeps dangling over us. This is not a groundless fear, because it is no longer just about detecting whether there is a dog in the picture or – in the case of PlaNet – recognizing that a photo was snapped in Tanzania. The large US companies collect massive amounts of data of every kind, and thanks to these sophisticated algorithms, that data is being linked together ever more quickly.

Above I described how PlaNet orients itself by landscapes, plant and animal life and other characteristics in the image, but indoor shots can also be placed. All it takes is, for example, your "London" album on Google: if it sees the Tower of London in one image and Piccadilly Circus in the next but one, the software assumes that the photo of drunken revelry in the hotel room between those two images was also shot in London. Google can in any case trace pretty accurately when we were where; now the same works just as well without any movement data, determined from the pictures alone. If such algorithms take one more step forward in accuracy, it takes little imagination to picture the future possibilities of the technology: Google – or whichever company – gets a photo and recognizes where we are, which beer or cigarette brands are on the table, with which people we hang out at which events, and much more.
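
To illustrate the album idea in the simplest possible terms, here is a toy sketch in which photos with an uncertain prediction simply inherit the location of the nearest confidently placed photo in the same album. The real PlaNet work models whole album sequences with a recurrent network; this rule and the 0.6 confidence cut-off are invented for the example.

    # Toy stand-in for album-level reasoning: uncertain photos borrow the
    # cell of the closest confidently-placed neighbour in the same album.

    def smooth_album(album, min_confidence=0.6):
        """album: list of dicts with 'cell' (predicted grid cell) and 'confidence'."""
        confident = [i for i, p in enumerate(album) if p["confidence"] >= min_confidence]
        for i, photo in enumerate(album):
            if photo["confidence"] >= min_confidence or not confident:
                continue
            nearest = min(confident, key=lambda j: abs(j - i))  # closest sure photo
            photo["cell"] = album[nearest]["cell"]              # borrow its location
        return album

    # Example: the hotel-room shot between Tower of London and Piccadilly
    album = [
        {"cell": "london_tower", "confidence": 0.9},
        {"cell": "unknown",      "confidence": 0.2},   # indoor shot
        {"cell": "piccadilly",   "confidence": 0.8},
    ]
    print(smooth_album(album))   # the middle photo is assigned a London cell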

Link all of this with the data collected elsewhere, and we get ever closer to the transparent citizen. I may be knit a bit naively in this respect, and so I believe that these technical achievements move us forward considerably more than they endanger us, and that Google actually has more good in mind than evil designs. Nevertheless, we should be aware of what is already possible – and of what will become possible in the coming years.

So, to close, a question for all of you: how do you feel about developments such as PlaNet? Do you tend to see them more as an opportunity, do they rather scare you – or are you, like me, a little ambivalent about the whole thing? Let us know in the comments!

Source: Arxiv.org via MIT Technology Review

