Saturday, July 30, 2011

Here’s How U.S. Spies Will Find You Through Your Pics

By Richard Wheeler - WIRED
July 29, 2011



Iarpa, the intelligence community’s way-out research shop, wants to know where you took that vacation picture over the Fourth of July. It wants to know where you took that snapshot with your friends when you were at that New Year’s Eve party. Oh yeah, and if you happen to be a terrorist and you took a photo with some of your buddies while prepping for a raid, the agency definitely wants to know where you took that picture — and it’s looking for ideas to help figure it out.

In an announcement for its new “Finder” program, the agency says that it is looking for ways to geolocate (a fancy word for “locate” that implies having coordinates for a place) images by extracting data from the images themselves and using this to make guesses about where they were taken.

More and more digital cameras today don’t just take pictures but also capture what is called metadata — often referred to as data about data — that can include everything from when the picture was taken to what kind of camera was used to where it was taken. This metadata, often stored in a format called EXIF, can be used by different programs to understand different aspects of the image — and also by intelligence analysts to understand different aspects of the user who took it, and the people who are in it: who they are, what they are doing, and where and when they did it.
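For the purely mechanical side of this, here’s a minimal sketch of what reading those tags looks like, assuming Python with the Pillow imaging library; the file name "photo.jpg" and the function name are placeholders, not anything from Iarpa’s announcement:

from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def gps_from_exif(path):
    # _getexif() returns a dict of raw EXIF tags, or None if the file carries none;
    # getattr() guards against formats (like BMP) that don't support EXIF at all
    exif = getattr(Image.open(path), "_getexif", lambda: None)()
    if not exif:
        return None
    tags = {TAGS.get(k, k): v for k, v in exif.items()}
    gps = {GPSTAGS.get(k, k): v for k, v in tags.get("GPSInfo", {}).items()}
    if "GPSLatitude" not in gps or "GPSLongitude" not in gps:
        return None

    def to_degrees(dms, ref):
        # EXIF stores coordinates as degree/minute/second rationals plus an N/S or E/W reference
        d, m, s = (float(x) for x in dms)
        deg = d + m / 60.0 + s / 3600.0
        return -deg if ref in ("S", "W") else deg

    lat = to_degrees(gps["GPSLatitude"], gps.get("GPSLatitudeRef", "N"))
    lon = to_degrees(gps["GPSLongitude"], gps.get("GPSLongitudeRef", "E"))
    return (lat, lon)

print(gps_from_exif("photo.jpg"))  # e.g. roughly (36.1, -112.1) for a Grand Canyon snapshot

If the camera recorded GPS tags, that’s the whole geolocation problem solved in a couple of dozen lines.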

Sounds great! But there are a few small problems.

First, not all images are digital. Those old pictures of your parents that you scanned? No metadata. Also, not all digital image formats support metadata. That BMP file you’ve got from 1996? No metadata there, either. Next, even if the image format supports metadata, not all digital images are captured with it — or they are, but not with a full set. That picture from your old flip phone? No metadata, or not enough of it. And many popular websites — Facebook, for example — strip EXIF tags when you upload. So there’s no way to get at the metadata unless you can somehow get access to the source file — which means hacking.
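To get a rough feel for how often the metadata route is even an option, a quick survey like the sketch below (again assuming Python and Pillow, with "vacation_pics" as a placeholder folder) counts how many images in a folder carry any EXIF block at all:

import os
from PIL import Image

folder = "vacation_pics"  # placeholder path
total = with_exif = 0
for name in os.listdir(folder):
    if not name.lower().endswith((".jpg", ".jpeg", ".png", ".bmp", ".tif", ".tiff")):
        continue
    total += 1
    # the same getattr() guard as before: BMPs have no EXIF support at all,
    # and many JPEGs (scans, web downloads, stripped uploads) simply lack the block
    exif = getattr(Image.open(os.path.join(folder, name)), "_getexif", lambda: None)()
    if exif:
        with_exif += 1
print(f"{with_exif} of {total} images carry any EXIF metadata")

Run it on a folder of images saved from the web and you’ll see the point: much of what’s out there has no metadata to extract.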

All that means that there are a lot of images out there with no metadata, or with metadata that you can’t get to very easily. But these images might still contain visual information, or other clues, that could enable a system — either completely automated or combining automated and human processes — to make a guess about where the image was taken. The best case for intelligence analysts would be a fully automated system: they could suck in images from a terrorist website, download them off of captured cameras or cell phones, or scan them from hard copy, feed all of it through the system, and get back locations for where the images were taken. With more and more images being created every day, this automated approach is going to be crucial.

You can already see a little bit of this happening with the new Google Image Search, which has a “reverse image search” capability that lets you search for other instances of the same image on the web. In most cases, this is limited to the exact same image. For example, open Google Image Search in a second browser window and drag in this image:

No matches. So is this helicopter flying over Khost Province in Afghanistan or over the back side of the Hollywood sign? Hard to tell from the image itself. And if you try typing both “Khost Province” and “Hollywood” into the search bar, you’ll get results that point in both directions. Even for a trained human analyst, this might prove too hard to crack (although the lack of rocket pods on this helicopter makes a good case for it not being an MH-6 Little Bird, which points to Hollywood over Khost).

But for some places that have been photographed over and over again, Google can guess where the image was taken. Drag this into Image Search:


If you didn’t guess already, or if you’re still figuring out Image Search, or if you’re impatient, or if you’re just lazy, here’s a hint: It’s the Grand Canyon. Not too hard for Google to guess because so many people have shot it. When it works like this, Google Image Search is almost like a biometrics program for places.

There is also a middle ground where there will probably still be a place for the human, most likely with images that have some text data associated with them, where not just pattern matching but intuition will be useful.


The caption for this image reads “An Mi-17 helicopter flies to Kabul, coming back from a humanitarian assistance mission in Baharak, Badakhshan province, Afghanistan.” If you didn’t know it was Afghanistan, you might think you were looking at the Sierras. But once you know it’s Afghanistan, and Badakhshan province, and near Baharak, and taken on a flight from Baharak to Kabul, a trained analyst can take a look at the big peak in the background and the distinctive runoff pattern in the foothill at the bottom of the frame, poke around in a 3D visualization program like Google Earth, and say that the picture was taken around here:



Iarpa will probably look for combinations of both of these approaches, but on an industrial scale. It’s a hard problem, but we are already starting to see the beginnings of a solution in the commercial world. And you’d better believe that it’s not just spooks who want to know where images were taken. Google, Facebook, Apple and all the other internet and social media giants are probably looking to do the same thing, so that they can better understand where their users are and what they are doing there.

So before long your Facebook or Google+ account will be automatically tagging who is in your pictures and where they were taken…

…and spooks might be, too.

Photos: 55th Combat Camera, U.S. Army; Richard Wheeler; Jonathan Zander/CC; 438th Air Expeditionary Wing, USAF; Google, Cnes/Spot, DigitalGlobe, Europa Technologies; Google, DigitalGlobe, Cnes/Spot, GeoEye
