Will image-search be Marketplace’s killer app?

07 Feb 2017

Facebook may have just revealed the killer feature for its new Marketplace app in the competition against established general classified players, such as OfferUp, LetGo, Mercari and Close5.

It’s a new technology that matches search requests to images — not tags and captions, but the content of the images themselves.

Facebook has been training its neural network on some of the billions of images it has hosted deep in the bowels of its members’ timelines. By comparing search descriptors added manually by members with visual features gleaned from the photos themselves, Facebook’s algorithms have been able to “learn” what certain (so far fairly common) images are.
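In broad strokes, this kind of weakly supervised learning pairs the text members typed with features extracted from the photos. A toy sketch in Python of the idea (all names and data here are hypothetical; Facebook's actual pipeline uses deep neural networks at vastly larger scale):

```python
# Toy illustration of learning image labels from paired text descriptors.
# Images are stood in for by tiny feature vectors; a nearest-centroid
# "classifier" learns what each label looks like on average.

from collections import defaultdict

def train_centroids(examples):
    """examples: list of (feature_vector, label) pairs."""
    sums = {}
    counts = defaultdict(int)
    for features, label in examples:
        if label not in sums:
            sums[label] = [0.0] * len(features)
        for i, v in enumerate(features):
            sums[label][i] += v
        counts[label] += 1
    return {label: [v / counts[label] for v in s] for label, s in sums.items()}

def classify(centroids, features):
    """Return the label whose centroid is closest to the given features."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(centroids[label], features))

# Hypothetical training data: (image features, member-typed descriptor)
training = [
    ([0.9, 0.1], "black polo"),
    ([0.8, 0.2], "black polo"),
    ([0.1, 0.9], "red dress"),
    ([0.2, 0.8], "red dress"),
]
centroids = train_centroids(training)
print(classify(centroids, [0.85, 0.15]))  # → black polo
```

The real system learns the feature vectors themselves from raw pixels, but the principle is the same: human-typed descriptors serve as free training labels.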

The new technology can be used in a variety of ways.

For individual members, it makes searching for images from friends on Facebook that much faster and more accurate. (Type in “black polo” and find pictures of yourself and others wearing such a shirt.)

For advertisers, it improves targeting — an obvious win for Facebook.

And for Facebook Marketplace, it means the ability to show related items a buyer might be interested in, based on analysis of a picture rather than explicit keywords or tags a user (or advertiser) had to type in. And if users do type in a search term (for example, “black polo”), Marketplace can be that much smarter in what it displays.
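One plausible mechanism for such related-item suggestions (a sketch with made-up data; the real system presumably uses learned deep-network embeddings) is ranking listings by the similarity of their image feature vectors:

```python
# Sketch of "related items": rank catalog listings by cosine similarity
# of image feature vectors. Listings and vectors here are hypothetical.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def related_items(query_vec, catalog, top_k=2):
    """catalog: list of (item_name, feature_vector) pairs."""
    ranked = sorted(catalog, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

catalog = [
    ("black polo shirt", [0.9, 0.1, 0.0]),
    ("navy polo shirt",  [0.8, 0.2, 0.1]),
    ("red sundress",     [0.1, 0.9, 0.3]),
]
print(related_items([0.85, 0.15, 0.05], catalog))
# → ['black polo shirt', 'navy polo shirt']
```

Because the comparison happens in feature space, the seller never has to type a keyword for the match to be found.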

TechCrunch suggested another compelling, if creepy, example: you see a picture in a status update of a friend wearing a dress you like; Facebook then connects you directly to that item to purchase on Marketplace.

Automatically organizing the gazillions of products and items that Facebook anticipates will be for sale on Marketplace represents an invaluable leg up, and it’s something that only Facebook could do — the competition simply doesn’t have the financial, human and computing resources, let alone the data, to pull off something of such scale.

Who else could do this?

Google, of course, whose open-source TensorFlow framework is used, among other things, to identify and label images; its image-recognition models are said to exceed 90 percent accuracy.

Pinterest is also in the game; the company recently launched “Visual Search,” which enables users to search for products within a Pin’s image.

Facebook adapted the new technology from its Lumos computer vision platform, which was originally intended to help people with visual impairments by generating spoken descriptions of photos. Accuracy wasn’t great at first – Lumos could tell you if a photo involved a stage and lights, but not what was happening on that stage. After a Facebook team painstakingly labeled 130,000 photos, the system can now give a more contextual description, like “people dancing on stage,” the same TechCrunch article explains.

Coming next: searching inside videos, not just still images.

How would you like to see Facebook use this new artificial intelligence for photos? What safeguards need to be added to ensure privacy is maintained and that automated Marketplace “suggestions” don’t become a new form of increasingly savvy visual spam?

As always, feel free to drop us a line and let us know what you think of our coverage of all things Facebook.


Brian Blum

Brian Blum covers the U.S., Canada and Israel for Classified Intelligence Report, and contributes to our special reports and research projects. Originally from San Francisco and now based in Jerusalem, he has been with the AIM Group since 2004. He is the president of Blum Interactive Media, specializing in writing and multimedia content development for online, print, video and audio. His clients include newspapers, universities and non-profits. He is currently working on a book about the billion-dollar bankruptcy of a once high-flying Israeli startup.