Whether you’re whipping up a meal at home or dining out, the impulse to snap a few photos before you eat—or before your table is allowed to eat—can be hard to resist. I’m certainly guilty of it, and you might be, too. But while many of us have been busy uploading, researchers have been working on a way to turn our obsession with photographing food into something useful. That’s what’s been happening at MIT, where the university’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has developed Pic2Recipe, an artificial intelligence program that turns a food photo into an ingredient list and suggested recipes.
If this sounds somewhat familiar, it’s because this season of Silicon Valley features a running plotline about SeeFood, a “Shazam for food.” By pure coincidence, that story line lined up with a series of updates to Lens, an object and food recognition program from Pinterest that lets you use the platform’s photos as a recipe finder.
What makes MIT’s program different is its scope: Pic2Recipe draws on the growing Recipe1M, a CSAIL database of more than one million recipes combed from the Internet, which the team has used “to train a neural network to find patterns and make connections between the food images and the corresponding ingredients and recipes.”
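The basic idea behind that kind of training can be sketched very roughly: map photos and recipes into a shared vector space, then return the recipe whose vector sits closest to the photo’s. Here is a minimal toy illustration in Python; the embeddings below are made up for the example, and the actual MIT model is a far more sophisticated neural network trained on the full Recipe1M data.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical joint embeddings. In a real system these vectors would be
# produced by a neural network trained on paired images and recipes.
recipe_embeddings = {
    "sugar cookies": [0.9, 0.1, 0.2],
    "sushi rolls": [0.1, 0.8, 0.3],
    "berry smoothie": [0.2, 0.3, 0.9],
}

def best_match(photo_embedding):
    """Return the recipe whose embedding is most similar to the photo's."""
    return max(
        recipe_embeddings,
        key=lambda name: cosine(photo_embedding, recipe_embeddings[name]),
    )

# Pretend this vector came from an image encoder looking at a cookie photo.
photo = [0.85, 0.15, 0.25]
print(best_match(photo))  # → sugar cookies
```

Finding the nearest recipe this way is only a cartoon of the approach, but it shows why a bigger, more varied database like Recipe1M helps: the more recipes in the space, the better the odds that a close neighbor exists for any given photo.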
In a video demo, Pic2Recipe correctly identified eight of the eleven ingredients used to make sugar cookies (it missed the ingredients for the icing), and the AI should keep improving as it trains on more data.
So far, Pic2Recipe is most accurate when assessing desserts; foods like sushi and smoothies still need more data. Going forward, the CSAIL team plans to refine the program so it can identify different varieties of mushrooms and onions, distinguish a dice from a rough chop, or even tell a braise from a bake. An online demo of Pic2Recipe is already live, so get a taste of the future by uploading your own photo and letting the AI get to work.