Cameras make the best sensors, all eyes on Amazon Go
I noticed with interest the announcement of Amazon Go, the concept grocery store that will open in Seattle in January and use sensors to record what you’ve taken from the shelves. If you haven’t seen how it works yet, watch this:
Users scan their phone upon entry, take things off the shelves, then just walk out – and whatever they’ve taken will automatically be billed to them.
Amazon does not explain exactly how they are making this bit of magic work, but they mention “machine learning, advanced computer vision, and AI” in their video. In one of those fun coincidences that happen from time to time, during the same week that Amazon Go launched, I came across a blog post by AT&T CTO Andre Fuetsch that highlighted some of the work being done in the AT&T Foundry that led them to realize just how much information could be gathered with cameras in place of individual sensors. That post explained some of what Amazon might be doing, and got me thinking about the implications this kind of Amazon Go shopping experience would have for the network.
The particular case being addressed by the team in the AT&T Foundry was how best to monitor soft drink refrigerators in convenience stores. Their first approach was to install a myriad of individual sensors to monitor things like temperature, humidity, number of cans on each shelf, and so on. But as the project progressed, they discovered that all of the information they wanted to track could be deduced using a combination of a video camera inside the cabinet and automated analysis of the video stream. How many cans are there, and are they in the right places? Easy to observe in the photo. What about temperature and humidity? Seems you can calculate that based on the size and quantity of the water droplets condensed on the cans. (Cool!!) And buyer info – who is buying which drink? The video of the hand reaching into the cabinet tells you almost everything you want to know. Take a look at that picture of my hand reaching for a water bottle. It can be reasonably deduced that I’m female, not young but not elderly (good analysis would probably narrow this down considerably), and Caucasian, and further deductions can be made based on store location and time of day. And remember, this is completely anonymized data.
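To make the idea concrete, here is a minimal sketch of how measurements might be derived from a single analyzed frame. The detection format, function names, and the droplet-size heuristic are all my own illustrative assumptions – this is not the AT&T Foundry’s or Amazon’s actual pipeline.

```python
# Hypothetical sketch: turning one analyzed camera frame into measurements.
# The input format (label/shelf detections plus measured droplet diameters)
# is an assumption for illustration only.

def summarize_frame(detections, droplet_diameters_mm):
    """detections: list of (drink_label, shelf_index) tuples from a vision model.
    droplet_diameters_mm: condensation droplet sizes measured in the image."""
    cans_per_shelf = {}
    for label, shelf in detections:
        cans_per_shelf.setdefault(shelf, []).append(label)
    counts = {shelf: len(labels) for shelf, labels in cans_per_shelf.items()}
    # Droplet size stands in for cabinet temperature/humidity (illustrative).
    mean_droplet = (sum(droplet_diameters_mm) / len(droplet_diameters_mm)
                    if droplet_diameters_mm else 0.0)
    return {"cans_per_shelf": counts, "mean_droplet_mm": mean_droplet}
```

For example, two cola cans detected on shelf 0 and one water bottle on shelf 1, with measured droplets of 1.2 mm and 1.4 mm, would summarize to per-shelf counts of {0: 2, 1: 1} and a mean droplet size of 1.3 mm.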
Based on this experience in the lab, Andre Fuetsch is convinced that video uplink is going to be a very big part of IoT, and the launch of Amazon Go makes me think that he’s right. This means it’s time to recalibrate how we think about IoT. Instead of only imagining a gazillion little sensors everywhere (though there will still be plenty of them), now add a healthy dose of video cameras pointed at all manner of things. A fat uplink pipe will send those pictures to an AI-driven analysis center to draw conclusions and generate automated responses based on a whole range of measurements that can be deduced from those high-resolution images (meaning the video quality has to be good enough to see how many millimeters across those water droplets are). Water droplets the wrong size? Adjust the refrigerator temperature. Mountain Dew running low? Roll the soft drink truck. Not enough teenagers buying Pepsi? Time for a new marketing campaign.
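The measurement-to-response loop described above could be sketched as a handful of rules. The thresholds, parameter names, and action strings are invented for illustration; a real system would presumably learn these responses rather than hand-code them.

```python
# Hypothetical sketch of the automated responses described above.
# All thresholds and action names are invented for illustration.

def decide_actions(mean_droplet_mm, stock, sales_by_demographic,
                   droplet_range=(0.5, 1.5), restock_level=10, sales_target=50):
    """stock: {drink: cans remaining}; sales_by_demographic:
    {(drink, demographic): units sold}. Returns a list of actions."""
    actions = []
    # Droplets the wrong size suggest the cabinet temperature is off.
    if not (droplet_range[0] <= mean_droplet_mm <= droplet_range[1]):
        actions.append("adjust refrigerator temperature")
    # Any drink running low triggers a restock run.
    for drink, cans in stock.items():
        if cans < restock_level:
            actions.append(f"roll the truck: restock {drink}")
    # Weak sales in a target demographic suggest a marketing push.
    for (drink, demographic), units in sales_by_demographic.items():
        if units < sales_target:
            actions.append(f"marketing campaign: {drink} for {demographic}")
    return actions
```

With oversized droplets, three cans of Mountain Dew left, and only twelve Pepsi sales to teenagers, this toy rule set would emit exactly the three responses in the paragraph above.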
One camera and smart analytics in place of many individual sensors: that’s the IoT of the future. And all I can say is that this is going to be amazing.
Visit our website for more on IoT including:
Nokia is committed to increasing quality of life through meaningful metrics. Check out our new Nokia Crowd Analytics website.
Share your thoughts on this topic by replying below – or join the Twitter discussion with @nokianetworks using #IoT #analytics