Amazon continues evolving its in-store shopping and payment experience using leading-edge machine learning (ML) algorithms.
At the recent re:MARS 2022 global artificial intelligence (AI) event hosted by Amazon, Dilip Kumar, VP, physical retail and technology, Amazon, discussed how the omnichannel giant leverages computer vision and ML algorithms in an ongoing effort to deliver easier and faster in-store shopping experiences for customers.
Following are highlights from Kumar’s presentation about how Amazon applies algorithms in its Just Walk Out frictionless shopping technology, Amazon One palm payment solution, Amazon Style physical apparel store, and Amazon Dash Cart smart shopping cart.
Just Walk Out
In the case of Just Walk Out technology, which allows shoppers to skip the checkout line in many Amazon stores, select Whole Foods Market stores, and several third-party retailer stores, Amazon deploys sensors, optics, and machine vision algorithms. Over time, the company has reduced the number of cameras required in Just Walk Out technology-enabled stores, making them more cost-effective, smaller, and capable of running deep networks locally.
Amazon’s Just Walk Out sensors and algorithms have evolved to detect a broad range of products and differences in shopping behavior in full-sized grocery stores. The company has also increased the diversity of environments its algorithms can account for as it deploys Just Walk Out technology to third-party retailers.
Amazon One
Initially introduced at two Seattle-area Amazon Go stores in September 2020, Amazon One lets customers use their unique palm signature to pay or present a loyalty card at a store. While developing Amazon One, the retailer needed data to train and test its AI algorithms across demographics, age groups, temperatures, and palm-specific variations such as calluses and wrinkles, so that the service could correctly determine whose palm was hovering over the device.
As Amazon started to build Amazon One, it found that few public datasets of palm and vein images were available to train the algorithms. So the company advanced existing technologies to generate large volumes of diverse, realistic synthetic palm and vein images to train the AI models and prepare the service for a wide variety of users.
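Amazon has not published its synthetic-image pipeline, but the idea of augmenting scarce base captures into a larger, more varied training set can be sketched with simple photometric jitter and texture noise. Everything below (the function name, parameter ranges, and image size) is a hypothetical illustration, not Amazon's actual method.

```python
import numpy as np

rng = np.random.default_rng(42)

def synthesize_palm(base, n_variants=4):
    """Generate synthetic variants of a base palm image (2-D float array
    in [0, 1]) by jittering contrast and brightness and adding texture
    noise, loosely mimicking differences in lighting, skin condition,
    calluses, and wrinkles. Illustrative sketch only."""
    variants = []
    for _ in range(n_variants):
        contrast = rng.uniform(0.8, 1.2)        # sensor/illumination gain
        brightness = rng.uniform(-0.1, 0.1)     # ambient-light offset
        noise = rng.normal(0.0, 0.02, base.shape)  # skin-texture noise
        img = np.clip(base * contrast + brightness + noise, 0.0, 1.0)
        variants.append(img)
    return variants

base = rng.random((64, 64))  # stand-in for a real palm capture
dataset = synthesize_palm(base)
```

A real pipeline would model palm and vein geometry explicitly; the point here is only that each base sample fans out into many plausible training examples.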
Amazon Style
At Amazon Style, Amazon’s physical apparel store, the company built new algorithms that use the information a customer provides, such as details entered into a “Style Survey” or items scanned while shopping on the store floor, to build a set of recommended items that balances similarity to the customer’s current choices with variety.
The system also generates complementary selections, such as a shirt to match a pair of jeans to create a recommended outfit. In addition, Amazon has built synthetic datasets to mimic variations of real-life shopping scenarios.
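Amazon has not disclosed how it trades similarity against variety, but one standard way to balance the two, shown here purely as an illustrative stand-in, is maximal marginal relevance (MMR) re-ranking over item embeddings. The query vector could represent, say, a customer's Style Survey responses; all names and parameters below are assumptions.

```python
import numpy as np

def mmr_rerank(query_vec, item_vecs, k=5, lambda_=0.7):
    """Pick k items, greedily maximizing (lambda_ * similarity to the
    query) minus ((1 - lambda_) * similarity to items already picked).
    Higher lambda_ favors relevance; lower favors diversity.
    An MMR sketch, not Amazon's actual recommender."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    selected, remaining = [], list(range(len(item_vecs)))
    while remaining and len(selected) < k:
        def score(i):
            sim = cos(query_vec, item_vecs[i])
            redundancy = max((cos(item_vecs[i], item_vecs[j])
                              for j in selected), default=0.0)
            return lambda_ * sim - (1 - lambda_) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

Complementary-item suggestions (a shirt to match jeans) would need a separate signal, such as co-purchase data, on top of a re-ranker like this.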
Amazon Dash Cart
When Amazon built the Amazon Dash Cart, a smart shopping cart that helps customers skip the checkout line in many of its U.S. Amazon Fresh stores, the company developed computer vision and sensor fusion algorithms that detect items while in motion, including accurately capturing weight and quantity. These algorithms also operate under strict latency budgets, because the cart keeps the customer’s receipt up to date in real time.
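One small piece of such sensor fusion, inferring quantity by checking a scale reading against the item the vision system identified, can be sketched as follows. This is a simplified, hypothetical illustration (the function, tolerance, and fallback behavior are assumptions); the real Dash Cart fuses many more signals.

```python
def infer_quantity(weight_delta_g, candidate_unit_weight_g, tolerance=0.15):
    """Estimate how many units of the vision-identified item were added,
    by dividing the measured weight change (grams) by the item's catalog
    unit weight. Returns None when the reading is ambiguous, signaling a
    fallback to other sensors. Illustrative sketch only."""
    if candidate_unit_weight_g <= 0:
        return None
    ratio = weight_delta_g / candidate_unit_weight_g
    quantity = round(ratio)
    # Reject readings that don't land near a whole number of units.
    if quantity < 1 or abs(ratio - quantity) > tolerance:
        return None
    return quantity
```

For example, a 905 g change against a 450 g catalog weight resolves to two units, while a 200 g change is rejected as inconsistent, which is the kind of cross-check that lets the cart confirm quantity while items are still in motion.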
“As I look back on the progress my team has made, I’m reminded of an Amazon saying that ‘it’s always day one,’ and it certainly is still Day One for us in physical retail and technology,” Kumar said in a corporate blog post. “It feels like we’re just getting started in tackling some of the complex challenges in the physical retail world, and I’m excited to see what the team does next to push the boundaries of AI forward.”