Every day, millions of customers search for books in different formats (audiobooks, ebooks, and physical books) across Amazon and Audible. Traditional keyword autocomplete suggestions, while useful, usually require several steps before customers find their desired content. Audible took on the challenge of making book discovery more intuitive and personalized while reducing the number of steps to purchase.
We developed an instant visual autocomplete system that improves the search experience across Amazon and Audible. When a user starts typing a query, our solution provides visual previews with book covers, enabling direct navigation to relevant landing pages from the search results page. It also provides real-time personalized format recommendations and surfaces several searchable entities, such as book pages, author pages, and series pages.
Our system had to understand the user’s intent from just a few keystrokes and determine the most relevant books to display, all at low latency across millions of queries. Using historical search data, we match keystrokes to products, converting partial input into meaningful search suggestions. To ensure quality, we implemented confidence-based filtering mechanisms, which are particularly important for distinguishing between general queries such as “mystery” and specific title searches. To reflect customers’ most recent interests, the system applies time-decay functions to long-range historical user-interaction data.
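The time-decay idea can be sketched in a few lines. This is a minimal illustration, not the production logic: the exponential half-life form, the 30-day constant, and the `decayed_score` helper are all assumptions chosen for clarity.

```python
import time

# Illustrative half-life: an interaction's weight halves every 30 days.
# The production decay schedule is not specified in the post.
HALF_LIFE_DAYS = 30.0

def decayed_score(interactions, now=None):
    """Sum interaction weights, halving each one every HALF_LIFE_DAYS.

    `interactions` is a list of (timestamp_seconds, weight) pairs drawn
    from a user's historical engagement data.
    """
    now = now if now is not None else time.time()
    total = 0.0
    for ts, weight in interactions:
        age_days = (now - ts) / 86_400
        total += weight * 0.5 ** (age_days / HALF_LIFE_DAYS)
    return total
```

Under this weighting, a click from today counts at full weight while one from a month ago counts half as much, so recent interests dominate the suggestion scores.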
To meet the unique requirements of each use case, we developed two different technical approaches. At Audible, we implemented a deep pairwise learning-to-rank (DeepPLTR) model. The DeepPLTR model compares pairs of books and learns to assign a higher score to the one that better matches the customer’s query.
The DeepPLTR model’s architecture consists of three specialized towers. The left tower factors in contextual features and recent search patterns using a long short-term memory (LSTM) model, which processes data sequentially and considers its previous decisions when it encounters a new item in the sequence. The middle tower handles keyword and item-engagement history. The right tower factors in customers’ taste preferences and product descriptions to enable personalization. The model learns from paired examples, but at runtime it relies on the books’ absolute scores to assemble a ranked list.
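The pairwise training objective can be illustrated with a stripped-down sketch. Here each “tower” is just a linear layer over one feature group (the real model uses deep towers, including the LSTM), and the feature dimensions, learning rate, and helper names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# One linear "tower" per feature group; dimensions are illustrative.
DIMS = {"context": 8, "engagement": 8, "taste": 8}
weights = {name: rng.normal(scale=0.1, size=d) for name, d in DIMS.items()}

def score(features):
    """Absolute relevance score: the sum of the three tower outputs."""
    return sum(weights[name] @ features[name] for name in DIMS)

def pairwise_loss(pos, neg):
    """Logistic pairwise loss: small when score(pos) > score(neg)."""
    return np.log1p(np.exp(-(score(pos) - score(neg))))

def sgd_step(pos, neg, lr=0.1):
    """One gradient step pushing the better-matching book's score up."""
    margin = score(pos) - score(neg)
    g = -1.0 / (1.0 + np.exp(margin))  # d(loss)/d(margin)
    for name in DIMS:
        weights[name] -= lr * g * (pos[name] - neg[name])
```

Training consumes pairs, but because `score` returns an absolute number per book, serving-time ranking only needs to score each candidate once and sort, which matches how the model is used at runtime.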
For Amazon, we implemented a two-step modeling approach: a probabilistic information-retrieval model determines the book title that best matches each keystroke, and a second model personalizes the book format (audiobook, ebook, or physical book). This two-stage approach maintains low latency while still enabling personalization.
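The first stage can be sketched as estimating P(title | prefix) from historical click counts. This is a toy illustration only: the click log, the 0.5 confidence bar, and the `best_title` helper are assumptions, not the production model.

```python
from collections import Counter, defaultdict

# Illustrative historical log of (typed prefix -> clicked title) pairs.
click_log = [
    ("dungeon craw", "Dungeon Crawler Carl"),
    ("dungeon craw", "Dungeon Crawler Carl"),
    ("dungeon craw", "The Dungeon Anarchist's Cookbook"),
]

counts = defaultdict(Counter)
for prefix, title in click_log:
    counts[prefix][title] += 1

def best_title(prefix, min_prob=0.5):
    """Return the most probable title for a prefix, or None if no title
    clears the confidence bar (so no visual suggestion is shown)."""
    c = counts.get(prefix)
    if not c:
        return None
    title, n = c.most_common(1)[0]
    return title if n / sum(c.values()) >= min_prob else None
```

Only when this stage produces a confident title match would the second-stage model choose which format (audiobook, ebook, or physical book) to feature for that customer.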
In practice, a customer who types “Dungeon Craw” in the search box now sees a visual recommendation for the book Dungeon Crawler Carl, complete with book cover, reducing friction by bypassing the search results page and sending the customer directly to the product detail page. On Audible, the system also displays autocomplete results and enriches the discovery experience with relevant links. These include links to the author’s complete works (Matt Dinniman’s author page) and, for titles belonging to a series, links to the full collection (such as the Dungeon Crawler Carl series).
On Amazon, when the customer clicks on the title, the model recommends the right book format (audiobook, ebook, or physical book) and directs the customer to the corresponding product detail page.
In both cases, after the customer has entered a certain number of keystrokes, the system uses a model to detect customer intent (e.g., book-title intent on Amazon or author intent on Audible) and decide which visual widget to display.
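The gating logic described above can be sketched as follows. The keystroke minimum, the confidence threshold, the widget names, and the toy `classify_intent` stand-in are all hypothetical; the production intent model is a learned classifier.

```python
# Illustrative minimum keystrokes before the intent model is consulted.
MIN_KEYSTROKES = 4

def classify_intent(query):
    """Toy stand-in for the intent model: returns (intent, confidence)."""
    if query.lower().startswith("dungeon"):
        return "book_title", 0.9
    return "unknown", 0.2

def widget_for(query, threshold=0.7):
    """Choose which visual widget to render for a query, if any."""
    if len(query) < MIN_KEYSTROKES:
        return None  # too few keystrokes to trust any intent signal
    intent, conf = classify_intent(query)
    if conf < threshold:
        return None  # fall back to plain text autocomplete
    return {"book_title": "title_card", "author": "author_card"}.get(intent)
```

The key design point is the fallback: when intent confidence is low, the customer still gets ordinary autocomplete rather than a possibly wrong visual suggestion.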
Audible and Amazon Books’ visual autocomplete gives customers more relevant results faster than traditional autocomplete, and its direct navigation reduces the number of steps to find and access desired books, all while handling millions of queries at low latency.
This technology is not just about making book discovery easier; it lays the foundation for future improvements in search personalization and visual discovery across Amazon’s ecosystem.
Acknowledgments: Jiun Kim, Sumit Khetan, Armen Stepanyan, Jack Xuan, Nathan Brothers, Eddie Chen, Vincent Lee, Soumy Ladha, Justine Luo, Yur Fedorov, Ronald Denaux, Aishwarya Vasanth, Azad Bajaj, Mary Heer, Adam Lowe, Jenny Wang, Cameron Cramer, Emmanuel Ankrah, Lydia Diaz, Suzette Islam, Fei Gu, Phil Weaver, Huan Xue, Kimmy Dai, Evangine Yang, Chao Zhu, Anvy Wu, Jiushan Yang