
ALGOLIA

A synonym recommendations system driven by machine learning

SUMMARY

Algolia provides search as a service, and machine learning is currently one of the key competitive advantages that can make a big difference between key players in the search industry. To leverage this opportunity, I was tasked with designing the first feature to use our clients’ users’ signals, giving clients personalized synonym recommendations based on the vocabulary their users type when searching. The solution dramatically increased the conversion rate to the premium plan and set the path for more AI features to come.

DELIVERABLES

  • Ideation Workshop
  • User + AI Workshop
  • UX Strategy Blueprint
  • Interactive Prototypes
  • Usability Test Reports
  • Production-Ready Screens

ROLE / TIMELINE

  • 6 months
  • Sole Designer in a team with 1 UXR, 1 PM, 3 Engineers & 2 Data Scientists

THE PROBLEM SPACE

The lack of AI features is a deal breaker in the search market and an even bigger lost opportunity for the user experience.

For Algolia, the lack of AI/machine learning capabilities was the product limitation behind the second-biggest average lost deal size versus the competition. As a few competitors were already implementing AI solutions for their customers, it was important for Algolia to keep up and provide its own set of solutions. From the user experience perspective, machine learning was also a great source of opportunity: it could help our users set up their search experience more efficiently by providing a much more personalized experience.

It’s hard and time-consuming for e-merchandisers to predict the vocabulary their users will use when searching for something, as it’s always changing and evolving.

E-merchandisers’ users’ needs, how they express them, and the way they explore content are constantly evolving. Chances are the vocabulary e-merchandisers use when setting up their search is not the same as the vocabulary their users type when searching. To bridge that gap, Algolia already provided a feature that lets them manually add synonyms (making different search terms lead to the same results). But to keep pace with how their users search, they first need a way to analyze user behavior, and then they have to repeat that task over time. This can be painful.

How might we leverage a machine learning model to help e-merchandisers bridge the gap between the vocabulary they use when setting up their search experience and the vocabulary used by the people searching for something?

THE APPROACH

A Feedback Loop Ideation Workshop

We knew the main difficulty for e-merchandisers in filling this vocabulary gap is that it changes so much over time and relies heavily on their users’ behavior. In other words, it needs to be a never-ending conversation between the e-merchandiser, their users’ behavior, and what the AI keeps learning. To illustrate this conversation and what it means for the user experience, I used the feedback loop approach. The first step was to run an ideation workshop around it, getting key collaborators (user researcher, product manager, product marketing manager, data scientists, and developers) to think in a blue-sky scenario.

As a result, around 13 concepts were roughly sketched, then discussed and shared across the company. They proved to be a great source of material moving forward and beyond: feeding the story-mapping of the MVP we wanted to build, defining principles around our AI-powered features, and illustrating and developing a long-term vision.

Slide shared after the workshop across the company; all concepts were linked to sketches (kept confidential).

Defining the right interaction level

As data scientists and engineers were building the machine learning model at the same time as I was working toward the solution, the next step was to translate technical experimentation into use cases and start focusing on the action part of the feedback loop. I facilitated conversations about the right interaction level we wanted to provide to our users, ranging from light supervision to none at all (or, in more technical terms, from augmentation to automation).

Building the model and the user experience at the same time was a great opportunity to keep them in constant conversation: technical experimentation can draw inspiration from use cases, and new use cases can be considered alongside new technical experimentation. So once again, running a workshop involving data scientists and engineers was the way to go. Beforehand, interviews were conducted and surveys were sent out with a user researcher to make sure our discussions were not just based on assumptions.

As a result, we were ready to craft the final story-mapping of our MVP, new technical experimentation was considered, and I was ready to explore solutions with the right interaction level and a strong sense of the model’s limitations and capabilities.

Slide shared after the workshop and referred to during the design process to evaluate the right level of interaction and why. The slide and exercise were repeated for every model capability and key user interaction.

Principles to guide us along the way

As the first machine-learning feature designed by Algolia, it was absolutely key to define and disseminate principles that would ensure alignment in the design of both the feature and the model. Though they were built to provide alignment on this feature specifically, they were also an important step toward figuring out what Algolia’s values would grow to be at the intersection of AI and our users.

We strive to be a whitebox
As much information as possible should be provided to help users make good, informed decisions. Users should always be able to understand why and how recommendations are generated.
We’re an assistant, not a fully-automated solution
We believe our users’ business expertise should be enhanced by and in conversation with our solution, not replaced.
Design for speed and motivation
As the model is still growing and still creates noise, synonyms should be easy, quick, and rewarding to review.
Strive for quality over quantity
The model and the interface should always strive to remove noise and focus attention on the best synonyms.

THE SOLUTION

A check-them-all incentive and a card system for suggested synonyms to handle all the information and interactions needed

After many rounds of user testing and iteration to display all the information and interactions needed for each synonym, I went with a card system. Each card provides a synonym suggestion, the exact set of words rewritten, the key metric to consider, the confidence score, the option to change the type of synonym, and four main actions to review the synonym: accept, deny, edit, and compare. All this information allows users to make decisions and take action easily and quickly, directly where they are, without leaving the screen.

Not only do cards help provide a clear information architecture to achieve this, further leveraged with dynamic information and hover states, but they also create an incentive to “check them all”: once a card is reviewed, it moves out of the suggested synonyms tab, so users can clear the list and get the work done until the next batch of suggestions.

Because the impact of synonyms often needs to be monitored, a history of past actions is kept through top-tab navigation, so users can go back to their decisions and adjust them if needed.
To always surface the most interesting synonym recommendations on top, users can sort them by key metrics (number of rewrites, volume of search) and by how confident the model is about each recommendation.
To avoid information overload and make sure the right information is displayed only when users need it, I leveraged different states and made some information dynamic: main actions only appear on hover, when the user is focusing on a specific synonym (replacing the generation date), and the key metric shown is based on the sorting selection.

A seamless integration of editing and education for all the different types of synonyms

The challenge of adding an editing feature for synonym types was threefold: it’s a lot of information to provide in an already busy area; it’s a mandatory choice for users that could hurt the speed of the overall review process; and the complexity of what the different types of synonyms actually are means education is needed.

To solve this, I translated each type of synonym into an icon, placed it directly between the two words of each suggested synonym, and turned it into a dropdown. Furthermore, I added a side panel that appears on hover for each option, providing education directly in the interface. This way, users can easily see what type of synonym is currently selected, change it on the go, and learn more about each type at the same time. The type of synonym recommended by the model is selected by default.

Starting to close the feedback loop by sharing metrics and impact before and after the action

One of the key pieces of information we can provide to help users make better decisions is metrics about the synonyms we are suggesting. They are especially useful at two moments: when the user is reviewing a synonym, and when a user is trying to understand the impact of a reviewed synonym.

I added a side panel that opens from the “compare” button for each synonym, so users can get key information about the two words of the suggested synonym, alongside popular rewrites around them for even more context. The side panel’s navigation lets users move between synonyms without closing the panel.

And finally, to start closing the feedback loop, feedback cards in the “accepted synonyms” tab flag when a synonym has negatively affected metrics, suggesting a new recommendation the user can quickly act on.
