Machine learning works best when users don’t have to think twice
Machine learning is the science of helping computers discover patterns and relationships in data. Two of the most common ways products use machine learning (ML) today are predictive recommendations and personalization. If you’ve checked out a recommended video on YouTube, then you’ve already experienced these features for yourself. And if you’re a UXer, perhaps you’re already incorporating ML into your own designs.
At their best, ML-driven recommendations and personalized features save time and effort by proactively delivering the content users want without forcing them to navigate an interface or search. However, with the wrong execution, providing even the most accurate suggestion or the most relevant list of recommended items could actually require more time and effort from the user.
To understand why this happens, and how to avoid this pitfall in your own designs, start by embracing a phenomenon known as habituation.
To habituate is human
Habituation is what happens when a behavior becomes so ingrained that you can perform it without having to think about it. It’s knowing the exact location of the dishes in your kitchen or which pedals to push in a car. It’s the ability to read a paragraph without having to first sound out every letter, combine letters into syllables, syllables into words, and words into phrases.
Habituation is the result of a neural process called long-term potentiation (LTP). When the same neural pathways in your brain are repeatedly activated, it triggers physical and chemical changes to neurons to make the transmission of signals along that pathway more efficient.
It feels good when we can move through environments and accomplish tasks without thinking. This is because it was (and still is) adaptive for humans to automate as many actions and decisions as possible, so we always have plenty of cognitive resources at the ready to address any spontaneous new problems that arise. Thousands of years ago that spontaneous new problem was figuring out how to keep your tribe safe from wild animals; today it’s dealing with a tricky budgeting issue at work or navigating a new route home because of construction. (If you're interested in digging deeper, there's an entire field of study dedicated to this phenomenon called behavioral economics, pioneered largely by psychologist Daniel Kahneman.)
Habituation also helps us get into a flow state (a phenomenon first described by psychologist Mihaly Csikszentmihalyi in 1975), in which we’re fully immersed in what we’re doing—to the point of losing track of time—rather than how we’re doing it. It’s in flow state that we often do our best work.
Interface habituation + ML
As UXers, habituation is something we aspire to achieve in our designs, and something that the best designers seem to grasp intuitively. Great UIs facilitate habituation by providing consistent, simple ways for the user to navigate so that they can very quickly learn to perform UI actions without thinking.
For example, the original iPhone’s simple press → swipe → tap revolutionized the way users navigated UI on their smartphones by removing the complex, redundant, hierarchical menus that made habituation a challenge on some phones. Similarly, gamers can effortlessly navigate UI across multiple generations of the PlayStation and Xbox consoles thanks to the consistent use of one button for “OK / Confirm” and another button for “Cancel / Back.” Moreover, the nearly identical physical position of these buttons on each system’s controller makes it easy for gamers to habituate to playing on either system.
To see how ML features could trip up habituation, let’s contrast a “traditional” interface with a “smart” one driven by machine learning. Imagine a mobile interface that contains a set of 20 items, labeled A–T, and arranged in a grid that scrolls vertically. If the user needs item J, then the first time they use the interface they’ll examine all the items, scroll, then examine some more until they finally find and tap item J. This is four steps in total: examine, scroll, examine, tap.
After a few uses, though, the user learns the location of item J, so the “examine” steps go away. This reduces the navigation to two steps: scroll, tap. Once fully habituated, the user becomes faster at physically performing the scroll and tap gestures and can do so without thinking, which further reduces time and effort. Thanks, habituation!
Now imagine that you want to use machine learning to personalize the UI for the user and make navigation easier. In this new interface, an algorithm first predicts which items the user is likely to want at a given moment and rearranges the interface accordingly, putting the strongest predictions at the top.
In this “smart” interface, even though the scrolling step goes away when ML has made the right prediction, the examining step never disappears. This is because the UI is essentially new to the user each time. Even if the algorithm gets smarter over time, such that the desired items frequently appear at the top, the user must still examine the UI to verify that the desired item is indeed there—instead of tapping automatically.
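The contrast above can be sketched as a toy step-count model. This is purely illustrative: the function names and step labels are assumptions for the sake of the sketch, not anything from a real product.

```python
def static_ui_steps(habituated: bool) -> list[str]:
    """Navigation steps in a fixed-layout grid.

    Before habituation the user must examine, scroll, examine, tap;
    once the item's location is learned, only scroll and tap remain.
    """
    if habituated:
        return ["scroll", "tap"]
    return ["examine", "scroll", "examine", "tap"]


def smart_ui_steps(prediction_correct: bool) -> list[str]:
    """Navigation steps in an ML-reordered grid.

    The "examine" step never disappears, because the layout is
    effectively new on every visit and must be verified.
    """
    if prediction_correct:
        # Item surfaced at the top: no scrolling, but still verified.
        return ["examine", "tap"]
    # Bad prediction: the user falls back to searching the grid.
    return ["examine", "scroll", "examine", "tap"]


print(static_ui_steps(habituated=True))          # ['scroll', 'tap']
print(smart_ui_steps(prediction_correct=True))   # ['examine', 'tap']
print(smart_ui_steps(prediction_correct=False))  # ['examine', 'scroll', 'examine', 'tap']
```

Note that even when the step counts tie at two, the static UI’s steps can become automatic through habituation, while the smart UI’s “examine” step always demands conscious attention.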
Evaluating new information and performing a visual search are inherently effortful mental operations that can’t be automated. For habituation and automation to occur, it’s critical that the exact same pathways in the brain are activated again and again. If the UI looks different every time a user sees it, then the automation process is blocked.
The pushback I often hear to prioritizing habituation at the expense of ML is, “Yeah, but when the machine learning algorithm works perfectly it’s amazing because the thing you want is right there!” Unfortunately, machine learning algorithms will never reach perfect accuracy, because they’re prediction tools by nature. Therefore, perfect prediction shouldn’t be used as the baseline for evaluating experiences. Even if the algorithm is highly accurate, the user will still have to evaluate the UI to check the predictions. Having “evaluate” as a step in the navigation process will always prevent the user from fully habituating to the UI, and removing the magic of habituation will never make for a truly “amazing” experience.
Using ML wizardry without losing the magic of habituation
It’s easy to feel like new ML technologies cause us to rethink everything about UX design, but that’s not quite true. The emergence of ML doesn’t change the fact that the most usable, delightful UIs are those that embody principles of good design—like habituation—that many designers and researchers (Don Norman, Jakob Nielsen, Steve Krug, and Jeff Johnson to name a few) have been writing about for years. To help you get started, here are four principles to consider when introducing ML features into a UI:
1. Count decisions as navigation steps
If your ML designs aim to remove navigation steps for users, but then require them to stop and evaluate all of the ML-generated suggestions, you haven’t really saved the user any steps (or time). Evaluating recommendations or visually searching the interface for content counts as a navigation step, just like a tap or click.
2. A predictable UI is necessary when the stakes are high
If the user is coming to your product to perform important, time-sensitive tasks, like quickly updating a spreadsheet before a client presentation, don’t include anything in your UI that puts habituation at risk. No ML-based suggestion will be “helpful” enough to offset breaking your user’s flow state and muscle memory. But if you’re confident that the user has a more open-ended goal like exploration, you have more leeway to put dynamic, ML-based features at the forefront of your UI.
Let’s look at an example. Google Play Music strikes a good balance between habituation-friendly UI and algorithmically generated recommendations. A music app UI could present users with an alphabetical library of hundreds of thousands of options (arduous, yet habituate-able), but because the user’s goal is often to browse until they find something they feel like listening to, Google Play Music dedicates the vast majority of its UI to surfacing music recommendations that change based on your listening habits and factors like the time of day. There’s also a navigation sidebar that never changes, so the user can still habituate to performing basic tasks like finding a saved playlist.
3. Be predictably unpredictable
If your ML algorithm is going to make recommendations or try to personalize the interface for users, consider dedicating a specific place in the UI for this to happen, rather than building the entire UI around it.
For example, Google Drive created a feature called Quick Access, which uses machine learning to surface a few documents you're likely to need at a given moment. Rather than reordering all your content based on ML predictions, the design team created a constrained, dedicated space for Quick Access at the top of the screen. The rest of the UI remains unchanged, and you can turn the feature off if you prefer to search or navigate files without ML assistance.
4. Make failure your baseline
ML algorithms will make bad predictions. Try to imagine what a user’s process for completing the action without ML assistance would be, as well as the user’s process to correct a potential ML failure. If it’s more work for the user to correct a failure than it is for them to complete the process without having the assistance of ML in the first place, then machine learning is not actually creating a better experience.
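One way to make this baseline concrete is as an expected-effort comparison. This is a rough sketch; the accuracy figure and effort costs below are invented placeholders to show the arithmetic, not real measurements.

```python
def expected_ml_effort(accuracy: float,
                       accept_cost: float,
                       correction_cost: float) -> float:
    """Average user effort per task with ML assistance.

    With probability `accuracy` the prediction is right and the user
    only pays the cost of evaluating and accepting it; otherwise they
    pay the cost of undoing or working around the bad prediction.
    """
    return accuracy * accept_cost + (1 - accuracy) * correction_cost


# Placeholder effort units: the cost of just doing the task manually.
manual_cost = 3.0

# If correcting a failure is expensive, even a fairly accurate model
# makes the *average* experience worse than the manual path:
# 0.8 * 1.0 + 0.2 * 12.0 = 3.2 > 3.0
assisted = expected_ml_effort(accuracy=0.8, accept_cost=1.0,
                              correction_cost=12.0)
print(assisted > manual_cost)  # True
```

The design lever here is `correction_cost`: a suggestion that is trivial to ignore (like an unobtrusive chip) keeps it low, while one that mutates the user’s content forces them to pay it in full every time the model misses.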
Gmail’s Smart Reply, for example, uses machine learning to suggest short replies to your email messages. The UI makes these suggestions unobtrusive and easy to ignore if you prefer to write your own. Imagine an alternative design in which the reply is instead inserted into the message text field, forcing you to erase or edit if it's not helpful. This would be far more work than manually writing a reply without ML assistance, and it would be impossible for you to habituate to the reply-writing process.
Keep it simple
People will always enjoy experiences that make something easier. Machine learning can delight users by predicting exactly what's needed at a given moment, be it a snappy reply to a colleague’s email or the seemingly serendipitous discovery of a favorite new song, but it can also disrupt habituation (arguably a delightful experience unto itself) by introducing randomness and distraction. As designers, our job is to understand the difference, knowing how to harness the “magic” of the human brain and when to task a machine with the more difficult work—ensuring that the people who use our products can effortlessly get to the things they care about.
Kristie J. Fisher, PhD, is a UX researcher at Google who’s worked on Hardware and G Suite, including ML productivity tools like the @meet bot. She’s currently working on the Ads Planning team and is based in Venice Beach, CA.
For more on how to take a human-centered approach to designing with ML and AI, visit our full collection.