Google Design: What does “human-centered AI” mean to each of you?
Jess Holbrook: Right now, AI is the new flashy technology. We’re in the early days compared to the design eras of personal computing, the web, and mobile, but we’re still seeing the same trends we saw with previous waves of technology: You experiment, your product looks really cool, and then you stumble by building an experience that doesn’t address a real human need or aspiration. We’re reminding people of what we know works, which is to put people first and work from there. It’s an evergreen approach—if you start with people, then any exploration, product design, or research you do will have a fruitful path.
Rachel Been: Human-centered AI is also about managing the unpredictability of AI and ML. As designers, we need to be flexible and ready to react to new questions: What if the user gets an error? What if she wants more transparency into what the AI is doing? How do you onboard her, so she understands it? With more traditional design patterns, you have a linear progression of the user experience. With AI, we have a different set of considerations.
Jess: Right, because AI allows you to escape the scale of cause-and-effect relationships that humans are used to. But taking this technology that operates beyond the human scale, and explaining it so people can actually understand it—that’s a fundamental human-centered AI design challenge.
Öznur Özkurt: How we explain the technology is an interesting question. When we first started working with AI and ML, we thought we’d need to show users the inner workings of the algorithms in order to get them to use the technology: where data comes from, and what calculations come out of it. But we found that people don’t necessarily need to understand the math behind the algorithm to trust it; the algorithm can show the user what it’s thinking by outlining what it sees. In our work on eye disease, for example, machine learning models in digital imaging can pick out signs of a condition, like lesions or irregular fluids, and then flag for the clinician the condition that might be developing. You can make sense of the result without needing to fully understand the calculation. We want to create a narrative that’s less like a user manual, more like decision-making support for the user.
Rachel: When we designed patterns for ML Kit, there were moments in the demo experience when the Object Detection API—which uses visual search to identify an object—would immediately recognize an object, with no perceptible latency or delay. It was actually a terrible user experience because it worked too quickly for the user to comprehend. It brings up this question of whether we should cater to people’s idea of how computation works, and inhibit speed in order to give the user a moment to understand the action occurring. It’s important not to heroize technology, and instead see it as a tool that can lead to a better experience for the user.
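The pattern Rachel describes—padding a too-fast result so the user has a beat to register it—can be sketched in a few lines. This is a minimal illustration of the idea only, not ML Kit’s actual API; the helper name and the 300 ms floor are invented for the example.

```python
import time

def with_minimum_latency(compute_result, min_seconds=0.3):
    """Run compute_result(), then pad with a short pause so the total
    time is at least min_seconds, giving the user a moment to see the
    action occur. (Hypothetical helper, not part of any real SDK.)"""
    start = time.monotonic()
    result = compute_result()
    elapsed = time.monotonic() - start
    if elapsed < min_seconds:
        # The model answered faster than a person can comprehend;
        # hold the result briefly before surfacing it in the UI.
        time.sleep(min_seconds - elapsed)
    return result

# Usage: wrap an instant detection so it never appears "too fast."
label = with_minimum_latency(lambda: "coffee mug", min_seconds=0.3)
```

The same effect is often achieved in production UIs with a brief progress animation rather than a literal sleep, but the design intent is identical: pace the machine to the human.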
Google Design: The idea of seeing AI as a tool gets at another shift for designers, which is the introduction of AI as a design material. How do you interpret that?
Rachel: I’ve been thinking about this: Is artificial intelligence a material or is it a tool? It’s malleable and can mold to the user like a material, because it can “remember” the user’s inputs. But we’re also using AI as a tool to shape front-end user experiences.
Öznur: Our research team focuses on creating algorithms that are specifically tailored to predicting health conditions, so that’s an example of AI as a material. Whereas if a group like Google Photos adds intelligence to its product’s search feature to create a new sorting function for your images, then the AI becomes more of a tool for people to use.
Jess: I typically think of materials as having useful boundaries—you know when it breaks. You know label makers, where you punch in letters on plastic and little white letters punch out? It’s a beautiful example of using the limitations of a medium as the design, because you essentially use a failure case of applying too much pressure to another material to create the interface. On the AI side, sometimes we kick around that idea: How can you show the limitations of the material to help people understand its capabilities better?
Rachel: We’ve already been able to play with limitations in machine learning for interesting purposes, specifically in art—experiments like creating music or an entire science fiction movie that’s auto-generated by ML. The limitations of the technology create an Uncanny Valley effect that’s a little awkward, but that’s the part we appreciate as art—we perceive it as interesting. When a similar off-ness occurs in a utilitarian context, like I can imagine it might for Öznur in healthcare, or when we’re trying to be assistive with something sensitive, those limitations feel very unfortunate. It depends on the use case: The limitations can be fun and artistic, but when it comes to something like healthcare, that’s not the case.