Trusting Driverless Cars

How Waymo designed an experience that reassures riders every step of the way

For the past four months, a fleet of self-driving white minivans has roamed the streets of Chandler, Arizona, picking up hundreds of paying riders. Like standard-issue taxis, these cars pick up passengers and ferry them to a destination of their choosing. But unlike taxis or other rideshare services, a person isn’t doing the actual driving. This is Waymo One, and it’s asking us to do something we haven’t done before: trust a driverless vehicle to deliver us safely from Point A to Point B.

In smaller ways, we’ve already begun to shift responsibility to the AI-assisted devices in our lives. We rely on Google Maps to navigate through our days, ask our virtual assistants if it’s going to rain later, and have probably seen, if not driven, one of the many new cars equipped with partial self-driving features. But relinquishing full physical control to a non-sentient driver requires a new, heightened level of trust.

“When I think back to my first ride, so much was running through my head,” says Waymo’s Ryan Powell. “What will it be like? What can the car really ‘see’? How does it ‘think’?” Powell leads UX research and design at the self-driving technology company (formerly part of Google, Waymo is now an Alphabet subsidiary), making it his job to think about how exactly these driverless cars can engender trust among riders. The car’s lidar (light detection and ranging), cameras, and sensors capture a mind-boggling amount of data on surrounding roads and traffic conditions, but the passenger doesn’t—and probably doesn’t want to—see everything the car sees. Instead, a series of interfaces translates that data into tidy visuals and updates for the rider that better match how we process the world, reassuring passengers that the car is making safe, sound decisions. Taken together, it’s a user journey that Powell says had to be built from scratch. “There’s no playbook for self-driving cars,” he says.

We sat down with Powell ahead of his talk at SXSW to learn more about that from-scratch Waymo playbook, and all the nitty-gritty work that goes into designing technology that can earn—and keep—our trust on the road.

Redesigning the ride

Waymo’s app uses carefully marked blue shading to indicate where cars can legally pull over for pickups.

We’ve had to rethink all the micro-interactions that happen between riders and human drivers, starting with hailing the ride itself. With pickups, there’s a twist here that’s unique to us: Unlike humans, our cars are programmed to adhere to traffic laws, so they won’t just stop anywhere, and you can’t just signal to them. We tried a bunch of concepts, and we came to appreciate more and more that map-based UIs get really complex, because there’s already so much other information to display. We leverage Google Maps in our app, and some of our early ideas were misunderstood as signifying traffic instead of where the car can pull up. In the app, we use blue-shaded regions to indicate where our cars can safely pull over to get you—it’s the solution that early riders understood most clearly.
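As a rough illustration of that idea, here is one way those pickup zones might be modeled: legal pull-over segments become shaded overlays, rather than colored lines that riders could mistake for traffic. This is a minimal TypeScript sketch under those assumptions; every name and value is hypothetical, not Waymo's app code.

```typescript
// Hypothetical sketch: turning "legal pull-over" curb segments into
// blue-shaded map overlays, rather than line styling that riders could
// mistake for traffic data. All names and values are illustrative,
// not Waymo's app code.

interface LatLng {
  lat: number;
  lng: number;
}

interface CurbSegment {
  path: LatLng[];         // polyline along the curb
  legalPullover: boolean; // cleared against local traffic rules
}

interface MapOverlay {
  polygon: LatLng[];
  fillColor: string;
  fillOpacity: number;
}

// Buffer a curb polyline into a thin polygon so it reads as a shaded
// region, not a colored route line. A real implementation would use a
// proper geodesic buffer; this offsets naively for illustration.
function toShadedZone(segment: CurbSegment, widthDeg = 0.00005): MapOverlay {
  const offset = segment.path.map(p => ({ lat: p.lat + widthDeg, lng: p.lng }));
  return {
    polygon: [...segment.path, ...offset.reverse()],
    fillColor: '#4285F4', // a calm blue, distinct from traffic reds and greens
    fillOpacity: 0.35,
  };
}

// Only segments where a pull-over is legal become pickup zones.
function pickupZones(segments: CurbSegment[]): MapOverlay[] {
  return segments.filter(s => s.legalPullover).map(s => toShadedZone(s));
}
```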

That said, this all works a little differently if you’re vision-impaired: We’re experimenting with a feature that lets you honk the horn from the app, as a wayfinding mechanism for locating the car. More advanced audible messaging is also available: When you set up the app for the first time, you have the option of turning on certain features, one of which adds more voice feedback. So when you get into the car, for example, it can let you know where the ‘start ride’ button is, or when you’re crossing through a certain intersection, so that you can get a sense of where you are.
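A minimal sketch of how those opt-in features might be represented, assuming a simple settings object checked when the rider boards. Field names and prompt copy are illustrative, not Waymo's actual settings or strings.

```typescript
// Hypothetical sketch of the opt-in features described above, chosen
// during first-time app setup. Field names and prompt copy are
// illustrative, not Waymo's actual settings or strings.

interface AccessibilitySettings {
  extraVoiceFeedback: boolean; // announce controls and cross streets
  hornFromApp: boolean;        // let the rider honk to locate the car
}

const defaults: AccessibilitySettings = {
  extraVoiceFeedback: false,
  hornFromApp: false,
};

// Voice prompts fire only for riders who opted in at setup.
function onEnterCar(
  settings: AccessibilitySettings,
  announce: (message: string) => void
): void {
  if (settings.extraVoiceFeedback) {
    // Example copy only; the real prompt would describe the button's location.
    announce("The 'start ride' button is lit up in front of you.");
  }
}
```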

In lieu of human drivers, screens confirm that you’ve found the right car.

When you hop into one of our cars, one of the first things you’ll notice is our passenger screen. It’s there to reassure you that you’re in the right car, and that the car knows where you’re going. The second thing you might notice is the sound. We play a relaxing, ambient track as you’re settling in. Waymo’s soundscape is composed in the musical key of E, which, according to our sound designer, represents the expression of “joy, magnificence, and the highest brilliancy.”

An ambient soundtrack, in the key of E, plays until you push the ‘start ride’ button in the car or on your app.

Showing less to communicate more

When you’re in a car driven by a human, there’s a lot of communication that happens between you and the driver, whether you’re asking, “What route are you taking?” or just observing the driver watching a person in a crosswalk.

Waymo cars capture a mind-boggling amount of data on surrounding objects and traffic patterns. Passengers don’t see all of that, though—instead they see tidy visuals and clearly marked journey updates (simplified data graphic, above).

We had to find a way to show you that our cars are also aware. Psychologically, riders elevate other people above everything else—they’re the most important thing for the car to pay attention to. Rendering them realistically matches that mentality. On the passenger screen you’ll see the road and crosswalks, with the car’s trajectory shown as a green path. We took a subdued approach to color; when we tested a more saturated palette, the screens felt like they were in your face. Night mode is available to make the screens dimmer, but as a rule we didn’t want people accosted with sound and a bright screen.
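To make that design choice concrete, here is one hypothetical way a subdued, dimmable theme could be expressed in code. The hex values and field names are stand-ins, not Waymo's palette.

```typescript
// One hypothetical way to express the subdued, dimmable theme described
// above. Hex values and field names are stand-ins, not Waymo's palette.

interface ScreenTheme {
  roadColor: string;       // muted, so it doesn't feel "in your face"
  trajectoryColor: string; // the green path ahead of the car
  brightness: number;      // 0..1, dimmed at night
}

const dayTheme: ScreenTheme = {
  roadColor: '#3A3F47',
  trajectoryColor: '#34A853',
  brightness: 1.0,
};

// Night mode keeps the same palette, just dimmer: no bright screen
// hitting riders in a dark car.
function nightMode(theme: ScreenTheme): ScreenTheme {
  return { ...theme, brightness: 0.4 };
}
```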

Nearby vehicles appear as simple shapes, but cyclists and pedestrians have more definition, with blue and white underlines respectively—this shows that our vehicle can distinguish between the two. We use the actual laser points from the car’s lidar to fully render them instead of using symbols, because we discovered that our riders appreciate seeing the arms and legs of pedestrians and cyclists moving in real time. If a pedestrian is in a wheelchair or on crutches, for example, that information shows up on screen too.

Sometimes it works the other way: In earlier versions of our UI, we tried to do something similar by using laser points to show nearby traffic. But because laser points only reflect off the vehicle surfaces that face the car, they looked half-rendered—almost like an error. We were concerned that this might cause riders to doubt just how well our technology understood its surroundings, so now we use flat blue rectangular shapes to represent nearby vehicles.
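Taken together, these decisions amount to a per-class rendering rule. A hypothetical TypeScript sketch of that mapping follows; none of these names come from Waymo's renderer.

```typescript
// Hypothetical sketch of the per-class rendering rules described above.
// None of these names come from Waymo's renderer.

type ObjectClass = 'vehicle' | 'cyclist' | 'pedestrian' | 'cone';

interface RenderStyle {
  geometry: 'flatBox' | 'laserPoints' | 'mesh';
  underline?: 'blue' | 'white'; // ground marker beneath people on screen
}

function styleFor(objectClass: ObjectClass): RenderStyle {
  switch (objectClass) {
    case 'vehicle':
      // Laser points only reflect off surfaces facing the car, so a
      // point-cloud vehicle looks half-rendered; use a clean flat box.
      return { geometry: 'flatBox' };
    case 'cyclist':
      // Live laser points, so riders can see limbs moving in real time.
      return { geometry: 'laserPoints', underline: 'blue' };
    case 'pedestrian':
      return { geometry: 'laserPoints', underline: 'white' };
    case 'cone':
      return { geometry: 'mesh' };
  }
}
```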

On the passenger screen, other vehicles are rendered in simple, flat blue rectangles, while white spotlights illuminate pedestrians (top inset, above). Objects like pedestrians, cyclists, and even traffic cones are rendered realistically to reassure riders that the car sees them (bottom inset).

Every once in a while, you might glance up and wonder where you are. So in addition to showing what the car can see, we also show surrounding buildings. But because they’re less important, we dim them down so they don’t compete with cars and pedestrians in your view.

Getting you there safe and sound

Throughout the ride, you’ll see on-screen messages in an overlay we call a “status layer.” That’s where we get really clear about things like traffic signals, stop signs, railroad crossings, speed limits, school zones, and trip progress. It’s where we directly communicate what the car is doing and anticipate questions you might ask. We use the lower portion of the UI to provide messages that let you know, for example, exactly why the car has stopped. Or when the car is assessing a situation before making a turn at an intersection, we’ll let you know that we’re waiting for pedestrians to clear. Remember, the car can see in 360 degrees, but that’s not always true for the passenger in the back seat.
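One way to picture the status layer is as a lookup from internal driving state to plain-language copy. The states and strings in this sketch are invented for illustration, not Waymo's actual messages.

```typescript
// Hypothetical sketch of a status layer: translating internal driving
// state into plain-language copy for the lower portion of the UI.
// States and strings are invented for illustration.

type DrivingState =
  | 'stoppedAtRedLight'
  | 'yieldingToPedestrians'
  | 'waitingAtRailroadCrossing'
  | 'inSchoolZone';

const statusMessages: Record<DrivingState, string> = {
  stoppedAtRedLight: 'Stopped at a red light',
  yieldingToPedestrians: 'Waiting for pedestrians to clear',
  waitingAtRailroadCrossing: 'Waiting at a railroad crossing',
  inSchoolZone: 'Driving slowly through a school zone',
};

// The car sees 360 degrees; the rider may not. Answer "why did we
// stop?" before the rider has to ask.
function statusFor(state: DrivingState): string {
  return statusMessages[state];
}
```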

A “status layer” conveys Waymo’s decision-making and provides rider updates.

If we need your attention, our audio will chime in. But we use sound very judiciously, because we’re going for a “lean-back experience.” It’s only for moments when we know we should explain what’s going on, like if the car sees something not visible to the passenger and brakes suddenly. For the highest level of urgency, less-relaxing sounds let you know that an action is required, like if the door is still open. And we always pair sound with an on-screen notification, just to be clear and transparent.
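That tiering, with sound used sparingly, urgent tones reserved for required actions, and every sound paired with a message, might look something like the sketch below. Event names, tiers, and copy are all illustrative.

```typescript
// Hypothetical sketch of the tiered alerts described above: sound used
// sparingly, urgent tones reserved for required actions, and every
// sound paired with an on-screen message. Events, tiers, and copy are
// illustrative.

type Urgency = 'ambient' | 'informational' | 'actionRequired';

interface RiderAlert {
  urgency: Urgency;
  sound: string | null; // null keeps the lean-back feel: screen only
  message: string;      // always present, so a sound never arrives alone
}

function alertFor(event: 'suddenBrake' | 'doorOpen' | 'routeUpdate'): RiderAlert {
  switch (event) {
    case 'suddenBrake': // the car saw something the rider may not have
      return {
        urgency: 'informational',
        sound: 'soft-chime',
        message: 'Braking for a vehicle ahead',
      };
    case 'doorOpen': // rider action required before the trip continues
      return {
        urgency: 'actionRequired',
        sound: 'urgent-tone',
        message: 'Please close the door',
      };
    case 'routeUpdate': // routine; no need to break the calm
      return { urgency: 'ambient', sound: null, message: 'Route updated' };
  }
}
```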

A minute from your destination, we use voice prompts to make sure that you don’t forget your things. That sounds boring, but since there’s not a person there to remind you, we’ve found that it really does help. The last thing you hear is a gentle cascading sound, denoting arrival at your destination and a return to the home key of E. It’s meant to feel consistent with the beginning of the journey.

A lot of our design process was spent trying to understand how people react to our technology, especially when they’re the only person in the car. What are the “look up” moments, so to speak, when a rider becomes curious about what’s going on in and around the car? There were definitely more than I thought there’d be, and each of those scenarios came with its own complexities. How much information will people want to know, and how much do we need to tell them proactively?

We’re on the lookout for patterns in the feedback we get from our riders. People tend to trust our technology pretty quickly, but Waymo’s cars still drive differently than a person drives. Our cars can be cautious, and sometimes that gets misinterpreted. But there’s a remarkable amount of computing at play that’s making decisions. People don’t always understand a new technology until they experience it, so it’s on us to help them get there.