Adapting to us
A new interaction language that considers personal, social, and spatial awareness is just the beginning. As technology progresses, the ultimate goal is devices that truly understand us, so we don't need to spend precious moments trying to explain ourselves. This means less time managing the technology in your life and the distractions it brings. Take something as simple as silencing an alarm. As soon as you wake up, you have to reach for the phone, pick it up, bring it to your face, and find a little button to press. Once you're there, you see a push notification with news you can't ignore, or maybe you hop on Twitter, and soon you're down the rabbit hole of your digital life, before you've even gotten out of bed.

Now imagine a different scenario: when you're far away from your device, interface elements appear larger automatically, shrinking as you approach; your voice assistant provides more information without prompting, because it understands you can't see the screen. Patterns like these could mean an end to the small but time-consuming microtasks, like switching your phone on and off, that keep you in the digital world longer than you intended. And they would free us up to spend more time connecting with other people and being present in the physical world.
Eventually, there might even be real-time learning for gestures, so the machine can adapt and relate to you specifically, as a person. What once felt robotic will take on new meaning. More importantly, the next generation of Soli could embrace the beauty and diversity of natural human movement, so each of us could create our own way of moving through the world with technology—just as we do in everyday life.