SXSW 2018: The Future of AI Assistants

Alexa, Google Home, Siri, and Cortana will learn to adjust to your changing life

Photo: Stephen Barnes/Technology/Alamy

In the years to come, what will be the biggest improvement in AI-powered digital assistants? It’s likely to be the ability to accommodate a fundamental aspect of being human: the fact that we all have different personas, showing different facets of ourselves depending on where we are and who we are with, and that those personas change over time. Different personas want different things from their AI assistants. An assistant that understands your personal circumstances is less likely to remind you to pick up your rash prescription as you drive by the pharmacy with other people in the car, bug you about work email at home, or keep suggesting fun nightclubs after you’ve just had a baby.

That was the message from Sunday’s panel on “Designing the Next Wave of Natural Language and AI” at the SXSW festival in Austin, Texas. The panel included Ben Brown from Google; Ed Doran from Microsoft; Karen Giefer from Frog; and Andrew Hill from Mercedes-Benz.

The promise of an AI assistant is that it can learn to anticipate your needs and wants without you having to explicitly program in information about yourself. However, designers have already seen some unanticipated problems with the first generation of these assistants. For example, while testing a system designed to automatically learn where the user lived and so suggest better commuting routes, “for one of my lead developers, it figured that his home was a bar, because he was there at midnight every day and didn’t move around much,” said Doran. Consequently, Doran says, it’s important for future systems to be “up front about what [the system] has learned, and let the customer decide if that’s right—and even more to the point, is it still right?”

An even tougher challenge will be dealing with the fact that what’s right can change depending on the circumstances. “With interaction with humans, the context is really important,” said Hill. Users could flag their context for the system—telling it, for example, that they are taking a vacation day—but Hill thinks that improving such context switching will be a focus for AI research and development, saying: “How much [context awareness] can be learned automatically by the system itself, so you get a much more natural interaction?”

As AIs and natural language systems become more pervasive, it will also be increasingly important for system creators to ask “have I designed the AI in an inclusive manner?” said Doran. He added: “Do I have a big enough training data set, representing a much larger group of people? Have I done a good enough job at looking at the transparency of my AI, so that it’s making ethical, trustworthy decisions?”

For Giefer, the increased availability of AI tools to a wider audience heralds positive improvements in the technology in the years to come. “Once you open something up beyond a core industry or discipline, things really start moving quickly,” she said.

Google’s Brown believes one result of this democratization of the technology will be an explosion in unexpected uses of AI. He’s particularly interested in artistic uses as a driver of unexpected innovation, saying: “Often, those creative executions cause us to push the edges of our technology, and they teach us a lot about things we didn’t know, such as object recognition, how we look at images, how we deal with language.”

Source: IEEE Spectrum Robotics