Siri has a new shape. At its WWDC keynote on Monday, Apple announced several new product and software updates, along with a continued focus on artificial intelligence (AI) and machine learning. One of the new products is HomePod – Apple’s entry into the home hub/virtual assistant market currently led by Amazon’s Echo and Google Home. Device manufacturers are racing to become the dominant platform not only on your phone, but also in your home, in your car, and on your body. It was only a matter of time before Apple made more concrete moves into home hardware to address the broader consumer ecosystem for virtual assistants and voice technology.
In October, I covered the release of the Google Home and the Google Pixel – the company’s flagship smartphone – describing the announcement as the beginning of an AI “arms race” in which device manufacturers aim to make voice a dominant means of interacting with technology. I recently wrote further on this topic, describing some of the tasks AI is currently optimized to perform, as well as where I see it headed (hint: voice is the new text).
Apple already had a presence in the home through HomeKit, which lets users control smart appliances such as thermostats, lightbulbs, and security systems, with the iPhone acting as a central hub. HomePod will now serve as the hub for HomeKit-connected products, much as the Amazon Echo can be used to turn lights on and off or adjust the heat. Beyond serving as a smart home hub, HomePod can play music (exclusively from Apple Music, which may be a deal breaker for some), perform basic tasks such as setting a timer or reporting weather, traffic, or news, and answer questions via calculations or an Internet search.
For anyone who has been following the smart home/virtual assistant markets, Apple is a fairly late entrant with HomePod. But with the company’s range of products, from smartphones to smartwatches to HomeKit, and its investments in both artificial intelligence and machine learning, creating a home hub product makes logical sense, especially if that product makes use of Apple’s existing Siri voice interface. Siri has been a part of the iPhone since 2011, and Apple recently added Siri to its Mac line of computers as well.
Apple is positioning HomePod primarily as a speaker for playing music, differentiating it from virtual assistants like the Amazon Echo, which offer weaker audio hardware, and from conventional speakers, which allow neither voice interaction nor a wider range of tasks. But the more likely reason someone would purchase the HomePod over the Echo or the Google Home is not music consumption; it is entrenchment in the Apple ecosystem. This brand loyalty is creating very distinct offerings in the smart home and virtual assistant markets, with limited compatibility between them. For example, Apple wants you to use its music service, rather than Amazon’s or Google’s, with the HomePod. Once you buy into one brand, you’re pretty much stuck.
It’s no surprise that Apple has moved into the connected home and broadened Siri’s device reach. (It’s actually more of a surprise that Apple took so long to do so.) What will be interesting to watch is how both third-party and native integrations are built out across all these platforms. Apple has an opportunity to create seamless, intuitive experiences that transfer between devices (from phone, to watch, to speaker, to computer) and enable more meaningful interactions with technology. That is one of the main sources of value in voice as an interface: it can provide an easier, more intuitive way to use a device. If Apple opens Siri up to additional third-party apps through APIs, the value of the HomePod increases greatly. The strength of Apple’s device ecosystem gives the company a clear opportunity to provide a cohesive, unified experience in which users can switch between devices seamlessly and maintain a continued interaction.
Right now, the tasks that Siri is equipped to perform are so narrow and unambiguous that the time savings from using voice versus manually interacting with the device are slim. Interestingly enough, Amazon just released the Echo Show, a digital assistant device that also has a touch screen. In my opinion, the inclusion of the touchscreen demonstrates how far voice interaction needs to come before it will be the dominant means of using a device, and really highlights the shortcomings that exist with current iterations of the technology. For most tasks, it’s just easier to use a screen.
In covering this market, my view is that Apple’s announcement is nothing revolutionary. But it does demonstrate the continued importance companies place on creating an ecosystem for their devices and forging new touchpoints in consumers’ lives through availability in cars, in homes, and on their person. I wrote back in December that voice will be the new text, and I still believe it. But it’s clear we are still some ways away from that. For now, I’m content to ask Siri to set a timer or play The Shins, while imagining the exciting new kinds of voice interactions we will be having in the near future.