App developers: Should you voice-enable your app?

Nearly a third of people regularly use speech to do mobile searches. Is it time for app developers to take voice seriously as a user interface?

It’s time to ask:

Alexa, what’s coming after the smartphone?

For the past ten years, people have been wondering how long the smartphone will remain the dominant form factor for personal communication.

Yes, mobile devices have indeed become faster, lighter, and just generally better at doing everything (photos, music, paying for stuff, etc.). But smartphones are now so good that consumers are delaying buying new ones. According to Strategy Analytics, the smartphone replacement cycle for US owners is now 33 months.

The cycle is a problem for the industry. A new platform would give every stakeholder in the market a real boost. Could it be based around voice?

It’s possible. One of the more surprising tech developments of the past few years has been the rise of speech as an interface. It’s happened gradually, but people are becoming more used to talking to devices via Siri, Google Assistant, Cortana, Bixby, and others.

According to the Global Web Index 2018 Insight Report, 27 percent of the global online population now uses voice search on a mobile device. That habit has helped to fuel the rise of voice assistants such as Alexa and the smart speakers, such as Amazon Echo, that host them. It has also helped ‘hearables’ such as Apple AirPods move into the mainstream. According to Statista, around 3.25 billion digital voice assistants are in use in devices around the world, and the firm suggests that by 2023 the number could reach eight billion.

Now, the big digital giants want to drive this market forward even more.

For example, Amazon has been trying to make Alexa a platform, rather than a device. In early 2019, it revealed that 4,500 different manufacturers had made more than 28,000 Alexa-compatible devices. Amazon has even previewed Alexa-based eyewear. Its Echo Frames (available by invitation only) pair with any Android phone to read out notifications, make phone calls, and play audio.

So, this leaves a huge question. If voice is the next big interface, what should you as an app developer do?

Well, one obvious option is to voice-enable your existing app. Google makes the process relatively easy on Android via Google Voice Actions, which work across Android phones, tablets, and wearables.

Google Voice Actions let users complete tasks in your app using voice commands. They can say ‘OK Google’ followed by a command such as ‘take a note.’ Their words will wake up your app, which performs the action for them. This gets users to your native mobile experience more quickly than tapping through icons and menus on the touch screen. You can also configure your app to work with a voice-activated smart speaker such as Google Home or Amazon Echo. Hundreds of apps are now linked this way, from Spotify to BuzzFeed to Domino’s.
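As a concrete sketch of how the ‘take a note’ example works on Android: an app declares in its manifest that it can handle the system voice action for note creation, and Android routes the spoken command to it. This is an illustrative fragment only; the activity name `NoteActivity` is hypothetical, and the exact filter your app needs may differ, so check Google’s current documentation.

```xml
<!-- AndroidManifest.xml fragment (sketch): lets "OK Google, take a note"
     launch this activity. NoteActivity is a made-up example name. -->
<activity android:name=".NoteActivity">
    <intent-filter>
        <action android:name="com.google.android.gms.actions.CREATE_NOTE" />
        <category android:name="android.intent.category.DEFAULT" />
        <data android:mimeType="*/*" />
    </intent-filter>
</activity>
```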

Of course, enabling voice will not apply or make sense for all apps. But if yours is the kind that might benefit from integration with a smart speaker, then it’s worth considering. Google makes it pretty straightforward to create a dedicated ‘Action’ for a Google Home device, and Amazon publishes similar guides for getting started with a ‘Skill’ for Echo.
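At its core, an Echo Skill is a handler that receives a JSON request from Alexa and returns a speech response. Here is a minimal sketch in plain Python with no SDK, assuming only the standard Alexa request/response envelope; the intent name `OrderStatusIntent` and the pizza reply are made-up examples, not part of any real Skill.

```python
# Sketch of an Alexa Skill request handler (e.g. an AWS Lambda entry point).
# 'OrderStatusIntent' is a hypothetical intent name for illustration.

def build_response(speech_text, end_session=True):
    """Wrap plain text in the Alexa JSON response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }

def handle_request(event):
    """Dispatch on the request type Alexa sends in the event JSON."""
    request = event["request"]
    if request["type"] == "LaunchRequest":
        # User opened the Skill without asking for anything specific.
        return build_response("Welcome. What would you like to do?",
                              end_session=False)
    if request["type"] == "IntentRequest":
        if request["intent"]["name"] == "OrderStatusIntent":
            # A real Skill would look up the order here.
            return build_response("Your pizza is on its way.")
    return build_response("Sorry, I didn't catch that.")
```

Calling `handle_request` with an `IntentRequest` for `OrderStatusIntent` returns the envelope containing the spoken reply; a real deployment would wire this function up as the Skill’s endpoint.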

Even so, it is far too early to write off the smartphone. People will always want screens. How else will they watch screaming goats on YouTube?

However, there is little doubt that many specific actions can be done far more quickly and efficiently using some combination of wearable and voice. As the conversation about voice grows, make sure you are listening.