Buzz surrounding Apple’s Tuesday event has never been higher, as consumers eagerly await the announcement of the next-generation iPhone. But the new hardware could take a back seat to a bigger announcement: a voice control feature that could ship with iOS 5, the latest version of Apple’s mobile operating system.
The voice control feature — referred to by Apple pundits and bloggers as “Assistant” — could change the way people interact with their iPhones, using conversation with an artificially intelligent assistant to help make decisions and schedule daily activities.
“This is an area in which Apple has been trailing Google and is playing catch-up,” Forrester analyst Charles Golvin said in an interview. “Similar to the notifications improvements [in the iOS 5 beta] and the ability to use the volume control button as a camera shutter.”
This type of service has been a long time coming for Apple. Former Apple CEO John Sculley first described such an experience in his 1987 book Odyssey. He called the concept the “Knowledge Navigator,” and Apple subsequently released several video demos over the next several years illustrating how the idea would work. The Knowledge Navigator concept centers on a tablet computer (envisioned decades before Apple unveiled the iPad) that incorporates advanced text-to-speech functionality, a powerful speech comprehension system and a multi-touch gesture interface much like the one used in iOS.
Back in the late ’80s, Sculley’s lofty visions of the future were the stuff of dreams. Today, we’re much closer to this becoming a reality. We’ve got intuitive, portable touchscreen devices that house powerful processors with enough memory to handle such demanding tasks.
To boot, we’ve got chips and software that can handle the heavy processing required for complex speech analysis.
Apple had the hardware side of its Knowledge Navigator concept pretty much nailed down with the latest iterations of the iPhone and iPad, but it lacked the text-to-speech and speech-comprehension chops. That is, until a start-up named Siri came along.
Siri began as a voice recognition app for the iPhone. It works much like Google’s voice search, which is built into the Google Search app on iOS and offered as a standalone app on Android and other platforms. With Siri, though, instead of just searching for a specific topic, place or person with your voice, you give more descriptive instructions. One command, for example, might be “Find the nearest good Chinese food restaurant.” At launch, Siri was integrated with about 20 different web information services, so rather than just taking you to the search results page for “good Chinese food restaurants,” it would bring up Yelp results for the highest-rated restaurants near your GPS-determined location.
But it’s much more than a digital Zagat guide. Siri calls itself a “Virtual Personal Assistant.” Rather than just issuing the app commands or Google-style search phrases, you interact with it through conversation. Saying something like “I’d like a table for six at Flour and Water” would prompt the app to make a reservation through OpenTable, and if you haven’t provided enough information for it to complete a task, it will ask you to elaborate. Siri also draws on your personal preferences and interaction history to better accomplish specific tasks; the more you use it, the better it performs.