Apple is researching ways to improve Siri to better understand people who stutter, according to a new report.
The Wall Street Journal has published a new report detailing how companies train their voice assistants to handle atypical speech. According to the article, Apple has built a database of 28,000 audio clips from podcasts featuring people who stutter, which it uses to train Siri. The collected data is intended to improve the assistant's speech recognition for atypical speech patterns. An Apple spokesperson confirmed this to the publication.
Apple plans to publish further details
In addition to improving how Siri understands people with atypical speech patterns, Apple has also implemented a "Hold to Talk" feature, which lets users control how long Siri listens. This prevents Siri from cutting off users who stutter before they finish speaking. The voice assistant can also be used entirely without voice input via the "Type to Siri" feature, first introduced in iOS 11. According to the Wall Street Journal, Apple plans to document its work on Siri in a study to be released this week, which will provide more details on the company's efforts.
Google Assistant and Alexa are also set to improve
Google and Amazon are likewise working on training Google Assistant and Alexa to better understand all users, including those with speech impairments. According to the report, Google is collecting atypical speech data for this purpose. Amazon, for its part, announced an Alexa Fund-backed effort in December that lets people with speech impairments train an algorithm to recognize their unique voice patterns.