Apple has unveiled a new feature set to be introduced with iOS 17, called Personal Voice, which enables iPhones and iPads to generate a digital replica of a user’s voice. This functionality works in tandem with the Live Speech feature, allowing users to have typed text spoken in their own voice during phone calls and on platforms like FaceTime. To create a Personal Voice, users read along with a series of randomized text prompts, recording 15 minutes of audio on their iPhone or iPad.
The Live Speech feature complements this by letting users type messages on their devices, which are then read out loud. Frequently used phrases can also be saved as shortcuts for quick access. If users have created a Personal Voice model, the phrases are played back in their voice; otherwise, they are read aloud by the device’s digital assistant, Siri.
The feature primarily targets individuals affected by conditions like ALS (amyotrophic lateral sclerosis), which can lead to a loss of speech over time. Philip Green, a board member and ALS advocate at the charity Team Gleason, has experienced significant changes to his voice since being diagnosed with ALS in 2018. Highlighting the importance of communicating with loved ones, he expressed his appreciation for the ability to convey affection in a voice that sounds like his own, adding that being able to create a synthetic voice on an iPhone in just 15 minutes is extraordinary.
While Apple has not provided specific timing details, this feature is among several new tools scheduled to arrive on Apple devices later this year. Another notable addition, called Point and Speak, lets users point a finger at an object in front of the camera and have the associated text read aloud, such as the labels on microwave buttons. This particular feature will work only on Apple devices equipped with a built-in LiDAR sensor, found in some of the more advanced iPhone and iPad models.
For those affected by degenerative diseases, loss of mobility and the ability to communicate can be particularly challenging. It is estimated that 80% to 90% of individuals with ALS will experience some form of speech impairment. However, with the upcoming advancements from Apple, their iPhones may offer not only the power of speech but also the ability to communicate with loved ones using their synthesized voices.
In commemoration of Global Accessibility Awareness Day on May 18, Apple announced a range of new accessibility features for iPhone, iPad, and Mac devices. These include the ability to use the Magnifier app on LiDAR-equipped iPhones or iPads to read aloud any text the user points at, such as on greeting cards or instructions. Additionally, the Live Speech feature reads out whatever is typed on the phone, while the Personal Voice feature, developed in collaboration with the non-profit ALS awareness foundation Team Gleason, allows users to have typed text spoken in a synthesized version of their own voice. These advancements highlight Apple’s commitment to enhancing accessibility for individuals with diverse needs.