June 30, 2022

Amazon’s Alexa could soon mimic voices of dead relatives to give technology more ‘human attributes’

LAS VEGAS — Amazon’s Alexa might soon replicate the voices of family members, even if they’re dead.

The capability, unveiled at Amazon’s Re:Mars conference in Las Vegas, is in development and would allow the virtual assistant to mimic the voice of a specific person based on less than a minute of provided recording.

Rohit Prasad, senior vice president and head scientist for Alexa, said at the event Wednesday that the goal behind the feature was to build greater trust in users’ interactions with Alexa by adding more “human attributes of empathy and affect.”

“These attributes have become even more important during the ongoing pandemic when so many of us have lost ones that we love,” Prasad said. “While AI can’t eliminate that pain of loss, it can definitely make their memories last.”

In a video played by Amazon at the event, a young child asks, “Alexa, can Grandma finish reading me the Wizard of Oz?” Alexa acknowledges the request, then switches to another voice mimicking the child’s grandmother and continues reading the book in that voice.

To create the feature, Prasad said the company had to learn how to produce a “high-quality voice” from a shorter recording, as opposed to hours of recording in a studio. Amazon did not provide further details about the feature, which is bound to spark more privacy concerns and ethical questions about consent.

Amazon’s push comes as competitor Microsoft said earlier this week that it was scaling back its synthetic voice offerings and setting stricter guidelines to “ensure the active participation of the speaker” whose voice is recreated. Microsoft said Tuesday it is limiting which customers can use the service, while also continuing to highlight acceptable uses such as an interactive Bugs Bunny character at AT&T stores.

“This technology has exciting potential in education, accessibility, and entertainment, and yet it is also easy to imagine how it could be used to inappropriately impersonate speakers and deceive listeners,” said a blog post from Natasha Crampton, who heads Microsoft’s AI ethics division.