Now Alexa may speak as your loved ones: Amazon’s new pitch.
At Amazon’s re:MARS conference, Rohit Prasad, senior vice president and head scientist for Alexa, demonstrated a startling new capability for the voice assistant: the ability to mimic voices.
So far, there’s no timeline whatsoever as to when or if this feature will be released to the public.
Stranger still, Amazon framed this copycatting ability as a way to commemorate lost loved ones.
It played a demonstration video in which Alexa read to a child in the voice of his recently deceased grandmother.
Prasad stressed that the company sought ways to make AI as personal as possible. “While AI can’t eliminate that pain of loss,” he said, “it can make the memories last.”
An Amazon spokesperson told Engadget that the new skill could create a synthetic voiceprint after being trained on as little as a minute of audio of the individual it’s supposed to be replicating.
Security experts have long held concerns that deepfake audio tools, which use text-to-speech technology to create synthetic voices, would pave the way for a flood of new scams.
Voice cloning software has enabled several crimes, such as a 2020 incident in the United Arab Emirates where fraudsters fooled a bank manager into transferring $35 million after impersonating a company director.
But serious fake audio crimes are still relatively unusual, and the tools available to scammers are, for now, relatively primitive.
Prasad noted that the company can produce this sort of audio output from as little as a minute of speech before continuing: “The way we made it happen is by framing the problem as a voice conversion task and not a speech generation task.”
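Prasad’s distinction points at a real difference in how the two approaches use data. A toy sketch of the idea, in Python with purely hypothetical function names (this is an illustration of the concept, not Amazon’s actual pipeline): full speech generation needs a model trained on many hours of the target speaker, while voice conversion can synthesize in a generic voice and then remap only the vocal characteristics, which is why a short voiceprint suffices.

```python
# Illustrative sketch only: function names and the string-based "audio"
# are hypothetical stand-ins, not Amazon's implementation.

def speech_generation(text: str, voice_model: dict) -> str:
    # Classic TTS: synthesize audio directly from text, which requires
    # a voice model trained on extensive recordings of that speaker.
    return f"audio<{voice_model['speaker']} says: {text}>"

def voice_conversion(text: str, target_voiceprint: dict) -> str:
    # Voice conversion: first synthesize speech in a generic voice,
    # then transform its vocal characteristics to match a short
    # voiceprint of the target speaker (roughly a minute of audio,
    # per Amazon's claim).
    generic_audio = speech_generation(text, {"speaker": "generic"})
    return generic_audio.replace("generic", target_voiceprint["speaker"])

print(voice_conversion("Once upon a time...", {"speaker": "grandma"}))
# prints: audio<grandma says: Once upon a time...>
```

The design point is that the hard, data-hungry step (generating intelligible speech from text) is done once with a generic voice, and only the comparatively lightweight timbre-mapping step depends on the target speaker.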
There aren’t many more details beyond this initial demonstration.
Reuters reports that Prasad said the goal of the technology is to “make memories last” after “so many of us have lost someone we love,” a framing that feels unusually heavy for a consumer feature.
It does bring to mind the Takara Tomy smart speaker we reported on a few months back, which could imitate a parent’s voice at bedtime so they could read to their kids even when physically unavailable.
It’s a neat idea for parents who often travel for work or are just sick of reading the same book.
But it raises the question of whether a deepfake copy of someone’s voice can offer the same comfort and security as a warm-blooded person in the room.
To that end, this kind of technology also raises harder questions. What emotions would hearing the voice of a loved one lost to COVID or another incurable disease evoke? Would it simply sharpen the anger that the person is gone? How does interacting with the deceased fit into active grieving: does it interrupt the process or help it along? And what would the deceased themselves have thought of it?
There is also, of course, the security risk of deepfake impersonation, and it’s unclear how the feature’s voice samples will be stored.
Amazon’s re:MARS conference is meant to highlight the company’s ventures in ambient computing, including Alexa’s functionality.
The “MARS” stands for machine learning, automation, robots, and space.
It’s expected that Amazon will be sharing more news bits at its Las Vegas conference over the next few days.