A recent report from Consumer Reports has found that four of the six companies it examined offering AI voice cloning software fail to provide meaningful safeguards against misuse of their products. The findings have sparked concern among experts and consumers alike that the technology could be abused for deceptive purposes such as impersonation scams.
The report evaluated AI voice cloning services from six companies: Descript, ElevenLabs, Lovo, PlayHT, Resemble AI, and Speechify. It found that ElevenLabs, Speechify, PlayHT, and Lovo "required only that researchers check a box confirming that they had the legal right to clone the voice or make a similar self-attestation." Because none of these services verifies that claim, nothing prevents a user from cloning another person's voice without that person's knowledge or consent.
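To make the gap concrete, here is a minimal, hypothetical sketch in Python contrasting the checkbox-style self-attestation the report criticizes with a challenge-based consent check of the kind some vendors use, such as requiring the speaker to read a freshly generated passphrase aloud. The function names and the `transcribe` callback are illustrative assumptions, not any vendor's actual API.

```python
import secrets

WORDS = ["amber", "falcon", "granite", "meadow", "quartz", "willow"]

def checkbox_flow(box_ticked: bool) -> bool:
    """Weak safeguard: accept whatever the uploader attests."""
    # Nothing here verifies that the person whose voice is being cloned
    # actually consented; any uploaded recording passes.
    return box_ticked

def challenge_flow(transcribe) -> bool:
    """Stronger safeguard: require live proof of cooperation.

    `transcribe` stands in for speech-to-text run on a recording made
    on the spot; a production system would also score voice similarity.
    """
    passphrase = " ".join(secrets.choice(WORDS) for _ in range(4))
    print(f"Record yourself saying: '{passphrase}'")
    heard = transcribe(passphrase)  # placeholder for capturing live audio
    # A pre-existing clip of the victim cannot contain a phrase generated
    # seconds ago, so replayed recordings fail this check.
    return heard.strip().lower() == passphrase

if __name__ == "__main__":
    print(checkbox_flow(box_ticked=True))   # True: box ticked, nothing proven
    print(challenge_flow(lambda p: p))      # True only with live cooperation
```

The point of the challenge design is that an attacker holding only old recordings of a victim cannot produce a phrase generated moments earlier, whereas a checkbox is satisfied by a single click.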
Opening an account with these companies required nothing more than a name and an email address. That low barrier to entry, combined with unverified self-attestation, has drawn criticism from experts who argue the companies' business practices run afoul of existing consumer protection laws.
One such expert is Grace Gedye, a policy analyst at Consumer Reports and the author of the voice cloning report. She acknowledges that open-source voice cloning software complicates matters, but believes it is still worthwhile to push American companies to do a better job of protecting consumers. "I actually think there's a good argument that can be made that what some of these companies are offering runs afoul of existing consumer protection laws," she said.
The concerns raised by Gedye's report extend beyond the six companies evaluated to the broader use of AI voice cloning. Speech synthesis has been a research focus for decades, but only recently, thanks to advances in machine learning, has voice cloning become convincing, easy to use, and widely accessible. The software has legitimate uses, such as narrating audiobooks, giving a voice to people who cannot speak, and powering customer support lines, but it is also easily misused.
That misuse is not limited to personal or private conversations. Numerous impersonation scams have been reported in which a scammer calls a victim and uses a cloned voice of a family member or other trusted person to trick them into revealing sensitive information or sending money. These scams can have serious consequences: victims may lose their savings or have their identities stolen.
In one notable case, police in Baltimore, Maryland, arrested a high school's former athletic director for allegedly using voice cloning software to make it sound as if the school's principal had made racist and antisemitic remarks. The case highlights the potential dangers of AI voice cloning and the need for more robust safeguards.
In response to these concerns, some large commercial AI vendors have taken steps to mitigate the risks associated with their products. Microsoft, for example, has chosen not to publicly release its VALL-E 2 project, citing the risk of potential misuses "such as spoofing voice identification or impersonating a specific speaker." Similarly, OpenAI has limited access to its Voice Engine for speech synthesis.
The US Federal Trade Commission (FTC) last year finalized a rule that prohibits AI impersonation of governments and businesses. It subsequently proposed to extend that ban to prohibit the impersonation of individuals, but no further progress appears to have been made toward that end.
Given the current regulatory environment in the United States, where the Consumer Financial Protection Bureau is under threat from efforts to eliminate it, many experts believe that state-level regulation might be more likely than further federal intervention. "We've seen a lot of interest from states on working on the issue of AI specifically," said Gedye. "Most of what I do is work on state-level AI policy and there's a bunch of ambitious legislators who want to work on this issue."
However, it remains to be seen whether these efforts will lead to meaningful changes in the way companies handle AI voice cloning software. In the meantime, consumers remain vulnerable to scams and impersonation attempts.
In conclusion, the Consumer Reports findings underscore the need for stronger safeguards around AI voice cloning software. Some large commercial vendors have moved to limit the risks of their products, but more must be done to protect consumers, and the current regulatory gap warrants closer attention and action.