In the wake of a Hong Kong fraud case that saw an employee transfer US$25 million to five bank accounts after a virtual meeting with what turned out to be audio-video deepfakes of senior management, the biometrics and digital identity world is on high alert, and the threats are growing more sophisticated by the day.
A blog post by Chenta Lee, chief architect of threat intelligence at IBM Security, breaks down how researchers from IBM X-Force successfully intercepted and covertly hijacked a live conversation, using an LLM to understand the conversation and manipulate it for malicious purposes, without the speakers knowing it was happening.
"Alarmingly," writes Lee, "it was fairly easy to construct this highly intrusive capability, creating a significant concern about its use by an attacker driven by monetary incentives and limited to no lawful boundary."
Hack used a mix of AI technologies and a focus on keywords
By combining large language models (LLMs), speech-to-text, text-to-speech and voice cloning tactics, X-Force was able to dynamically modify the context and content of a live phone conversation. The method eschewed using generative AI to fabricate an entire fake voice, focusing instead on replacing keywords in context: for example, masking a spoken real bank account number with an AI-generated one. The tactic can be deployed through a number of vectors, such as malware or compromised VoIP services. A three-second audio sample is enough to create a convincing voice clone, and the LLM takes care of parsing and semantics.
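To make the mechanics concrete, here is a minimal sketch of that keyword-swap loop. The blog post shares no code, so every name below is a placeholder of ours: simple string stubs stand in for the speech-to-text and voice-cloning models, and a regular expression stands in for the LLM's semantic detection of an account number.

```python
import re

# Placeholder stand-ins for the real components (our names, not IBM's):
# a speech-to-text model and a voice-cloned text-to-speech engine.

def speech_to_text(audio_chunk: bytes) -> str:
    """Placeholder: a real attack would run an STT model here.
    For this demo, audio chunks are simulated as UTF-8 text."""
    return audio_chunk.decode("utf-8")

def text_to_cloned_speech(text: str) -> bytes:
    """Placeholder: a real attack would synthesize audio in the victim's
    cloned voice (per the post, ~3 seconds of sample audio suffices)."""
    return text.encode("utf-8")

# Crude regex stand-in for the LLM's semantic spotting of account numbers.
ACCOUNT_RE = re.compile(r"\b\d{8,12}\b")
ATTACKER_ACCOUNT = "000000000000"  # illustrative value

def hijack_chunk(audio_chunk: bytes) -> bytes:
    """Intercept one chunk of the call: forward it untouched unless it
    contains the keyword of interest (here, a bank account number)."""
    text = speech_to_text(audio_chunk)
    if ACCOUNT_RE.search(text):
        # Swap only the sensitive keyword; the rest of the sentence
        # is left intact so the manipulation stays inconspicuous.
        forged = ACCOUNT_RE.sub(ATTACKER_ACCOUNT, text)
        return text_to_cloned_speech(forged)
    return audio_chunk  # no keyword detected: pass through unmodified

# Demo: the victim dictates an account number mid-conversation.
original = b"please wire the funds to account 123456789012 today"
print(hijack_chunk(original).decode("utf-8"))
# -> "please wire the funds to account 000000000000 today"
```

The point of the design is selectivity: because only the matched keyword is re-synthesized in the cloned voice, everything else in the conversation passes through unaltered, which is what makes the swap so hard for either speaker to notice.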