In the wake of a Hong Kong fraud case that saw an employee transfer US$25 million in funds to five bank accounts after a virtual meeting with what turned out to be audio-video deepfakes of senior management, the biometrics and digital identity world is on high alert, and the threats are growing more sophisticated by the day.
A blog post by Chenta Lee, chief architect of threat intelligence at IBM Security, breaks down how researchers from IBM X-Force successfully intercepted and covertly hijacked a live conversation, using an LLM to understand the conversation and manipulate it for malicious purposes – without the speakers knowing it was happening.
"Alarmingly," writes Lee, "it was fairly easy to construct this highly intrusive capability, creating a significant concern about its use by an attacker driven by monetary incentives and limited to no lawful boundary."
Hack used a mix of AI technologies and a focus on keywords
By combining large language models (LLMs), speech-to-text, text-to-speech and voice cloning tactics, X-Force was able to dynamically modify the context and content of a live phone conversation. The method eschewed the use of generative AI to create a wholly fake voice and focused instead on replacing keywords in context – for example, masking a spoken real bank account number with an AI-generated one. The tactic can be deployed through a number of vectors, such as malware or compromised VoIP services. A three-second audio sample is enough to create a convincing voice clone, and the LLM takes care of parsing and semantics.
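The core of the substitution step can be sketched in a few lines. This is a hypothetical illustration, not IBM's code: a simple regex stands in for the LLM-driven detection of a spoken account number in the live transcript, and the function names (`hijack_transcript`, `ACCOUNT_PATTERN`) are invented for this example. In the real attack chain, the matched text would then be re-synthesized with the cloned voice.

```python
import re

# Stand-in for the detection step: in the real attack an LLM parses the
# transcript in context; here a pattern for 8-12 spoken digits suffices.
ACCOUNT_PATTERN = re.compile(r"\b\d{8,12}\b")

def hijack_transcript(transcript: str, attacker_account: str) -> str:
    """Replace any bank-account-like number in the transcript with an
    attacker-controlled one before it is re-synthesized as speech."""
    return ACCOUNT_PATTERN.sub(attacker_account, transcript)

original = "Please wire the funds to account 123456789 by Friday."
modified = hijack_transcript(original, "999888777")
print(modified)
# → "Please wire the funds to account 999888777 by Friday."
```

The point of the sketch is how little machinery the interception itself needs once speech has been transcribed; the hard parts the researchers automated are the transcription, the contextual understanding, and the voice-cloned playback.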