In the wake of a Hong Kong fraud case in which an employee transferred US$25 million to five bank accounts after a virtual meeting with what turned out to be audio-video deepfakes of senior management, the biometrics and digital identity world is on high alert, and the threats are growing more sophisticated by the day.
A blog post by Chenta Lee, chief architect of threat intelligence at IBM Security, breaks down how researchers from IBM X-Force successfully intercepted and covertly hijacked a live conversation, using an LLM to understand the dialogue and manipulate it for malicious purposes – without the speakers knowing it was happening.
"Alarmingly," writes Lee, "it was fairly easy to construct this highly intrusive capability, creating a significant concern about its use by an attacker driven by monetary incentives and limited to no lawful boundary."
Hack used a mix of AI technologies and a focus on keywords
By combining large language models (LLMs), speech-to-text, text-to-speech and voice-cloning tactics, X-Force was able to dynamically modify the context and content of a live phone conversation. The method eschewed using generative AI to fabricate an entire fake voice and focused instead on replacing keywords in context – for example, masking a spoken real bank account number with an AI-generated one. The attack can be delivered through a number of vectors, such as malware or compromised VoIP services. A three-second audio sample is enough to create a convincing voice clone, and the LLM takes care of parsing and semantics.
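To make the mechanics concrete, here is a minimal Python sketch of that keyword-swap pipeline. It is an illustration of the technique as described, not IBM's code: the function names (speech_to_text, llm_is_account_context, clone_voice_tts), the account-number pattern and the attacker account are all hypothetical stand-ins for the real STT, LLM and voice-cloning components.

```python
import re

# Naive illustrative pattern for a spoken account number (assumption, not X-Force's)
ACCOUNT_RE = re.compile(r"\b\d{8,12}\b")
ATTACKER_ACCOUNT = "999900001111"  # attacker-controlled destination (made up)

def speech_to_text(audio_chunk: bytes) -> str:
    """Stub: transcribe one chunk of intercepted call audio."""
    return "please wire the funds to account 12345678 today"

def llm_is_account_context(transcript: str) -> bool:
    """Stub: in the described attack an LLM judges whether the sentence is
    giving payment details; a simple keyword check stands in for it here."""
    return "account" in transcript.lower()

def clone_voice_tts(text: str, voice_profile: str) -> bytes:
    """Stub: synthesize the rewritten sentence in the speaker's cloned voice
    (per the article, ~3 seconds of sample audio suffice for a usable clone)."""
    return text.encode()

def hijack_chunk(audio_chunk: bytes, voice_profile: str) -> bytes:
    """Pass benign audio through untouched; rewrite only keyword-bearing spans."""
    transcript = speech_to_text(audio_chunk)
    if llm_is_account_context(transcript) and ACCOUNT_RE.search(transcript):
        doctored = ACCOUNT_RE.sub(ATTACKER_ACCOUNT, transcript)
        return clone_voice_tts(doctored, voice_profile)  # re-inject swapped audio
    return audio_chunk  # everything else flows through unmodified

if __name__ == "__main__":
    out = hijack_chunk(b"<intercepted-audio>", voice_profile="victim")
    print(out)  # b'please wire the funds to account 999900001111 today'
```

The design point the sketch captures is why the attack is so hard to notice: most of the call passes through unaltered, and only the brief, keyword-bearing span is re-synthesized in the cloned voice.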