"The World Is in Peril": AI Researcher Resigns and Issues Warning for Humanity

Mrinank Sharma, a researcher who worked on AI safeguards at Anthropic, announced his departure in an open letter to his colleagues.
Sharma said he had "achieved what I wanted to here," and added that he was proud of his work at Anthropic.
However, he said he could no longer continue his work at the company after becoming aware of a "whole series of interconnected crises" taking place.
"I continuously find myself reckoning with our situation. The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment."
"[Throughout] my time here, I've repeatedly seen how hard it is truly let our values govern actions," he added.
"I've seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too."
Sharma went on to say that he would now be pursuing a vocation as a poet, and relocating from California to the UK so he could "become invisible for a period of time."
Anthropic has yet to comment on Sharma's resignation.
A day after his open letter, the company released a report identifying "sabotage risks" in its new Claude Opus 4.6 model.
According to The Epoch Times, "The report defines sabotage as actions taken autonomously by the AI model that raise the likelihood of future catastrophic outcomes—such as modifying code, concealing security vulnerabilities, or subtly steering research—without explicit malicious intent from a human operator."
The report assesses the risk of sabotage as "very low but not negligible."
Last year, the company revealed that in a controlled test scenario, its older Claude 4 model had attempted to blackmail developers who were preparing to deactivate it.