We're racing towards a future in which devices will be able to read our thoughts.
You see signs of it everywhere, from brain-computer interfaces to algorithms that detect emotions from facial scans. And though the tech remains imperfect, it's getting closer all the time: now a team of scientists say they've developed a model that can generate descriptions of what people's brains are seeing by simply analyzing a scan of their brain activity.
They're calling the technique "mind captioning," and it may offer an effective way to transcribe what someone is thinking, with impressively comprehensive and accurate results.
"This is hard to do," Alex Huth, a computational neuroscientist at the University of California, Berkeley, told Nature of the study, which was published in the journal Science Advances. "It's surprising you can get that much detail."
The implications of such technology are a double-edged sword: on the one hand, it could give a voice to people who struggle to speak due to stroke, aphasia, and other medical conditions, but on the other, it may threaten our mental privacy in an age when so many other facets of our lives are already surveilled and codified. The team stresses, however, that the model can't decode your private thoughts. "Nobody has shown you can do that, yet," Huth added.
The researchers' new technique relies on several AI models. First, a deep language model analyzed the text captions of more than 2,000 short videos, generating a unique "meaning signature" for each. Then another AI tool was trained on MRI brain scans of six participants as they watched the same videos, learning to match the brain activity to the signatures.
Combined, the resulting brain decoder could analyze a new brain scan from someone watching a video and predict its meaning signature, while an AI text generator searched for sentences that matched the predicted signature, creating dozens of candidate descriptions and refining them along the way.
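The overall shape of that pipeline can be sketched in a few lines of code. This is a toy illustration under stated assumptions, not the study's actual models: random vectors stand in for real language-model meaning signatures and fMRI data, a simple ridge regression plays the role of the brain decoder, and the "candidate search" just scores known descriptions by cosine similarity. All names and dimensions here are illustrative.

```python
# Toy sketch of the "mind captioning" pipeline: signatures -> brain decoder
# -> candidate search. Random data stands in for real captions and scans.
import numpy as np

rng = np.random.default_rng(0)
n_videos, n_voxels, sig_dim = 200, 500, 64

# Step 1: each training video's caption gets a "meaning signature"
# (in the study, produced by a deep language model; here, random vectors).
signatures = rng.standard_normal((n_videos, sig_dim))

# Step 2: simulated brain activity, a noisy linear image of the signatures,
# so a linear decoder has something it can actually recover.
true_map = rng.standard_normal((sig_dim, n_voxels))
brain = signatures @ true_map + 0.1 * rng.standard_normal((n_videos, n_voxels))

# Step 3: fit a ridge-regression decoder mapping brain activity -> signature.
lam = 1.0
W = np.linalg.solve(brain.T @ brain + lam * np.eye(n_voxels),
                    brain.T @ signatures)

def decode(scan):
    """Predict a meaning signature from a brain scan."""
    return scan @ W

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Step 4: candidate search. Score candidate descriptions (here, the known
# signatures themselves) against the prediction and keep the best match,
# mimicking the generate-and-refine loop described above.
pred = decode(brain[0])
scores = [cosine(pred, s) for s in signatures]
best = int(np.argmax(scores))
print(best)  # the decoder should identify video 0's description
```

The real system differs in an important way: instead of ranking a fixed candidate pool, its text generator iteratively mutates candidate sentences to climb toward the predicted signature, which is what produces the progressively better guesses described below.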
While it sounds like an elaborate chain of guessing games, the results were remarkably descriptive and mostly on the money. According to Nature, by analyzing the brain activity of a participant who watched a video of someone jumping from the top of a waterfall, the AI model initially predicted the string "spring flow," refined that into "above rapid falling water fall" on the tenth guess, and finally landed on "a person jumps over a deep water fall on a mountain ridge" on the 100th guess.
Overall, the generated text descriptions achieved 50 percent accuracy in identifying the correct video out of 100 possibilities. That's vastly higher than random chance, which would be around one percent, and impressive in the context of essentially divining coherent thoughts from brain patterns.
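The one-percent chance baseline quoted above is easy to sanity-check: a model that merely guessed among 100 candidate videos would pick the right one about 1 time in 100. The simulation below is purely illustrative; the study's actual evaluation procedure is more involved.

```python
# Back-of-the-envelope check on the chance baseline: guessing uniformly
# among 100 candidates should land on the correct video ~1% of the time.
import random

random.seed(42)
n_trials, n_candidates = 100_000, 100

hits = sum(random.randrange(n_candidates) == 0 for _ in range(n_trials))
chance = hits / n_trials
print(f"chance-level accuracy: {chance:.3f}")  # roughly 0.01
```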