So they program it to do something simple and non-threatening: make paper clips. They set it in motion and wait for the results — not knowing they've already doomed us all.
Before we get into the details of this galaxy-destroying blunder, it's worth looking at what superintelligent A.I. actually is, and when we might expect it. First, computing power continues to increase while getting cheaper; famed futurist Ray Kurzweil measures it in "calculations per second per $1,000," a number that continues to grow. If computing power maps to intelligence — a big "if," some have argued — so far we've only built technology on par with an insect brain. In a few years, maybe, we'll overtake a mouse brain. Around 2025, some predictions go, we might have a computer that's analogous to a human brain: a mind cast in silicon.
After that, things could get weird, because there's no reason to think artificial intelligence wouldn't surpass human intelligence — and likely very quickly. That superintelligence could arise within days, learning in ways far beyond those of humans. Nick Bostrom, an existential risk philosopher at the University of Oxford, has already declared, "Machine intelligence is the last invention that humanity will ever need to make."