So they program it to do something simple and non-threatening: make paper clips. They set it in motion and wait for the results — not knowing they've already doomed us all.
Before we get into the details of this galaxy-destroying blunder, it's worth looking at what superintelligent A.I. actually is, and when we might expect it. First, computing power continues to increase while getting cheaper; famed futurist Ray Kurzweil measures it in "calculations per second per $1,000," a number that continues to grow. If computing power maps to intelligence — a big "if," some have argued — so far we've only built technology on par with an insect brain. In a few years, maybe, we'll overtake a mouse brain. Around 2025, some predictions go, we might have a computer that's analogous to a human brain: a mind cast in silicon.
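Kurzweil's metric lends itself to a simple extrapolation: assume a steady doubling time for price-performance, then solve for the year a given brain-scale compute level is crossed. The Python sketch below shows that arithmetic only. The base value, doubling time, and brain-scale estimates are illustrative placeholders, not figures from this article, and the printed years shift dramatically with every one of them.

```python
import math

# A back-of-the-envelope sketch of the extrapolation described above.
# All numbers are illustrative assumptions: Kurzweil-style estimates
# often put the human brain around 1e16 calculations per second, but
# the insect and mouse values and the doubling time are placeholders.

BASE_YEAR = 2015
BASE_CPS_PER_1000_USD = 1e9   # assumed starting point for the trend line
DOUBLING_TIME_YEARS = 1.5     # assumed doubling time for price-performance

BRAIN_SCALES = {
    "insect brain": 1e9,
    "mouse brain": 1e12,
    "human brain": 1e16,
}

def year_reached(target_cps: float) -> float:
    """Year when calculations/sec per $1,000 first reaches target_cps,
    assuming steady exponential growth from the base year."""
    doublings_needed = math.log2(target_cps / BASE_CPS_PER_1000_USD)
    return BASE_YEAR + max(doublings_needed, 0.0) * DOUBLING_TIME_YEARS

for label, cps in BRAIN_SCALES.items():
    print(f"{label:>12}: ~{year_reached(cps):.0f}")
```

The point of the exercise is sensitivity, not prediction: the human-brain crossing sits seven orders of magnitude above the insect-brain starting point, so shaving a few months off the assumed doubling time moves the answer by a decade or more.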
After that, things could get weird: there's no reason to think artificial intelligence wouldn't surpass human intelligence, and it likely would do so very quickly. That superintelligence could arise within days, learning in ways far beyond those of humans. Nick Bostrom, an existential risk philosopher at the University of Oxford, has already declared, "Machine intelligence is the last invention that humanity will ever need to make."