Shane Legg discusses the 1997 origin and the later creation and popularization of the definition of general intelligence.
Shane Legg, co-founder and Chief AGI Scientist at Google DeepMind, defines levels of artificial general intelligence (AGI) as a spectrum rather than a single binary threshold.
NOTE: He mentions a 1997 paper on nanotechnology security that defined AGI. I, Brian Wang, was at the Foresight Institute conferences in 1996 and 1997 where super artificial intelligence was debated. Foresight constantly had mind-expanding debates about the limits of technology and going beyond those limits.
This debate in 2011 had earlier versions in 1996 and 1997.
AI was felt to be an inevitable result once molecular nanotechnology happened. It turned out that molecular nanotechnology advances lagged AI advances.
1. Minimal AGI about 2027
2. Full AGI 3-6 years after minimal AGI (2030-2033)
3. Superintelligence likely soon after Full AGI (roughly one to two years later)
Minimal AGI is an artificial agent that can perform all the kinds of cognitive tasks that typical humans can do. The bar is set at "typical" human performance to avoid being too low (where AI fails basic tasks humans easily handle) or too high (excluding many humans).
Current AI is uneven: superhuman in areas like multilingual fluency and general knowledge, but weak in continual learning and visual/spatial reasoning (e.g., understanding perspective in scenes or reasoning over graphs and diagrams).
Legg expects minimal AGI in a few years, guessing around two years from late 2025, i.e., 2027–2028.
Full AGI is an AI that can achieve the full spectrum of human cognitive capabilities, including extraordinary feats (inventing new physics theories, composing groundbreaking symphonies, or producing revolutionary literature like Einstein or Mozart).
Artificial Superintelligence (ASI) is AI with the generality of AGI but far beyond human cognitive limits in capability. Legg acknowledges no perfect definition exists (every attempt has flaws), but it vaguely means vastly superior general intelligence.
He views human intelligence as NOT the upper limit of what is possible.