So he helped found a research nonprofit, OpenAI, to cut a path to "safe" artificial general intelligence, as opposed to machines that pop our civilization like a pimple. Yes, Musk's very public fears may distract from other, more real problems in AI. But OpenAI just took a big step toward robots that better integrate into our world by not, well, breaking everything they pick up.
OpenAI researchers have built a system in which a simulated robotic hand learns to manipulate a block through trial and error, then seamlessly transfers that knowledge to a robotic hand in the real world. Incredibly, the system ends up "inventing" characteristic grasps that humans already commonly use to handle objects. Not in a quest to pop us like pimples—to be clear.
The researchers' trick is a technique called reinforcement learning. In a simulation, a hand, powered by a neural network, is free to experiment with different ways to grasp and fiddle with a block. "It's just doing random things and failing miserably all the time," says OpenAI engineer Matthias Plappert. "Then what we do is we give it a reward whenever it does something that slightly moves it toward the goal it actually wants to achieve, which is rotating the block."
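To make the idea concrete, here is a minimal, illustrative sketch of that trial-and-error loop: an agent starts out acting randomly and earns a small reward whenever an action nudges the block's orientation closer to a target rotation. This toy 1-D "block," the discretized angles, and the tabular Q-learning update are stand-ins chosen for brevity, not OpenAI's actual neural-network setup, and every name and parameter below is hypothetical.

```python
# Toy reward-shaping sketch: reward the agent only when a move rotates the
# block closer to the goal orientation. Tabular Q-learning on a 1-D block.
import random

N_ANGLES = 36            # block orientation discretized into 10-degree bins
TARGET = 9               # goal orientation (bin index), roughly 90 degrees
ACTIONS = [-1, 0, +1]    # rotate counter-clockwise, hold, rotate clockwise

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.2
Q = [[0.0] * len(ACTIONS) for _ in range(N_ANGLES)]

def dist(a, b):
    """Shortest angular distance between two orientation bins."""
    d = abs(a - b) % N_ANGLES
    return min(d, N_ANGLES - d)

for episode in range(5000):
    angle = random.randrange(N_ANGLES)
    for step in range(50):
        # Epsilon-greedy: mostly exploit what was learned, sometimes
        # "do random things and fail miserably".
        if random.random() < EPSILON:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q[angle][i])
        new_angle = (angle + ACTIONS[a]) % N_ANGLES
        # Shaped reward: positive only when the move gets closer to the goal.
        reward = dist(angle, TARGET) - dist(new_angle, TARGET)
        # Standard Q-learning update toward reward plus discounted future value.
        Q[angle][a] += ALPHA * (reward + GAMMA * max(Q[new_angle]) - Q[angle][a])
        angle = new_angle

# After training, the greedy policy rotates the block toward the target bin
# from any starting orientation.
print([max(range(len(ACTIONS)), key=lambda i: Q[s][i]) for s in range(N_ANGLES)])
```

The shaping term is the key point of Plappert's quote: instead of only rewarding the finished rotation, each small step toward the goal pays off, which turns aimless flailing into steadily improving behavior.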