Grow Unlimited Protein In A Trash Can. No Soil. No Sunlight. The 5 Min Setup.
Why Amish Gardens Never Have Pests -- The One "Stick" Method Corporations Hate.
7 MINUTE BUILD | Off-Grid DIY Underground Earthbag Pantry, Root Cellar, and Storm Shelter
Sedentary Adulthood Increases Biological Stress Levels by Middle Age
The Pentagon is looking for the SpaceX of the ocean.
Major milestone by 3D printing an artificial cornea using a specialized "bioink"...
Scientists at Rice University have developed an exciting new two-dimensional carbon material...
Footage recorded by #Meta's AI smart glasses is sent to offshore contractors...
ELON MUSK: "With something like Neuralink… we effectively become maybe one with the AI."
DARPA Launches New Program: Generative Optogenetics (GO)...
Anthropic Outpaces OpenAI Revenue 10X, Pentagon vs. Dario, Agents Rent Humans | #234
Ordering a Tiny House from China, what's the real COST?
New video may offer glimpse of secret F-47 fighter
Donut Lab's Solid-State Battery Charges Fast. But Experts Still Have Questions

So he helped found a research nonprofit, OpenAI, to help cut a path to "safe" artificial general intelligence, as opposed to machines that pop our civilization like a pimple. Yes, Musk's very public fears may distract from other, more real problems in AI. But OpenAI just took a big step toward robots that better integrate into our world by not, well, breaking everything they pick up.
OpenAI researchers have built a system in which a simulated robotic hand learns to manipulate a block through trial and error, then seamlessly transfers that knowledge to a robotic hand in the real world. Incredibly, the system ends up "inventing" characteristic grasps that humans already commonly use to handle objects. Not in a quest to pop us like pimples—to be clear.
The researchers' trick is a technique called reinforcement learning. In a simulation, a hand, powered by a neural network, is free to experiment with different ways to grasp and fiddle with a block. "It's just doing random things and failing miserably all the time," says OpenAI engineer Matthias Plappert. "Then what we do is we give it a reward whenever it does something that slightly moves it toward the goal it actually wants to achieve, which is rotating the block."
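The loop Plappert describes — random actions at first, then a small reward whenever an action nudges the system toward the goal — is the core of reinforcement learning. Here is a minimal toy sketch of that idea in Python, using tabular Q-learning on a block whose orientation the agent can nudge left or right. All names and parameters (the 12-way angle discretization, the reward values, the learning rates) are illustrative assumptions for this sketch, not OpenAI's actual setup, which used a simulated five-fingered hand and far larger neural-network policies.

```python
import random

# Toy reward-shaping sketch (illustrative, not OpenAI's actual system):
# a "hand" can nudge a block's orientation one notch left or right, and
# earns a reward whenever an action moves the block closer to the target
# orientation -- "slightly moves it toward the goal it wants to achieve."

N_ANGLES = 12          # discretized block orientations
GOAL = 9               # target orientation
ACTIONS = (-1, +1)     # rotate counter-clockwise / clockwise

def distance(angle):
    """Shortest rotational distance from angle to GOAL."""
    d = abs(angle - GOAL)
    return min(d, N_ANGLES - d)

def step(angle, action):
    """Apply a rotation and return (new_angle, shaped_reward)."""
    new_angle = (angle + action) % N_ANGLES
    # +1 for moving toward the goal, -1 for moving away
    reward = 1.0 if distance(new_angle) < distance(angle) else -1.0
    return new_angle, reward

# Tabular Q-learning with epsilon-greedy exploration
Q = {(s, a): 0.0 for s in range(N_ANGLES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2
random.seed(0)

for episode in range(500):
    angle = random.randrange(N_ANGLES)
    for _ in range(30):
        if random.random() < epsilon:
            action = random.choice(ACTIONS)   # "just doing random things"
        else:
            action = max(ACTIONS, key=lambda a: Q[(angle, a)])
        new_angle, reward = step(angle, action)
        best_next = max(Q[(new_angle, a)] for a in ACTIONS)
        Q[(angle, action)] += alpha * (reward + gamma * best_next
                                       - Q[(angle, action)])
        angle = new_angle

# After training, follow the greedy policy from an arbitrary start angle;
# it should rotate the block to the goal orientation.
angle = 3
for _ in range(N_ANGLES):
    if angle == GOAL:
        break
    action = max(ACTIONS, key=lambda a: Q[(angle, a)])
    angle = step(angle, action)[0]
print(angle)
```

Early on, the agent really is "failing miserably all the time": its Q-table is empty, so its actions are effectively random. The shaped reward is what turns that flailing into learning, since even a random rotation that happens to reduce the distance to the goal gets reinforced.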