1000s Evacuated As Massive Wall Of Water Surges Through Ukraine After Major Dam 'Blown Up'
Journalists Are Asking Ukrainian Soldiers To Hide Their Nazi Patches, NYT Admits
Comey: Imagine A "Retribution Presidency" Where The President Ordered The...
El Salvador Unleashes "Volcano Energy" With 241 Megawatt Planned Bitcoin Mining Operation
Newly Developed Humanoid Robot Warns About AI Creating "Oppressive Society"
Scientists develop mega-thin solar cells that could be shockingly easy to produce:
High-tech pen paints healing gel right into wounds
EG4 18K after 1 Megawatt Hour! Is it worth the $$$?
Terminator-style Synthetic Covering for Robots Mimics Human Skin and Heals Itself
The Death of 2FA (2 Factor Authentication)? + Q&A
High-speed orbital data link drags space communications out of the '60s
WORLD'S FIRST 3D PRINTED CLAY HOUSES
Smaller, cheaper, safer: The next generation of nuclear power, explained
The agent, which DeepMind refers to as Gato, works as a multi-modal, multi-task, multi-embodiment generalist policy. The same network with the same weights can play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens. In this report, DeepMind describes the model and the data, and documents the current capabilities of Gato.
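To make that concrete, the sketch below shows one way the different modalities could be flattened into a single token stream, loosely following the tokenization described in the Gato report (a SentencePiece text vocabulary followed by a bank of bins for mu-law-companded continuous values). The constants, function names and exact bin arithmetic here are assumptions for illustration, not DeepMind's actual implementation.

```python
import math

# Assumed constants, loosely following the Gato report's description:
# 32,000 SentencePiece text tokens, plus 1,024 bins for discretized
# continuous values, appended after the text vocabulary.
TEXT_VOCAB_SIZE = 32_000
NUM_BINS = 1_024
MU, M = 100.0, 256.0  # mu-law parameters (assumed values)

def mu_law_encode(x: float) -> float:
    """Compand a continuous value toward [-1, 1] with mu-law scaling."""
    return math.copysign(math.log(abs(x) * MU + 1.0) / math.log(M * MU + 1.0), x)

def continuous_to_token(x: float) -> int:
    """Map a continuous observation/action value to a discrete token id."""
    squashed = max(-1.0, min(1.0, mu_law_encode(x)))
    bin_index = int((squashed + 1.0) / 2.0 * (NUM_BINS - 1))
    return TEXT_VOCAB_SIZE + bin_index  # continuous bins sit after the text vocab

def discrete_action_to_token(a: int) -> int:
    """Discrete actions (e.g. Atari button presses) reuse the same shifted range."""
    return TEXT_VOCAB_SIZE + a

# A single timestep becomes part of one flat token sequence: observation
# tokens (image patches, text, proprioception) followed by action tokens.
example_sequence = [continuous_to_token(v) for v in (0.03, -0.71, 0.25)]
example_sequence.append(discrete_action_to_token(4))
print(example_sequence)
```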
A generalist agent. Gato can sense and act with different embodiments across a wide range of environments using a single neural network with the same set of weights. Gato was trained on 604 distinct tasks with varying modalities, observations and action specifications.
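As an illustration of what "varying modalities, observations and action specifications" could look like in code, here is a hypothetical task-descriptor and batch-mixing sketch. The `TaskSpec` fields, example task names and the sampling scheme are assumptions for illustration; the report does not publish such an interface.

```python
from dataclasses import dataclass
from typing import Sequence
import random

# Hypothetical descriptor for one of the many training tasks; the single
# shared network sees every task as a flat token sequence, so the only
# per-task information is how to tokenize observations and actions.
@dataclass
class TaskSpec:
    name: str
    observation_modalities: Sequence[str]  # e.g. ("rgb", "proprioception", "text")
    action_space: str                      # e.g. "discrete(18)" or "continuous(7)"

TASKS = [
    TaskSpec("atari_game", ("rgb",), "discrete(18)"),
    TaskSpec("image_captioning", ("rgb", "text"), "text"),
    TaskSpec("block_stacking", ("rgb", "proprioception"), "continuous(7)"),
]

def sample_training_mixture(tasks, batch_size=4):
    """Mix episodes from different tasks and embodiments into one batch,
    all consumed by the same network with the same weights."""
    return [random.choice(tasks).name for _ in range(batch_size)]

print(sample_training_mixture(TASKS))
```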
Transformer sequence models are effective as multi-task, multi-embodiment policies, including for real-world text, vision and robotics tasks. They also show promise in few-shot, out-of-distribution task learning. In the future, such models could be used as a default starting point for learning new behaviors via prompting or fine-tuning, rather than being trained from scratch.
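A minimal sketch of the prompting idea, assuming a frozen pretrained sequence model: a demonstration episode from the new task is prepended as the prompt, and the model then continues autoregressively, emitting action tokens for the current observations. `pretrained_model`, the token helpers and the example token ids below are placeholders, not the real Gato API.

```python
def pretrained_model(context_tokens):
    """Stand-in for the frozen generalist policy: predicts the next token id."""
    return 0  # placeholder prediction

def rollout_with_prompt(demo_tokens, observe, tokenize_obs, detokenize_action, steps=10):
    """Condition on a demonstration episode, then act in the new task."""
    context = list(demo_tokens)  # demonstration episode used as the prompt
    actions = []
    for _ in range(steps):
        context += tokenize_obs(observe())        # append current observation tokens
        action_token = pretrained_model(context)  # model proposes the next action token
        context.append(action_token)
        actions.append(detokenize_action(action_token))
    return actions

# Dummy environment pieces just to exercise the sketch:
acts = rollout_with_prompt(
    demo_tokens=[32001, 32004, 31998],
    observe=lambda: 0.0,
    tokenize_obs=lambda obs: [32001],
    detokenize_action=lambda tok: tok,
    steps=3,
)
print(acts)
```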
Given scaling-law trends, performance across all tasks, including dialogue, will increase with scale in parameters, data and compute. Better hardware and network architectures will allow bigger models to be trained while maintaining real-time robot control capability. By scaling up and iterating on this same basic approach, DeepMind can build a useful general-purpose agent.