Google has curated a set of YouTube clips to help machines learn how humans exist in the world. The AVAs, or "atomic visual actions," are three-second clips of people doing everyday things like drinking water, taking a photo, playing an instrument, hugging, standing or cooking.
Each clip labels the person the AI should focus on, along with a description of their pose and whether they're interacting with an object or another human.
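An annotation like the one described above ties a focal person in a clip to an action, a pose, and interaction flags. A minimal sketch of how such a record might be represented in Python (field names are illustrative, not the actual AVA schema):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical record for one AVA-style annotation. Field names are
# illustrative assumptions and do not reflect the real dataset format.
@dataclass
class AvaAnnotation:
    video_id: str                        # source YouTube clip identifier
    start_sec: float                     # start of the three-second segment
    person_box: Tuple[float, float, float, float]  # (x1, y1, x2, y2) of the focal person
    action_label: str                    # one of the 80 atomic visual actions
    pose: Optional[str] = None           # e.g. "standing", "sitting"
    interacts_with_object: bool = False  # person is handling an object
    interacts_with_person: bool = False  # person is interacting with another human

# Example: a clip of someone standing and drinking water
ann = AvaAnnotation(
    video_id="abc123",
    start_sec=15.0,
    person_box=(0.1, 0.2, 0.5, 0.9),
    action_label="drink",
    pose="standing",
    interacts_with_object=True,
)
print(ann.action_label)
```

Keeping the clips short and the labels atomic, as Google does, sidesteps the ambiguity of longer composite activities.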
"Despite exciting breakthroughs made over the past years in classifying and finding objects in images, recognizing human actions still remains a big challenge," Google wrote in a recent blog post describing the new dataset. "This is due to the fact that actions are, by nature, less well-defined than objects in videos."
The catalog of 57,600 clips highlights only 80 actions but labels more than 96,000 humans. Google pulled the clips from popular movies, emphasizing that it drew from a "variety of genres and countries of origin."