Google has curated a set of YouTube clips to help machines learn how humans exist in the world. The AVAs, or "atomic visual actions," are three-second clips of people doing everyday things like drinking water, taking a photo, playing an instrument, hugging, standing or cooking.
Each clip labels the person the AI should focus on, along with a description of their pose and whether they're interacting with an object or another human.
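As a rough illustration of what such an annotation might look like, each labeled example could tie a clip to the person in focus, an action label, and an interaction flag. This is a minimal sketch; the field names below are hypothetical and mirror only the description above, not Google's published annotation schema.

```python
from dataclasses import dataclass

@dataclass
class ClipAnnotation:
    """One labeled person in a three-second clip (hypothetical schema)."""
    video_id: str        # identifier of the source clip
    timestamp: float     # middle frame of the 3-second segment, in seconds
    person_box: tuple    # (x1, y1, x2, y2) normalized box around the person in focus
    action: str          # e.g. "drinking", "taking a photo", "hugging"
    interacts_with: str  # "object", "person", or "none"

# Example record for a clip of someone drinking water
example = ClipAnnotation(
    video_id="abc123",
    timestamp=902.0,
    person_box=(0.21, 0.10, 0.58, 0.95),
    action="drinking",
    interacts_with="object",
)
print(example.action, example.interacts_with)
```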
"Despite exciting breakthroughs made over the past years in classifying and finding objects in images, recognizing human actions still remains a big challenge," Google wrote in a recent blog post describing the new dataset. "This is due to the fact that actions are, by nature, less well-defined than objects in videos."
The catalog of 57,600 clips highlights only 80 actions but labels more than 96,000 humans. Google pulled the clips from popular movies, emphasizing that it drew from a "variety of genres and countries of origin."