Google has curated a set of YouTube clips to help machines learn how humans exist in the world. The AVAs, or "atomic visual actions," are three-second clips of people doing everyday things like drinking water, taking a photo, playing an instrument, hugging, standing or cooking.
Each clip labels the person the AI should focus on, along with a description of their pose and whether they're interacting with an object or another human.
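To make that description concrete, here is a minimal, hypothetical Python sketch of what a single annotation record along these lines could look like. The class and field names are assumptions chosen for illustration; they are not the dataset's actual schema or file format.

```python
from dataclasses import dataclass

@dataclass
class AvaClipAnnotation:
    """One labeled person in a three-second clip (illustrative only)."""
    video_id: str              # source video the clip was cut from
    clip_start_sec: float      # start time of the three-second segment
    person_box: tuple          # (x1, y1, x2, y2) bounding the labeled person
    action_label: str          # one of the ~80 atomic actions, e.g. "drink"
    pose: str                  # short description of the person's pose
    interacts_with_object: bool = False
    interacts_with_person: bool = False

# Example record for a clip of someone drinking water while standing.
example = AvaClipAnnotation(
    video_id="movie_001",
    clip_start_sec=902.0,
    person_box=(0.10, 0.20, 0.45, 0.95),
    action_label="drink",
    pose="standing",
    interacts_with_object=True,
)
print(example)
```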
"Despite exciting breakthroughs made over the past years in classifying and finding objects in images, recognizing human actions still remains a big challenge," Google wrote in a recent blog post describing the new dataset. "This is due to the fact that actions are, by nature, less well-defined than objects in videos."
The catalog of 57,600 clips covers only 80 actions but labels more than 96,000 humans. Google pulled the clips from popular movies, emphasizing that it drew from a "variety of genres and countries of origin."