Google has curated a set of YouTube clips to help machines learn how humans exist in the world. The AVAs, or "atomic visual actions," are three-second clips of people doing everyday things like drinking water, taking a photo, playing an instrument, hugging, standing or cooking.
Each clip labels the person the AI should focus on, along with a description of their pose and whether they're interacting with an object or another human.
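To make that structure concrete, here is a minimal sketch of how one such per-person annotation could be represented and parsed in Python. The field names, the CSV column order, and the load_annotations helper are assumptions modeled on the layout of AVA's publicly released annotation files, not code published by Google.

```python
import csv
from dataclasses import dataclass

# Hypothetical record mirroring the kind of annotation the article describes:
# one row per (person, action) pair, centered on a three-second clip.
@dataclass
class AvaAnnotation:
    video_id: str    # YouTube video the clip was cut from
    timestamp: float # middle frame of the 3-second clip, in seconds
    box: tuple       # (x1, y1, x2, y2) person bounding box, normalized to 0-1
    action_id: int   # one of the ~80 atomic action classes
    person_id: int   # stable id so one person can carry several labels

def load_annotations(path):
    """Parse a CSV of annotations into AvaAnnotation records.

    The column order is an assumption based on AVA's released CSV files:
    video_id, timestamp, x1, y1, x2, y2, action_id, person_id.
    """
    records = []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            video_id, ts, x1, y1, x2, y2, action_id, person_id = row
            records.append(AvaAnnotation(
                video_id=video_id,
                timestamp=float(ts),
                box=(float(x1), float(y1), float(x2), float(y2)),
                action_id=int(action_id),
                person_id=int(person_id),
            ))
    return records
```

Because each row carries a single action label, grouping records by (video_id, timestamp, person_id) would recover the full set of things one person is doing in one clip, such as standing while talking to another person.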
"Despite exciting breakthroughs made over the past years in classifying and finding objects in images, recognizing human actions still remains a big challenge," Google wrote in a recent blog post describing the new dataset. "This is due to the fact that actions are, by nature, less well-defined than objects in videos."
The catalog of 57,600 clips covers only 80 action classes but labels more than 96,000 humans. Google pulled the clips from popular movies, emphasizing that it drew from a "variety of genres and countries of origin."