Google has curated a set of YouTube clips to help machines learn how humans exist in the world. The AVAs, or "atomic visual actions," are three-second clips of people doing everyday things like drinking water, taking a photo, playing an instrument, hugging, standing or cooking.
Each clip labels the person the AI should focus on, along with a description of their pose and whether they're interacting with an object or another human.
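An annotation of this kind naturally maps to a small record per labeled person: which clip, where they are, and what atomic action they are performing. The sketch below is purely illustrative — the field names, coordinate convention, and sample values are assumptions, not the actual AVA file format:

```python
from dataclasses import dataclass

@dataclass
class AvaAnnotation:
    """One labeled person in a three-second clip (illustrative schema)."""
    video_id: str    # source YouTube clip (hypothetical ID)
    timestamp: float  # seconds into the video
    bbox: tuple      # (x1, y1, x2, y2), assumed normalized coordinates
    action: str      # one of the 80 atomic action labels
    person_id: int   # distinguishes people sharing a frame

def people_performing(annotations, action):
    """Return the annotations whose label matches the given action."""
    return [a for a in annotations if a.action == action]

# Sample records (made-up values for demonstration)
clips = [
    AvaAnnotation("vid_001", 902.0, (0.1, 0.2, 0.4, 0.9), "drink", 0),
    AvaAnnotation("vid_001", 902.0, (0.5, 0.1, 0.8, 0.9), "stand", 1),
    AvaAnnotation("vid_002", 15.0, (0.2, 0.2, 0.6, 0.8), "play instrument", 0),
]

print(len(people_performing(clips, "drink")))
```

Because each record pins an action to one person at one moment, a model trained on the dataset can learn to localize who is doing what, rather than assigning a single label to the whole clip.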
"Despite exciting breakthroughs made over the past years in classifying and finding objects in images, recognizing human actions still remains a big challenge," Google wrote in a recent blog post describing the new dataset. "This is due to the fact that actions are, by nature, less well-defined than objects in videos."
The catalog of 57,600 clips highlights only 80 actions but labels more than 96,000 humans. Google pulled the clips from popular movies, emphasizing that it drew from a "variety of genres and countries of origin."