In the not-too-distant future, we'll have plenty of reasons to want to protect ourselves from facial detection software. Even now, companies from Facebook to the NFL and Pornhub already use this technology to identify people, sometimes without their consent. Hell, even our lifelines, our precious phones, now use our own faces as a password.
But as fast as this technology develops, machine learning researchers are working on ways to foil it. As described in a new study, researchers at Carnegie Mellon University and the University of North Carolina at Chapel Hill developed a robust, scalable, and inconspicuous way to fool facial recognition algorithms into not recognizing a person.
This paper builds on the same group's work from 2016, only this time the method is more robust and inconspicuous. It works in a wide variety of positions and scenarios, and doesn't look too much like the person is wearing an AI-tricking device on their face. The glasses are also scalable: the researchers developed five pairs of adversarial glasses that can be used by 90 percent of the population, as represented by the Labeled Faces in the Wild dataset and Google's FaceNet model used in the study.
The method has gotten so good at tricking the system that the researchers made a serious suggestion to the TSA: since facial recognition is already being used in high-security public places like airports, the agency should consider requiring people to remove physical artifacts (hats, jewelry, and of course eyeglasses) before facial recognition scans.
It's a similar concept to how UC Berkeley researchers fooled facial recognition technology into thinking a glasses-wearer was someone else, but in that study they toyed with the AI algorithm itself to "poison" it. In this new paper, the researchers don't fiddle with the algorithm they're trying to fool at all; instead, they rely on manipulating the glasses to fool the system. It's more like the 3D-printed adversarial objects developed by MIT researchers, which tricked AI into thinking a turtle was a rifle by subtly tweaking the texture printed on the turtle's shell. Only this time, the object tricks the algorithm into thinking one person is another, or not a person at all.
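To make that distinction concrete, here is a minimal, hypothetical sketch of an evasion attack in this spirit: a perturbation optimized by gradient descent but confined to a glasses-shaped mask, run against a fixed face classifier that the attacker never modifies. The model, mask, image, and parameter values are illustrative stand-ins, not the authors' actual pipeline, which also enforces constraints like printability and smoothness so the frames can exist as physical objects.

```python
# Illustrative "dodging" attack: perturb only the glasses-shaped region of a photo
# so a fixed face classifier stops predicting the wearer's true identity.
# This is a simplified sketch, not the CMU/UNC authors' actual method.
import torch
import torch.nn.functional as F

def dodge_with_glasses(model, image, mask, true_label, steps=300, lr=0.01):
    """Optimize a perturbation restricted to `mask` (1 = glasses pixels, 0 elsewhere)
    that lowers the model's confidence in `true_label`. `image` is a CxHxW tensor in [0, 1]."""
    model.eval()
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        # Only pixels under the glasses mask are allowed to change.
        adv = torch.clamp(image + delta * mask, 0.0, 1.0)
        logits = model(adv.unsqueeze(0))
        # Untargeted attack: maximize the loss on the true identity
        # (minimizing the negative cross-entropy pushes the prediction away from it).
        loss = -F.cross_entropy(logits, torch.tensor([true_label]))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return torch.clamp(image + delta.detach() * mask, 0.0, 1.0)

# Example usage (hypothetical names): `face_net` is any identity classifier,
# `glasses_mask` is a binary tensor shaped like `photo` marking the frame pixels.
# adv_photo = dodge_with_glasses(face_net, photo, glasses_mask, true_label=42)
```

The key point the sketch illustrates is that the model's weights never change; only the input is manipulated, which is what separates this kind of evasion attack from the "poisoning" approach described above.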