In the not-too-distant future, we'll have plenty of reasons to want to protect ourselves from facial detection software. Even now, companies from Facebook to the NFL to Pornhub already use this technology to identify people, sometimes without their consent. Hell, even our lifelines, our precious phones, now use our own faces as passwords.
But as fast as this technology develops, machine learning researchers are working on ways to foil it. As described in a new study, researchers at Carnegie Mellon University and the University of North Carolina at Chapel Hill developed a robust, scalable, and inconspicuous way to fool facial recognition algorithms into not recognizing a person.
This paper builds on the same group's work from 2016, only this time, it's more robust and inconspicuous. The method works in a wide variety of positions and scenarios, and doesn't look too much like the person's wearing an AI-tricking device on their face. The glasses are also scalable: The researchers developed five pairs of adversarial glasses that can be used by 90 percent of the population, as represented by the Labeled Faces in the Wild and Google FaceNet datasets used in the study.
The glasses have gotten so good at tricking the system that the researchers made a serious suggestion to the TSA: since facial recognition is already being used in high-security public places like airports, the agency should consider requiring people to remove physical artifacts (hats, jewelry, and of course eyeglasses) before facial recognition scans.
It's a similar concept to how UC Berkeley researchers fooled facial recognition technology into thinking a glasses-wearer was someone else, but in that study, they tampered with the AI algorithm itself to "poison" it. In this new paper, the researchers don't touch the algorithm they're trying to fool at all. Instead, they rely on manipulating the pattern on the glasses to fool the system. It's closer to the 3D-printed adversarial objects developed at MIT, which tricked an AI into classifying a turtle as a rifle by tweaking the texture on its surface. Only this time, the attack tricks the algorithm into thinking one person is another, or not a person at all.
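For a concrete picture of what "manipulating the glasses, not the algorithm" means, here is a minimal sketch of that style of evasion attack in PyTorch. It is not the authors' actual method or code: the `model`, `glasses_mask`, and `target_class` names are placeholder assumptions, and the real work optimizes printable, physically robust patterns rather than raw pixel values.

```python
# Minimal sketch (not the paper's implementation): optimize a perturbation
# confined to a glasses-shaped mask over a face image so that a fixed face
# recognition model mislabels the wearer. The model's weights never change;
# only the input is manipulated.
import torch
import torch.nn.functional as F

def adversarial_glasses(model, face, glasses_mask, target_class,
                        steps=300, lr=0.01):
    """Impersonation attack: push `model` to label `face` as `target_class`
    by changing only the pixels under `glasses_mask`.

    face:         (1, 3, H, W) tensor with values in [0, 1]
    glasses_mask: (1, 1, H, W) tensor of {0, 1}, 1 where the glasses sit
    """
    perturbation = torch.zeros_like(face, requires_grad=True)
    optimizer = torch.optim.Adam([perturbation], lr=lr)

    for _ in range(steps):
        # Apply the perturbation only inside the glasses region.
        adv_face = torch.clamp(face + perturbation * glasses_mask, 0.0, 1.0)
        logits = model(adv_face)

        # Minimize loss on the impersonation target. For "dodging" (not
        # being recognized at all), you would instead maximize the loss
        # on the wearer's true identity.
        loss = F.cross_entropy(logits, torch.tensor([target_class]))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return torch.clamp(face + perturbation.detach() * glasses_mask, 0.0, 1.0)
```

The key design point the article describes is visible in the sketch: the attack only needs gradients (or queries) from the target model to shape the glasses pattern, which is why removing the glasses, rather than patching the algorithm, is the countermeasure the researchers suggest.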