Imagine you're a parent locked in a custody dispute, and a video emerges of you abusing your child; or that you're a police officer, and you're seen on video brutalizing a suspect; or that you're a teacher "caught" on video beating a young student; or that a video goes public of your favorite politician engaging in serious sexual misconduct. Now imagine that the guilty party is actually the person who made the video — because it looks real, but isn't.
Welcome to the brave new world of "deepfake."
It has been said that seeing is believing. But this may change, at least regarding online content, with "perfectly real" faked videos, which Hao Li, an associate professor of computer science, says are perhaps just months away.
As the International Business Times reports, "Morphed images and videos that appear 'perfectly real' in everyday life will be accessible to [average] people within six months or a year, computer graphics entrepreneur Hao Li has said. The revolutionary technique may bother the fact checkers but for animation films it may be a game changer soon."
"'In some ways, we already know how to do it, but it is only a matter of training with more data and implementation' to make manipulated graphics appear real, the deepfake pioneer, who is of Taiwanese descent, said," the site also reports.
"The technology of 'deepfake' — the process to manipulate videos or digital representation using computers and machine-learning software to make them appear real, even though they are not — has given rise to concerns about how these creations could cause confusion and propagate misinformation, especially in the context of global politics," the Times continues.
In fact, online "disinformation through targeted social-media campaigns and apps such as WhatsApp has already roiled elections around the world," CNBC adds.
"'It's still very easy, you can tell from the naked eye most of the deepfakes,' Li, an associate professor of computer science at the University of Southern California, said on 'Power Lunch,'" CNBC also tells us.
"'But there also are examples that are really, really convincing,' Li said, adding those require 'sufficient effort' to create."
Li had previously predicted, at a Massachusetts Institute of Technology conference just last week, that perfect deepfakes were just "two to three years" away. But "Li said recent developments, in particular the emergence of the wildly popular Chinese app Zao and the growing research focus, have led him to 'recalibrate' his timeline," CNBC further reports.
"Zao is a face-swapping app that allows users to take a single photograph and insert themselves into popular TV shows and movies. It is among China's most popular apps, although significant privacy concerns have arisen," the site further relates.
"'Soon, it's going to get to the point where there is no way that we can actually detect [deepfakes] anymore, so we have to look at other types of solutions,' Li said on 'Power Lunch,'" CNBC continues.
Li said that the problem with deepfakes isn't the technology's existence, but that it can be used for evil as well as good. While it's true that any tool — whether nukes or guns or robots or the Internet — can be used for good or ill and that it's too often the latter given man's fallen nature, the technology-specific problem wouldn't exist if the technology didn't. The point, however, is that since it will be developed by someone, all we can do is try to stay a step ahead of the miscreants.
Thus does Li say that academic research is imperative. "'If you want to be able to detect deepfakes, you have to also see what the limits are,' Li said," CNBC also writes. "'If you need to build A.I. frameworks that are capable of detecting things that are extremely real, those have to be trained using these types of technologies, so in some ways it's impossible to detect those if you don't know how they work.'"
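Li's argument — that a detector can only learn to flag fakes it has been trained on, so research must keep pace with the generators themselves — is the basic logic of supervised deepfake detection. A loose, purely illustrative sketch of that idea follows; the "artifact score" feature, its distributions, and every name here are hypothetical toy stand-ins, not a real detector:

```python
import random

random.seed(0)

# Toy stand-in for deepfake detection: each image is reduced to a single
# "artifact score" feature. In this toy model, generated images skew
# toward higher scores than real ones (a hypothetical assumption).
def make_samples(n, fake):
    center = 0.7 if fake else 0.3  # hypothetical class means
    return [(random.gauss(center, 0.1), fake) for _ in range(n)]

# Training data must include the generator's own output — per Li's point,
# you cannot learn to recognize fakes you have never seen.
train = make_samples(500, fake=True) + make_samples(500, fake=False)

# Learn the simplest possible decision rule: a threshold halfway
# between the two class means observed in training.
fake_mean = sum(x for x, f in train if f) / 500
real_mean = sum(x for x, f in train if not f) / 500
threshold = (fake_mean + real_mean) / 2

def is_fake(score):
    """Flag a sample as fake if its artifact score exceeds the threshold."""
    return score > threshold

# Evaluate on fresh samples drawn from the same toy distributions.
test = make_samples(200, fake=True) + make_samples(200, fake=False)
accuracy = sum(is_fake(x) == f for x, f in test) / len(test)
print(f"toy detector accuracy: {accuracy:.2f}")
```

The sketch only works because the training set contains the generator's output; withhold the fake samples and there is nothing to fit the threshold against — which is Li's case for studying the generation techniques themselves.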