Imagine you're a parent locked in a custody dispute, and a video emerges of you abusing your child; or that you're a police officer, and you're seen on video brutalizing a suspect; or that you're a teacher "caught" on video beating a young student; or that a video goes public of your favorite politician engaging in serious sexual misconduct. Now imagine that the guilty party is actually the person who made the video — because it looks real, but isn't.
Welcome to the brave new world of "deepfake."
It has been said that seeing is believing. But that may change, at least regarding online content, with the arrival of "perfectly real" faked videos — which Hao Li, an associate professor of computer science, says are perhaps just months away.
As the International Business Times reports, "Morphed images and videos that appear 'perfectly real' in everyday life will be accessible to [average] people within six months or a year, computer graphics entrepreneur Hao Li has said. The revolutionary technique may bother the fact checkers but for animation films it may be a game changer soon."
"'In some ways, we already know how to do it, but it is only a matter of training with more data and implementation' to make manipulated graphics appear real, the Taiwanese descent deepfake pioneer said," the site also reports.
"The technology of 'deepfake' — the process to manipulate videos or digital representation using computers and machine-learning software to make them appear real, even though they are not — has given rise to concerns about how these creations could cause confusion and propagate misinformation, especially in the context of global politics," the Times continues.
In fact, online "disinformation through targeted social-media campaigns and apps such as WhatsApp has already roiled elections around the world," CNBC adds.
"'It's still very easy, you can tell from the naked eye most of the deepfakes,' Li, an associate professor of computer science at the University of Southern California, said on 'Power Lunch,'" CNBC also tells us.
"'But there also are examples that are really, really convincing,' Li said, adding those require 'sufficient effort' to create."
Li had previously predicted, at a Massachusetts Institute of Technology conference just last week, that perfect deepfakes were just "two to three years" away. But "Li said recent developments, in particular the emergence of the wildly popular Chinese app Zao and the growing research focus, have led him to 'recalibrate' his timeline," CNBC further reports.
"Zao is a face-swapping app that allows users to take a single photograph and insert themselves into popular TV shows and movies. It is among China's most popular apps, although significant privacy concerns have arisen," the site further relates.
"'Soon, it's going to get to the point where there is no way that we can actually detect [deepfakes] anymore, so we have to look at other types of solutions,' Li said on 'Power Lunch,'" CNBC continues.
Li said that the problem with deepfakes isn't the technology's existence, but that it can be used for evil as well as good. While it's true that any tool — whether nukes or guns or robots or the Internet — can be used for good or ill and that it's too often the latter given man's fallen nature, the technology-specific problem wouldn't exist if the technology didn't. The point, however, is that since it will be developed by someone, all we can do is try to stay a step ahead of the miscreants.
Thus does Li say that academic research is imperative. "'If you want to be able to detect deepfakes, you have to also see what the limits are,' Li said," CNBC also writes. "'If you need to build A.I. frameworks that are capable of detecting things that are extremely real, those have to be trained using these types of technologies, so in some ways it's impossible to detect those if you don't know how they work.'"
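Li's point — that a detector can only learn to spot fakes if it is trained on the generators' own output — can be illustrated with a toy sketch. Everything below is a hypothetical stand-in, not any real deepfake system: the "features" are synthetic random vectors playing the role of embeddings extracted from genuine frames and from generated frames, and the detector is a bare-bones logistic-regression classifier trained by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in features: in practice these would be embeddings
# extracted from genuine video frames and from frames produced by a
# deepfake generator. Here they are just two synthetic distributions.
real = rng.normal(loc=0.0, scale=1.0, size=(500, 8))
fake = rng.normal(loc=1.5, scale=1.0, size=(500, 8))

X = np.vstack([real, fake])
y = np.array([0] * 500 + [1] * 500)  # 0 = genuine, 1 = generated

# A minimal detector: logistic regression fit by plain gradient descent.
w = np.zeros(8)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(fake)
    w -= lr * (X.T @ (p - y)) / len(y)       # gradient step on weights
    b -= lr * np.mean(p - y)                 # gradient step on bias

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = np.mean(pred == y)
print(f"training accuracy: {accuracy:.2f}")
```

The sketch also shows why Li says detection gets harder as generators improve: move the `fake` distribution's mean toward the `real` one and the classifier's accuracy collapses toward chance, which is exactly the "no way that we can actually detect" scenario he warns about.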