Last week, deepfake expert and associate professor of computer science at the University of Southern California Hao Li predicted it would be two to three years before deepfakes were undetectable. Now he's revised that estimate to 12 months in an interview with CNBC, showing just how fast this technology is moving.
"Soon, it's going to get to the point where there is no way that we can actually detect [deepfakes] anymore, so we have to look at other types of solutions," Li told Power Lunch.
Everyday people will be able to make deepfakes that appear "perfectly real" within "half-a-year to a year," he added. While there are certainly positive uses for this in fashion and entertainment, as Li pointed out, there's also opportunity for 8chan mischief and for every politician in the world to deny video footage is real.
In a follow-up email after the interview, CNBC enquired as to why Li had revised his timeframe by two years in just a week. The answer? Increased research and the incredible Zao face-swapping app have made him "recalibrate" the timeline.
For Li, deepfake technology itself isn't the problem: it's people using the technology to deceive or harm. To paraphrase the old slogan of the gun lobby: deepfakes don't fool people, people fool people.
So how can we sidestep the worst consequences? "If you want to be able to detect deepfakes, you have to also see what the limits are," Li said. "If you need to build A.I. frameworks that are capable of detecting things that are extremely real, those have to be trained using these types of technologies, so in some ways it's impossible to detect those if you don't know how they work."
In other words: to catch a deepfake, you must think like a deepfake. Well, kind of.