AI-based face and voice swapping is among the most notorious trends gripping the internet. The rise of deepfakes has been so alarming in recent months that many US and UK-based analysts fear it could undermine trust in AI, with unprecedented consequences. While data scientists have yet to figure out measures to counter deepfake-driven political conspiracies and cyber fraud, the question for many tech companies is what they are doing to prevent the average internet user from getting caught in this web of malicious identity theft.
According to a Deeptrace report, “the commoditization of tools for video editing and synthesis is contributing to the spread of deepfakes. A community of developers is growing around open source projects for creating these tools.”
But, First, What Is a Deepfake?
Deepfakes are artificially generated images, voice clips, and videos that carry a message the depicted person never actually said or endorsed. Most often, a face, video, or audio track is dubbed or superimposed onto a recognizable celebrity or brand. The motive behind generating deepfakes is to wage a war on ‘truth’, with malicious intent to disrupt internet trends.
Recently, deepfakes have confronted online audiences with malicious messages imposed on high-profile figures such as Donald Trump, Barack Obama, and Mark Zuckerberg, in most cases shown making inflammatory statements. For AI engineers, the challenges are steep.
AI and Deep Learning Are Pushing Deepfakes into the Digital Mainstream
Not long ago, fake imagery was the handiwork of a few CGI experts. Today, AI graphic designers and image-swapping artists have taken over CGI and animation to spread lies. The programming scripts are easily available online, distributed under open-source community norms similar to those of R and Python.
With the advent of AI and deep learning techniques for image processing, face recognition, voice modulation, and other human-simulation tools, deepfake synthesis is no longer difficult. AI is used to manipulate and distribute inappropriate digital assets all over the internet at an unimaginable pace. The virulence of these activities allows groups to spread their propaganda across the web, social media, email, and prime video advertising platforms.
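At the core of most face-swap tools is a simple idea: one shared encoder learns features common to two people, while each person gets their own decoder; swapping means routing one person’s encoded face through the other person’s decoder. The toy sketch below illustrates that routing with plain linear maps (all names, dimensions, and random weights are illustrative, not taken from any real tool):

```python
# Toy sketch of the shared-encoder / two-decoder scheme behind face swaps.
# Weights are random stand-ins; a real system learns them from photos.
import numpy as np

rng = np.random.default_rng(0)

DIM, LATENT = 64, 8  # flattened "face" size and latent code size (illustrative)

# One shared encoder captures features common to both identities...
encoder = rng.normal(size=(DIM, LATENT))
# ...while each identity has its own decoder.
decoder_a = rng.normal(size=(LATENT, DIM))
decoder_b = rng.normal(size=(LATENT, DIM))

def encode(face):
    return face @ encoder

def reconstruct(face, decoder):
    return encode(face) @ decoder

face_a = rng.normal(size=(1, DIM))  # stand-in for a frame of person A

# Training pairs encode+decoder_a with A's photos and encode+decoder_b
# with B's photos. The "swap" at inference time is simply routing A's
# latent code through B's decoder:
swapped = reconstruct(face_a, decoder_b)
print(swapped.shape)  # (1, 64)
```

Because the encoder is shared, the latent code captures pose and expression, and the identity is supplied by whichever decoder renders it back into an image.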
This video explains how deepfakes are generated and shared online.
Reddit was among the first to identify the role of AI and deep learning in deepfake generation. It enacted a community rule banning the posting of sexual imagery of a person without their consent. However, the roots of deepfakes had already spread across the mobile and in-app ecosystem by the time this ban came into effect. Not-safe-for-work applications such as FakeApp and DeepFaceLab give millions of users direct access to images and videos for face-swapping.
Clearly, the inventory of deepfake imagery is growing in size and ruthlessness, and this growth puts AI’s trust quotient under the scanner. Thankfully, the antidote lies within the industry. The same technology that is used to create deepfakes can also be used to detect them among millions of image and voice assets. Deepfake publishers are not known for their perfection or attention to detail, and AI analysts skim through these inconsistencies to separate truth from fakery.
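One such inconsistency is that pasted face regions are often blurred or blended, leaving them with less high-frequency detail than the surrounding frame. The sketch below turns that into a crude heuristic (the face box, the threshold, and the function names are hypothetical; real detectors are learned models, not hand-tuned rules):

```python
# Illustrative artifact hunt: compare high-frequency spectral energy
# inside a candidate face box against the whole frame. A region that is
# markedly smoother than its surroundings is a blending red flag.
import numpy as np

def high_freq_energy(patch):
    """Share of spectral magnitude outside the lowest-frequency bins."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(patch)))
    h, w = spectrum.shape
    ch, cw = h // 2, w // 2
    low = spectrum[ch - h // 8: ch + h // 8, cw - w // 8: cw + w // 8].sum()
    total = spectrum.sum()
    return (total - low) / (total + 1e-12)

def looks_blended(frame, box, ratio_threshold=0.5):
    """Hypothetical heuristic: flag a face box that is far smoother than the frame."""
    y0, y1, x0, x1 = box
    inside = high_freq_energy(frame[y0:y1, x0:x1])
    outside = high_freq_energy(frame)
    return inside < ratio_threshold * outside

rng = np.random.default_rng(1)
frame = rng.normal(size=(64, 64))                  # noisy stand-in for a real frame
frame[16:48, 16:48] = frame[16:48, 16:48].mean()   # heavily smoothed "pasted" face

print(looks_blended(frame, (16, 48, 16, 48)))
```

On the synthetic frame above the smoothed region carries almost no high-frequency energy, so the heuristic flags it; an untouched noisy frame is left alone.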
Leading AI analysts and data researchers from NVIDIA, Google, and Microsoft are constantly working on diagnostic tools to identify deepfake imagery. Neural networks, deep learning, and computer vision techniques may be used to create fake imagery, and, combined with AI, to scale fake-news consumption.
Researchers from the Technical University of Munich and others introduced “FaceForensics”, a large-scale video dataset for training media forensics and deepfake detection tools. In 2018, drastic measures were proposed to fight these events, but nothing concrete is visible on the ground yet. For example, the NGO AI Foundation raised $10 million of capital to develop a tool that combines human moderation and machine learning to identify malicious content meant to deceive people, such as deepfakes. However, its commercial applications are yet to be fully documented for business operations.
In 2020, beware of what you share online and how you consume digital assets. Deepfakes are a more real problem than they were only six months ago.