Advances in technology have brought about countless changes in the way we interact with the world. In the communications industry, social media has become a ubiquitous platform for sharing everything from dissent to the coolest gigs in town. Automation and artificial intelligence (AI) are transforming industry and manufacturing like never before. On the flip side, however, advanced technologies such as AI are being exploited by perpetrators to create hyper-realistic images and voices of real people, convincing enough to provoke unaware audiences into reacting as intended; spotting such videos and audio clips is becoming extremely hard as the world of deep fakes grows murkier.
Where It All Began
The coining of the term “deep fake” is attributed to a Reddit user of the same name, who in 2017 created a space on the site for sharing x-rated videos made with open-source face-swapping technology. Since then, the term has grown to encompass “synthetic media applications” that generate realistic images of non-existent individuals. Applications like FakeApp, which appeared soon after, only made the creation process simpler. “Deep fake” combines the terms “deep learning” and “fake,” since making deep fakes relies on deep learning, a subset of AI technology. According to experts, a deep-learning system studies images and videos of a target person from various angles and ultimately picks up their exact patterns of speech and behavior. The finishing touch is provided by GANs, or generative adversarial networks, which make the output more lifelike and seemingly undetectable.
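To make the adversarial idea concrete, the sketch below shows a minimal GAN training loop in PyTorch: a generator learns to produce samples that a discriminator cannot distinguish from real ones, and each network’s progress forces the other to improve. This is an illustrative toy, not any actual deep-fake pipeline; the model sizes, the random stand-in data and the hyperparameters are placeholder assumptions.

    # Minimal, illustrative GAN training loop (PyTorch). Real face-swap
    # systems use far larger models and curated face datasets; the data
    # here is random noise standing in for real images.
    import torch
    import torch.nn as nn

    LATENT_DIM, IMG_DIM = 64, 28 * 28  # assumed toy dimensions

    # Generator: maps random noise to a fake "image" vector.
    G = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(),
                      nn.Linear(256, IMG_DIM), nn.Tanh())
    # Discriminator: scores how "real" an image vector looks.
    D = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
                      nn.Linear(256, 1))

    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    loss_fn = nn.BCEWithLogitsLoss()

    for step in range(200):
        real = torch.rand(32, IMG_DIM) * 2 - 1   # stand-in for real images
        fake = G(torch.randn(32, LATENT_DIM))

        # Discriminator learns to separate real from generated samples.
        d_loss = (loss_fn(D(real), torch.ones(32, 1)) +
                  loss_fn(D(fake.detach()), torch.zeros(32, 1)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator learns to fool the just-updated discriminator.
        g_loss = loss_fn(D(fake), torch.ones(32, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

The same tug-of-war, scaled up to faces and paired with face-swapping models, is what pushes deep-fake imagery toward photorealism.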
Obscuring the Real
In recent times, deep-fake tech has been used for marketing, political satire and entertainment. The technology has been experimented with by big names such as tech entrepreneur Elon Musk and Hollywood actors Bruce Willis and Tom Cruise, among others, to create attractive endorsements, albeit with bold disclaimer lines. When asked about using deep fakes for endorsements, marketers have cited cost-efficiency, compared with hiring a real celebrity, as their biggest motivation.
Though such instances can be seen in a lighter vein, “the technology can be used to make people believe something is real when it is not,” argues Peter Singer, a cybersecurity and defense-focused strategist and senior fellow at the New America think tank.
Deep fakes have become perfect tools for spreading misinformation that, if it passes undetected, has the potential to instigate violence or damage reputations. Countries like India are suddenly facing new terrorism challenges from radicalized individuals, also termed “lone wolves,” “DIY” or “freelancer” terrorists, who have no real connection to known terrorist groups and who exploit the internet and social media to spread propaganda and radical ideas. Furthermore, deep fakes and other technologies, such as autonomous systems and 3D printing, have become tools of weaponization for extremist groups.
How Can Telcos Fight Deep Fakes?
Telecom infrastructure sits at the center of protecting business processes from the menace of deep fakes. Network operators must help companies identify vulnerable points, from connectivity to software use. Frequent, comprehensive education of customers on technological solutions for securing their infrastructure should be a constant feature of the customer-relations strategy.
Social media and tech companies such as Facebook, WhatsApp, Google, YouTube, Twitter and others have become platforms for malicious deep-fake activity. These companies are constantly calling on cybersecurity experts to help root out deep-fake activity on their platforms. One real-time deep-fake detector, Intel’s FakeCatcher, claims a 96% accuracy rate in determining whether a video is genuine or fake. It is “the first real-time, deep-fake detector in the world that provides results in milliseconds,” according to Intel’s Responsible AI research. As Intel describes it, FakeCatcher analyzes authentic clues in real videos, such as the subtle blood-flow signals visible in a video’s pixels, that signify a human subject. Signals gathered from across the face are translated into spatiotemporal maps, and deep learning then determines, on a web-based platform, whether a video is authentic or fake. Given the potential power of misinformation and disinformation spread through deep fakes, such promising solutions from tech companies are refreshingly welcome; however, their efficacy will only be determined with time.
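Intel has not published FakeCatcher’s internals, but the general approach it describes, remote photoplethysmography (rPPG), can be sketched loosely: average the color of small facial regions over time, keep the faint fluctuations that blood flow produces in real skin, arrange them into a spatiotemporal map, and hand the map to a classifier. The code below is a hypothetical illustration of that idea only; the region grid, the toy classifier and all parameters are assumptions, not Intel’s implementation.

    # Hypothetical sketch of blood-flow-based (rPPG-style) deep-fake
    # detection. NOT Intel's FakeCatcher; it only illustrates the idea.
    import numpy as np
    import torch
    import torch.nn as nn

    def region_signals(frames: np.ndarray, grid: int = 8) -> np.ndarray:
        """Average the green channel over a grid of face regions per frame.

        frames: (T, H, W, 3) uint8 video of a cropped, aligned face
        (assumed to come from an upstream face detector, omitted here).
        Returns a (grid*grid, T) spatiotemporal map of intensity traces;
        genuine skin shows faint periodic variation from blood flow.
        """
        t, h, w, _ = frames.shape
        rh, rw = h // grid, w // grid
        traces = np.empty((grid * grid, t), dtype=np.float32)
        for i in range(grid):
            for j in range(grid):
                patch = frames[:, i*rh:(i+1)*rh, j*rw:(j+1)*rw, 1]
                traces[i * grid + j] = patch.mean(axis=(1, 2))
        # Subtract each region's mean so only the fluctuation remains.
        return traces - traces.mean(axis=1, keepdims=True)

    # Toy classifier over the spatiotemporal map (architecture is an
    # arbitrary assumption). In practice it would be trained on labeled
    # real/fake clips; here it is untrained, for shape illustration only.
    classifier = nn.Sequential(
        nn.Conv1d(64, 32, kernel_size=5), nn.ReLU(),
        nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        nn.Linear(32, 2),  # logits: [real, fake]
    )

    clip = np.random.randint(0, 256, (90, 128, 128, 3), dtype=np.uint8)
    st_map = torch.from_numpy(region_signals(clip)).unsqueeze(0)  # (1, 64, 90)
    p = classifier(st_map).softmax(dim=1)
    print("p(real), p(fake):", p.detach().numpy().round(3))

Fake videos tend to disturb these faint, spatially coherent pulse patterns, which is what gives a trained classifier of this kind its signal.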
From a regulatory perspective, the updated European Union Code of Practice on Disinformation sets ambitious commitments and measures aimed at countering online disinformation. With a more diverse range of stakeholders, the code secures commitments to demonetize the dissemination of disinformation, guarantee transparency in political advertising, enhance cooperation with fact-checkers and facilitate researchers’ access to data, all for a more transparent, safe and trustworthy online environment. Regional regulatory bodies have also developed their own agendas to fight the challenges of unlawful online activities in some form or another.
Despite these evolving regulatory frameworks, experts warn that deep-fake technology is growing ever more advanced and sophisticated, and that the day such videos become all but indistinguishable from the real thing is not far away.
As a defense, experts point towards building a social immunity system whereby each of us must be willing to ask the basic questions of who, what and why, and to verify the authenticity of such a video when we encounter one. Putting on our thinking caps and understanding the nuances of modern technology seems a reasonable way to insulate ourselves from this modern menace until a permanent solution is found and established.