Deepfake Technology: How AI is transforming media manipulation, ET BrandEquity

Call it the age of artificial deception. The rise of deepfake technology, as demonstrated by fabricated videos involving figures like Justin Trudeau and others, has led to widespread misinformation and deception globally. Despite efforts for stricter regulations, these AI-generated videos continue to pose significant threats, emphasising the need for comprehensive legal frameworks and increased public awareness to combat their impact.

guest author

Representative AI Generated Image


A few weeks ago, a video appeared on Instagram showing Canadian Prime Minister Justin Trudeau announcing his resignation. In the video, he admits to the shortcomings of his policies, seeming to openly mock himself. He jokes about his own incompetence and seems to show callous disregard for the Canadian people. Finally, he hands off Canada to “whoever is brave enough to clean up this mess.” Of course, this never actually happened: the video is a deepfake, fabricated by AI.

Deepfakes are a textbook example of the misuse of new technology. Even the term betrays its origin in cutting-edge technology: the word ‘deepfake’ combines ‘deep learning’ with ‘fake’, deep learning being a new and powerful kind of machine learning. As legislation plays catch-up, deepfakes have become an epidemic in the world of cybercrime, their impact spreading alarmingly fast. Deepfakes first emerged in 2017, when an anonymous Reddit user posted a deepfake video featuring celebrity faces swapped into adult film scenes. The technology has since advanced at an astonishing pace, making these fabricated videos more convincing than ever.

In the month of December alone, there were countless reports of these fabricated videos and images from across the world. In addition to the Trudeau deepfake, a video featuring Sudha Murthy promoting a trading platform to young people was widely shared, though it was later revealed to have been made by manipulating footage from a 2022 Infosys event.

Deepfakes have a particularly extensive impact in countries like India, where lower literacy rates and limited awareness of what AI forgery is capable of allow misinformation to spread rapidly. AI-generated videos of Mamata Banerjee and Narendra Modi dancing in front of a huge crowd were circulated during the general elections to mock political rivals, despite no such events ever taking place. This year, celebrities like Rashmika Mandanna, Katrina Kaif, and Alia Bhatt were also targeted by fake, often obscene, videos, while a deepfake of Virat Kohli made him appear to make disturbing comments. Popular actor Anil Kapoor has even sought legal protection against the use of his name, voice, identity, image, or persona in public without his consent after demeaning deepfake videos of him emerged.

The rise of deepfakes has sparked calls for stronger regulations, especially as political figures and organizations become targets of this deceptive technology. In December alone, leaders of Taiwan's Democratic Progressive Party (DPP) demanded stricter content controls on platforms like TikTok after a deepfake video surfaced falsely portraying one of their own leaders criticizing the administration. Deepfakes are being used as a tool to create chaos and confusion, misleading internet users with fabricated visuals. In 2018, China became one of the first countries to issue guidelines aimed at regulating deepfakes, requiring AI-generated media to be clearly labeled. U.S. lawmakers have introduced bills like the “Malicious Deep Fake Prohibition Act,” which would make the creation and distribution of deepfake videos with the intent to harm, deceive, or defraud illegal, punishable by strict penalties. However, the global nature of the internet, and the possibility of bad actors operating from outside a country's borders, means that deepfakes remain a threat to national security.

In India, there is no specific law against deepfakes: sections 66D and 66E of the Information Technology Act, 2000 ("IT Act") impose penalties, including imprisonment and fines, on individuals who engage in cheating by impersonating someone else. In the aforementioned Anil Kapoor case, the Delhi High Court ruled in his favour, allowing him to more easily get fraudulent content of himself taken down. However, these provisions alone are insufficient to tackle the rapidly widening challenge of identifying and preventing the spread of abusive deepfake content.

As deepfake technology develops further, it becomes increasingly obvious that addressing the threats it presents will require a multifaceted approach. Governments must enact stricter laws, but malicious content can come from anywhere, so individual government policy can only be so effective: social media platforms need to be more accountable for identifying and eliminating dangerous content.

Finally, people must become better at identifying forged content: a deepfake is only dangerous if people believe it is real. These risks can be mitigated by understanding the signs of deepfaked material: irregularities in facial expression, inconsistent reflections, and audio that does not match up with video. Exercising extra caution when coming across any content that seems even slightly suspicious is essential to fighting deepfake fraud. We must act proactively to protect truth, privacy, and the integrity of our common digital environment. Otherwise, we will enter an age where either the media will be able to mislead the population, or the population will become so distrustful of the media that the truth will be difficult to discern.
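The cues above (inconsistent expressions, reflections, or audio) are what automated detectors look for as well, albeit with trained neural networks rather than simple rules. As a purely illustrative sketch, and assuming a hypothetical per-frame "consistency score" (for instance, a facial-landmark or lighting-consistency measure in the range 0 to 1) has already been computed, one could flag anomalous frames with a basic statistical outlier check:

```python
# Illustrative sketch only: real deepfake detectors use trained
# classifiers, not a one-line statistical rule. The per-frame
# "consistency score" input here is a hypothetical measure.
from statistics import mean, stdev

def flag_suspicious_frames(consistency_scores, z_threshold=2.0):
    """Return indices of frames whose consistency score deviates
    sharply (beyond z_threshold standard deviations) from the
    rest of the clip."""
    mu = mean(consistency_scores)
    sigma = stdev(consistency_scores)
    if sigma == 0:
        return []  # perfectly uniform clip: nothing stands out
    return [i for i, s in enumerate(consistency_scores)
            if abs(s - mu) / sigma > z_threshold]

# A clip of mostly stable scores with one anomalous frame:
scores = [0.91, 0.90, 0.92, 0.89, 0.35, 0.91, 0.90]
print(flag_suspicious_frames(scores))  # -> [4]
```

In practice such heuristics produce false positives on ordinary footage (fast motion, lighting changes), which is why production systems combine many signals and human review.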

(The author is an AI expert and Software Engineer at Observe Inc. Opinions are personal. The article is for general information purposes only.

ETBrandEquity.com makes no representations or warranties of any kind, express or implied, about the accuracy, adequacy, validity, reliability, availability, or completeness of any information. It does not assume any responsibility or liability for any errors, omissions, or damages arising from the use of this information.

We reserve the right to modify or remove any content without prior notice. The reproduction, distribution, or storage of any content without written permission is strictly prohibited.)

  • Published On Jan 17, 2025 at 09:11 AM IST
