
How to tackle the menace of desi DeepFakes!

[Image: Rashmika deepfake]

A deepfake video of actress Rashmika Mandanna has recently taken the internet by storm. Rashmika's face was convincingly morphed onto another woman's body to generate fake video content, spreading misinformation and eroding trust in digital media. Many users shared the video thinking it was a personal video of the actress.

Users later found that it was a deepfake. X user Abhishek (@AbhishekSay) posted the original video and asked users to stay away from this kind of content, and Bollywood superstar Amitabh Bachchan shared the post to give it wider reach. Abhishek has called for a legal and regulatory framework to deal with deepfakes in India, describing it as an urgent need.

The original video is of Zara Patel, a British-Indian woman with 415K followers on Instagram, who uploaded it on 9 October. The viral clip is convincing enough for ordinary social media users to fall for it, but if you watch carefully, you can see at (0:01), as the woman enters the lift, that her face suddenly changes from Zara's to Rashmika's.

The perfection with which this technology has been used has left people bewildered and scared, and has raised the question of how to control it through a legal and regulatory framework.

What is a DeepFake?

Deepfake technology involves leveraging advanced computer capabilities and deep learning techniques to manipulate videos, images, and audio content. This technology is often exploited to create false news stories and engage in financial fraud, among other illicit activities.

Using Artificial Intelligence, it overlays a digital composite on an existing video, picture, or audio clip. Cybercriminals now use deepfake technology for nefarious purposes such as scams and hoaxes, celebrity pornography, election manipulation, social engineering, automated disinformation attacks, identity theft, and financial fraud.

The term “deep fake” was coined by a Reddit user who went by the same username. In 2017, they established a space on the site for sharing adult content that utilized open-source face-swapping technology. Since then, the term has evolved to encompass “synthetic media applications” that generate convincing images of fictitious individuals. Subsequent applications like FakeApp further streamlined the creation process, making it more accessible and straightforward. “Deep fake” is a blend of “deep learning” and “fake,” reflecting the fact that deep learning, a subset of AI technology, is integral to the production of deep fakes.

The Loan App Blackmailing!

In recent times, many cases have come up where young people died by suicide after being blackmailed by loan apps. The technology at the root of the blackmail was deepfakes: victims' photos were morphed into compromising images and used to extort them. These loan apps operate not just in India but in Pakistan and China as well, and they have destroyed many lives using this technology.

How to check if the video is a deepfake

Be skeptical of content that looks suspicious. If the video has any glitches, look closer. Here are a few ways to check whether a video is authentic:

  • Analyze facial inconsistencies: The video may look wrong in certain frames, usually around the eyes and lips.
  • Check for visual glitches: Glitches often become visible when the face moves a lot, for example while smiling or crying (a face-tracking sketch follows this list).
  • Examine the audio: The audio in these videos often sounds slightly out of place or out of sync.
  • Reverse image search: Take a screenshot of the video and run a reverse image search with Google Images; it will surface reference videos if the footage has appeared elsewhere (a frame-grab sketch follows this list).
  • Metadata analysis: Checking the file's metadata can reveal information about the video and its source (an ffprobe sketch follows this list).
  • Cross-reference with other sources: Verify the video against the official social media accounts of the people involved, or against other trustworthy sources, before forwarding it.
  • Contextual analysis: Try to understand the context of the video. Would the person in it ever allow such a video to be taken? Think about that.
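
As a rough illustration of the facial-inconsistency and glitch checks above, here is a minimal sketch using OpenCV (a third-party library, not something the article prescribes). It tracks the detected face from frame to frame and flags frames where the face disappears or jumps; the filename and the 40-pixel threshold are assumptions for illustration, not a tested detector.

```python
import cv2

VIDEO_PATH = "suspect_clip.mp4"  # hypothetical filename; point at the clip you want to inspect

# Haar cascade face detector that ships with OpenCV
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(VIDEO_PATH)
frame_idx = 0
prev_center = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        # A face that briefly vanishes can indicate a failed swap on those frames
        print(f"frame {frame_idx}: no face detected")
    else:
        x, y, w, h = faces[0]
        center = (x + w / 2, y + h / 2)
        if prev_center is not None:
            jump = abs(center[0] - prev_center[0]) + abs(center[1] - prev_center[1])
            if jump > 40:  # crude threshold; tune for your footage
                print(f"frame {frame_idx}: face position jumped by {jump:.0f}px")
        prev_center = center
    frame_idx += 1

cap.release()
```

Frames that this sketch flags are exactly the ones worth scrubbing through slowly, the way the (0:01) lift moment in the Rashmika clip gives the swap away.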
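
For the reverse image search step, you need stills to upload. A minimal sketch, again assuming OpenCV is installed and the clip is saved locally under a hypothetical filename, that grabs a frame every couple of seconds:

```python
import cv2

VIDEO_PATH = "suspect_clip.mp4"   # hypothetical filename
SECONDS_BETWEEN_GRABS = 2

cap = cv2.VideoCapture(VIDEO_PATH)
fps = cap.get(cv2.CAP_PROP_FPS) or 25  # fall back if the container lacks FPS info
step = int(fps * SECONDS_BETWEEN_GRABS)

frame_idx = 0
saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % step == 0:
        cv2.imwrite(f"grab_{saved:03d}.jpg", frame)  # still image to upload
        saved += 1
    frame_idx += 1
cap.release()

print(f"Saved {saved} stills; upload them to Google Images or Google Lens for a reverse search.")
```

If the original footage exists elsewhere (as Zara Patel's Instagram post did in this case), the reverse search will usually surface it.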
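
For the metadata step, one quick option is ffprobe, which ships with FFmpeg, called here from Python; the filename is hypothetical. Genuine phone footage usually carries a creation_time and device tags, while re-encoded or AI-edited clips often lose them, so an unusually bare or odd set of tags is a reason to dig further.

```python
import json
import subprocess

VIDEO_PATH = "suspect_clip.mp4"  # hypothetical filename

# ffprobe must be installed and on PATH for this to work
result = subprocess.run(
    [
        "ffprobe",
        "-v", "quiet",
        "-print_format", "json",
        "-show_format",
        "-show_streams",
        VIDEO_PATH,
    ],
    capture_output=True,
    text=True,
    check=True,
)

info = json.loads(result.stdout)

# Container-level tags: creation time, encoder, device info (if any survived)
print("Container tags:", info.get("format", {}).get("tags", {}))

# Per-stream basics: codec and resolution for video/audio streams
for stream in info.get("streams", []):
    print(stream.get("codec_type"), stream.get("codec_name"),
          stream.get("width"), stream.get("height"))
```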

The solution lies in enhancing media literacy, raising awareness about the technology, and gradually bringing about a behavioural change to pause before sharing anything on social media, along with technological interventions to prevent the misuse of AI. In this day and age, almost every Indian has a mobile phone, but media literacy cannot be guaranteed, and such users are the first to fall prey to deepfakes.

Apart from detecting deepfakes, we also have to ensure action against the wrongdoers. Safeguarding truth while upholding freedom of expression requires a comprehensive strategy involving multiple stakeholders and methods, including legislative measures and platform policies, to implement effective and ethical countermeasures against the harm posed by malicious deepfakes.
