
The Rise of Deepfake Misinformation
Advances in artificial intelligence have made it simpler than ever to create hyper-realistic images and videos that distort reality. Deepfake technology uses machine learning to swap faces or generate wholly synthetic scenes, often without the knowledge or consent of the people depicted.
In recent months, a fabricated image placing Rahul Gandhi in a manufactured setting with columnist Jyoti Malhotra spread widely on social media, and many viewers assumed it was genuine. Such incidents show how quickly deepfakes can proliferate, sow confusion, and tarnish personal and public reputations before fact-checkers or law enforcement can respond.
Framing the Offence under the Bharatiya Nyaya Sanhita
Enacted in 2023, the Bharatiya Nyaya Sanhita (BNS) modernizes India’s criminal law to address contemporary challenges, including the misuse of digital technologies. Several provisions of Chapter XIV, which deals with false evidence and offences against public justice, bear directly on deepfake fabrications:
• Section 228 (Fabricating False Evidence) prohibits creating false evidence intended to mislead judicial or administrative proceedings. A deepfake manufactured to suggest misconduct by a public figure can amount to such fabricated evidence.
• Section 229 (Punishment for False Evidence) prescribes fines and imprisonment for those who give or fabricate false evidence, with the severity of punishment increasing where the fabricated material is used to influence legal proceedings or public opinion.
• Section 233 (Using Evidence Known to Be False) penalizes anyone who knowingly relies on or circulates false evidence. Sharing a deepfake image after learning it is fabricated exposes the disseminator to criminal liability.
• Section 236 (False Statement in Declaration Receivable as Evidence) covers false statements made in declarations or affidavits. Although it is primarily concerned with sworn declarations, courts could treat deepfake material submitted in digital affidavits as falling within this provision.
• Section 336 (Forgery, including Online Forgery and Email Spoofing) addresses forgery involving electronic records. Creating or altering digital images to misrepresent reality falls within this offence.
• Section 356 (Defamation by Electronic Means) makes spreading defamatory content through digital media unlawful. A deepfake that damages a person’s reputation can attract this provision.
Key Protections under the Information Technology Act
Complementing the BNS, the Information Technology Act, 2000 provides targeted measures against cyber-enabled offences:
• Section 66C (Identity Theft) penalizes the fraudulent use of another person’s unique identification feature, such as a digital likeness, to deceive or cause harm. Deepfake creators who use a public figure’s face without authorization could be liable under this provision.
• Section 66D (Cheating by Personation) addresses impersonation carried out through computer resources. A deepfake video or image that falsely depicts a public figure in compromising conduct may amount to cheating by personation.
• Section 69A (Blocking Powers) empowers the government to direct intermediaries to block access to digital content deemed defamatory, a threat to public order, or injurious to sovereignty. Platforms can be required to remove a flagged deepfake within thirty-six hours of notification.
• Section 79 (Safe-Harbour Exception) requires intermediaries to exercise due diligence, including operating grievance redressal mechanisms, to qualify for protection. An intermediary that fails to act promptly on a reported deepfake forfeits that safe-harbour protection and exposes itself to liability.
Enforcement Mechanisms and Accountability
When a deepfake surfaces, affected individuals can approach the police under the relevant BNS and IT Act provisions. The BNS sections allow the creators and distributors of false evidence and defamatory material to be prosecuted.
Under the IT Act, intermediaries such as social media platforms must comply with takedown demands or risk losing their protection. Government blocking orders under Section 69A enable the swift removal of material judged harmful to public order or personal rights.
Digital forensics is increasingly important in helping investigating authorities trace the source of a deepfake, identify the tools used, and hold offenders accountable. Courts have shown a readiness to grant interim relief, such as blocking orders, and to impose fines or imprisonment on conviction.
Civil remedies, including defamation suits, can run alongside criminal investigations and provide strong tools for tracing and curbing harmful AI-generated material.
Challenges in Controlling Deepfake Misinformation
Despite strong legislation, enforcement remains challenging. Deepfakes are often generated and disseminated anonymously and across borders, which complicates jurisdiction and evidence collection.
The rapid pace of artificial intelligence development means that altered versions may already have multiplied by the time a deepfake is detected. Intermediaries vary in their capacity to identify synthetic media and respond to it. There is also a risk of over-blocking, where legitimate speech is inadvertently removed under broad blocking powers.
Addressing these issues calls for a multi-pronged strategy: strengthening public digital literacy, fostering cooperation among law enforcement, technology platforms, and legal experts, and investing in detection technology. Clear rules for rapid takedown, standardized reporting formats, and international cooperation are vital to deter the creators of harmful deepfakes.
The false AI-generated image of Rahul Gandhi and Jyoti Malhotra captures the harm deepfakes can inflict on individuals and on the democratic process. Anchored in the Bharatiya Nyaya Sanhita’s provisions on false evidence, forgery, and defamation, and in the Information Technology Act’s safeguards against identity theft, personation, and harmful online content, India’s legal framework offers comprehensive means to combat such disinformation.
Effective enforcement, supported by technical innovation and public awareness, will be essential to ensuring that artificial intelligence serves society without compromising truth and trust.