When seeing is no longer believing: Deepfakes and laws

For decades, photographs and videos were treated as unquestionable proof. A picture or a video was often enough to establish truth. But lately, scrolling through social media, we have begun to question that belief. AI-generated images and videos, popularly known as deepfakes, are everywhere. Today, anyone with a smartphone can generate convincing fake visuals within minutes. While some may be harmless or creative, many are deeply harmful. This raises a simple but troubling question: if we can no longer trust what we see, how does the law respond?
Deepfakes are no longer limited to celebrities; school and college students are increasingly becoming victims of AI-generated images created without their consent. What is frightening is how easy it has become. Deepfakes use AI to manipulate faces, voices or bodies in images and videos, and the technology is especially risky because it is available to anyone with a smartphone and freely available applications. These images are often used to harass, shame or blackmail individuals by misusing their identity. From a legal perspective, this is not always easy to categorise. A deepfake image may not always be obscene, yet it can still violate a person's dignity and reputation. The emotional impact on victims, especially students, can be severe, leading to anxiety, withdrawal from social spaces and fear of further digital abuse.
India does not have a specific law dealing exclusively with deepfakes. Instead, victims must rely on existing legal provisions that indirectly address such conduct. The Information Technology Act, 2000 is often the first point of reference: provisions relating to violation of privacy, identity misuse and publication of obscene or sexually explicit content are used to address deepfake-related offences. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 also place an obligation on social media platforms to remove unlawful content once it is reported. However, these rules operate reactively and depend heavily on timely reporting by victims. The recently introduced Bharatiya Nyaya Sanhita, 2023, which replaces the Indian Penal Code, contains provisions dealing with impersonation, sexual harassment and acts causing harm to reputation. While these provisions can be applied to deepfake abuse, they were not drafted with AI-generated content in mind, leading to interpretational gaps.
Another relevant legislation is the Digital Personal Data Protection Act, 2023. Since facial images constitute personal data, using a person's photograph to create AI-generated content without consent may amount to a data protection violation. However, the Act primarily regulates data fiduciaries and platforms, and offers limited direct remedies against anonymous individuals creating deepfakes. In cases involving minors, the POCSO Act, 2012 plays a crucial role: any sexually explicit AI-generated image involving a child is punishable regardless of whether the image is real or fabricated. This provides strong protection, but only in limited circumstances.
In an age where images can be created without truth or consent, the law must adapt quickly and decisively. Stronger safeguards for identity, privacy and dignity are essential, along with greater judicial caution in evaluating digital evidence. Social media platforms must also be held to higher standards of accountability. A practical and necessary step forward is to mandate visible watermarks or disclosures on AI-generated content, ensuring transparency and preventing public deception. Ultimately, the law must step in to protect truth, fairness and individual rights.
