Meta’s Lawsuit Against ‘Nudify’ Apps Is a Wake-Up Call for India

On June 12, Meta announced that it had filed a lawsuit against the Hong Kong-based entity behind CrushAI, a set of apps that use artificial intelligence to generate fake, non-consensual nude images. These so-called “nudify” tools are part of a disturbing new frontier in technology-facilitated gender-based violence.

As the Alliance for Cyber Trust and Safety (ACTS), we strongly support this legal action. Not only does it reaffirm Meta’s commitment to fighting non-consensual intimate imagery, both real and AI-generated, but it also sends an essential message to developers, advertisers, and platforms: abuse dressed up as innovation will not go unchecked.

But in India, this must serve as more than just a headline. It is a wake-up call.


A Growing, Unseen Crisis

“Nudify” apps are not new. What’s new is the speed, sophistication, and scale at which they now operate. These tools, often framed as entertainment or prank apps, let users “undress” any image of a person, usually a woman, using AI. The resulting fakes are disturbingly realistic and deeply violating.

In India, the normalisation of such digital abuse is already rampant. We’ve seen deepfake content shared in WhatsApp groups, AI-edited images of schoolgirls circulating on Telegram, and women reporting cases of image-based blackmail with nowhere to turn.

A May 2024 McAfee survey found that over 75% of Indians had encountered deepfake content in the past year, and more than 1 in 5 (22%) said they had recently come across political content they later discovered to be a deepfake. India now also ranks as the world’s second-largest consumer of AI-generated deepfake pornography, with 24.6 million site visits recorded in the 12 months from December 2023 to November 2024.

And yet, India has no specific legal framework to address AI-enabled intimate image abuse. Survivors are often forced to rely on loosely fitting provisions of the IT Act or the Indian Penal Code, none of which were designed for this new kind of violence.


Shame, Silence, and the Gendered Internet

The gendered nature of this abuse cannot be overstated. Women and girls, already navigating a hostile digital space, are being pushed further to the margins. And because this violence is both invisible and tech-heavy, it often goes unreported. Victims don’t know what the app is called, where the image is hosted, or even how to prove it’s fake.

In many cases, the burden falls on the survivor to report, explain, and prove harm, while the perpetrators remain anonymous, the platforms unaccountable, and the content nearly impossible to remove.


What Needs to Change

Meta’s decision to not only remove the ads and apps but also share signals with other tech platforms via the Tech Coalition is a critical step in cross-industry cooperation. But it cannot stop there.

In India, we need urgent, multi-stakeholder action:

  • Remove ‘nudify’ apps from app stores: Platforms must urgently delist these AI-enabled abuse tools from app marketplaces, just as Meta has acted against their ads.
  • Address discoverability: Search engines should de-index violating apps and sites so perpetrators can’t easily find or share them.
  • Accelerate takedowns and enforcement: Legal and platform action must be faster, as delays only allow more harm. Proactive detection and stronger AI tools are essential.
  • Widen information sharing across the industry: Signals about violating URLs, networks, and advertising behaviour should be systematically shared across platforms via trusted mechanisms like the Tech Coalition’s Lantern.
  • Invest in public education: Users must be empowered to recognise abuse tools and know how to report violations. Awareness campaigns and accessible reporting mechanisms are crucial.
  • Strengthen government and platform coordination: Governments should move beyond reactive measures and build capacity to pre-empt such harms.
  • Create an independent, multi-stakeholder oversight body: A neutral global body, comprising civil society, survivors, tech companies, and regulators, should oversee harms such as AI-generated intimate imagery and ensure timely, transparent redressal.

At ACTS, we are working with government, industry, and civil society to build safe digital ecosystems grounded in trust and accountability. But we need platforms to step up, lawmakers to pay attention, and society to treat these harms as real, because they are.

If left unchecked, AI-powered tools like these will continue to strip away the dignity, autonomy, and safety of millions. Meta’s action is a start. Let’s make sure it’s not the end.

If you see these apps, report them. If you see a fake, don’t share it. Flag it. If you know a survivor, stand with them. Silence feeds shame; solidarity breaks it.

Because no machine should ever undress us of our right to exist with dignity.

(Note: The cover image featured in this publication is an AI-generated representation and does not depict any real individual. It is included solely for illustrative purposes, to underscore the growing sophistication of AI and deepfake technologies. No actual person or identity is associated with, or represented by, this image. Readers are advised to remain mindful of the broader implications and potential uses of AI-generated content.)

Jyoti Vadehra

Head, Digital Safety & Online Wellbeing, Centre for Social Research, India
