
Unfiltered AI or Digital Harm? How Grok Is Fueling a Rise in Non-Consensual Deepfake Pornography

By: Adamu Garba

January 9, 2026

4 minute read

Elon Musk, the CEO of Tesla and owner of X, unveiled Grok, an artificial intelligence chatbot with a penchant for sarcasm and a playful sense of humour.

A quiet but dangerous crisis is spreading across social media, driven by generative artificial intelligence and exploited by users who understand how to bypass ethical safeguards.

At the centre of the controversy is Grok, the chatbot developed by xAI, founded by Elon Musk. Marketed as an “unfiltered” and more permissive alternative to rival AI tools, Grok has increasingly been linked to the creation of non-consensual deepfake pornography (NCDP).

The process is disturbingly simple. A user uploads an ordinary image and prompts the AI to “undress” the subject. The output is a sexualised image generated without consent. Victims range from celebrities and influencers to private individuals and, in some reported cases, minors.

This is no fringe behaviour. It is happening at scale.

The issue gained renewed attention after Nigerian influencer and reality TV star Anita Natacha Akide, popularly known as Tacha, publicly addressed Grok on X. She clearly stated that she did not consent to any alteration, remixing, or manipulation of her photos or videos.

Despite the explicit notice, other users quickly demonstrated that Grok could still be prompted to manipulate her images. The episode exposed a fundamental weakness in AI governance: consent statements are ineffective when platforms lack enforceable technical safeguards.

Beyond the individual case, it raised deeper legal and ethical questions about responsibility, platform liability, and user abuse of generative AI systems.

Legal Perspective: “A Digital Epidemic”

To understand the legal implications, we spoke with Senator Ihenyen, a technology lawyer, AI enthusiast, and Lead Partner at Infusion Lawyers.

He described the situation as “a digital epidemic,” warning that generative AI is being weaponised by bad actors who know how to exploit loosely restricted systems. According to him, the harm caused by non-consensual deepfakes is invasive, predatory, and psychologically damaging.

Crucially, he rejects the idea that AI exists in a legal vacuum.

Nigeria’s Legal Shield Against AI Abuse

Although Nigeria does not yet have a standalone AI Act, victims are not without protection. Ihenyen points to what he describes as a multi-layered legal shield.

At the centre is the Nigeria Data Protection Act 2023. Under the Act, a person’s face, voice, and likeness are classified as personal data. When AI systems process this information, they are subject to strict compliance rules.

Creating sexualised deepfakes involves the processing of sensitive personal data, which requires explicit consent. Without consent, both platform operators and facilitators may be exposed to legal liability.

Victims can file complaints with the Nigeria Data Protection Commission, where sanctions may include remedial fees of up to ₦10 million or two per cent of a company’s annual gross revenue, an amount significant enough to attract the attention of global platforms.

Criminal Liability and Child Protection

The users creating these images are not shielded either. Under Nigeria’s Cybercrimes Act (as amended in 2024), using AI to harass, humiliate, or sexually exploit someone may constitute cyberstalking or identity theft.

When minors are involved, the law is uncompromising. AI-generated child sexual abuse material is treated the same as imagery of real abuse. Novelty, experimentation, or humour offer no defence. It is a serious criminal offence.

Practical Steps for Victims

For victims, navigating the legal and technical response can be overwhelming. Ihenyen recommends a structured approach:

  1. Formal takedown notices
    Under Nigeria’s NITDA Code of Practice, platforms with local representation must act promptly once notified. Failure to do so may remove safe-harbour protections.
  2. Technology-based protection
    Tools such as StopNCII allow victims to create a digital fingerprint (a hash) of abusive content, helping platforms block redistribution without repeated uploads. A conceptual sketch of how such fingerprinting works follows this list.
  3. Regulatory escalation
    Reporting abuse to regulators, not just platforms, can trigger enforcement actions, including orders to disable consistently abused AI features.
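For readers curious how this kind of fingerprinting works in principle, here is a minimal sketch in Python using the open-source Pillow and imagehash libraries. It is illustrative only: StopNCII relies on its own on-device hashing technology, and nothing below reflects its actual implementation. The function names, the blocklist structure, and the distance threshold are assumptions made for the example.

```python
# Conceptual sketch of hash-based image matching, the general technique
# behind fingerprinting tools like StopNCII. Illustrative only; this is
# not StopNCII's actual implementation.
from PIL import Image
import imagehash

def fingerprint(path: str) -> imagehash.ImageHash:
    # A perceptual hash summarises an image's visual content in a few
    # bytes; the image itself never has to leave the victim's device.
    return imagehash.phash(Image.open(path))

def matches(candidate: imagehash.ImageHash,
            blocklist: list[imagehash.ImageHash],
            max_distance: int = 8) -> bool:
    # Perceptual hashes of near-duplicate images differ in only a few
    # bits, so a small Hamming distance signals a likely match. The
    # threshold of 8 is an assumed value for illustration.
    return any(candidate - known <= max_distance for known in blocklist)
```

The important property is that only the hash is shared, never the image: a victim does not have to re-upload the abusive content, and a platform can compare the hash of each new upload against the submitted fingerprints before the material spreads.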

A Cross-Border Problem With Regional Solutions

Many perpetrators operate outside Nigeria, but this is no longer an insurmountable barrier. The African Union Convention on Cyber Security and Personal Data Protection (the Malabo Convention), which entered into force in 2023, enables cross-border cooperation among African states, allowing law enforcement agencies to trace and prosecute offenders regardless of location.

“Unfiltered” Is Not a Legal Defence

xAI has framed Grok’s permissive design as a commitment to openness. From a legal standpoint, however, Ihenyen argues that “unfiltered” is not a defence; it is a risk.

Openness cannot excuse harm, illegality, or the failure to protect users from abuse. As regulators, courts, and lawmakers respond, the Grok controversy may become a defining test case for how far AI companies can go before accountability catches up.
