
xAI Implements Stricter Grok Image Controls Amid Deepfake Concerns

By: Cynthia Okafor

January 15, 2026


Elon Musk’s xAI tightens Grok’s image editing tools to curb non-consensual deepfake content, introducing geoblocking, subscription limits, and enhanced safeguards globally.

Elon Musk’s AI firm, xAI, has rolled out new restrictions on Grok’s image generation and editing features in response to growing concern over misuse on the X platform.

The update follows backlash from regulators, human rights organisations, and media outlets in the U.S., Europe, Africa, and elsewhere over Grok being used to create non-consensual deepfake pornography of real people.

Authorities Highlight Abuse Risks

Investigations by journalists and authorities have documented users employing Grok to manipulate photos of real individuals, including minors, into sexually explicit or revealing imagery without consent.

U.S. authorities reported that more than half of the tens of thousands of images generated during a recent holiday period depicted people in minimal clothing, some of whom appeared underage, a finding that amplified pressure on xAI to act.

New Safeguards Enforced Globally

xAI’s safety team confirmed that all users are now blocked from editing images of real people into revealing attire, such as bikinis or underwear, in jurisdictions where such content is illegal.

These measures apply to all users, including premium subscribers, signalling a shift from Grok’s previous, less restrictive approach. The company says the changes aim to reduce misuse while complying with local laws.

Grok Features Still Available, but Restricted

According to X’s official safety update, Grok’s image creation and editing remain active, but with new limitations:

  • Restricted to paid X subscribers
  • Geoblocking in regions where such content is prohibited
  • Emphasis on accountability and traceability for any misuse

xAI says these controls will help identify bad actors and prevent large-scale abuse.

Critics Claim Measures Are Insufficient

Despite the updates, watchdogs and regulators argue that putting harmful features behind a paywall merely limits access to them rather than eliminating the risk.

Independent media tests suggest some safeguards can still be bypassed, and certain Grok platforms, including the standalone app and web portal, may not yet enforce all restrictions.

Rising Global Regulatory Scrutiny

The controversy has intensified international oversight of generative AI. Authorities in California, the European Union, and parts of Asia have flagged concerns over insufficient safeguards.

The European Commission will review xAI’s new measures to ensure they meet digital safety standards, while some regulators are considering legal action against xAI and X.

Responsible AI Remains a Global Challenge

The episode underscores the broader debate on how to balance AI innovation with protections for privacy, dignity, and human rights.

Experts have warned that Grok’s original “unfiltered” design made it particularly vulnerable to misuse, turning generative AI into a potential social and legal hazard if safeguards are inadequate.

What’s Next for xAI and Grok

xAI says it will continue refining Grok to address emerging risks, framing safety as a collaborative effort with regulators, users, and partners.

However, regulators and advocacy groups emphasise that this is just the beginning of a larger legal and ethical challenge surrounding generative AI. The future of Grok and similar tools will depend on whether companies can prioritise safety and accountability alongside innovation.
