The National Information Technology Development Agency (NITDA) has issued a significant cybersecurity alert cautioning Nigerians about fresh security weaknesses in ChatGPT that may lead to data leaks, unauthorized actions, and manipulated outputs.
The advisory, released through CERRT.NG, comes amid increasing reliance on AI tools for research, business operations, and government work, and rising concerns about the risks of AI interacting with unsafe online content.
Researchers Uncover Seven High-Risk Flaws in GPT-4o and GPT-5
NITDA confirmed that cybersecurity researchers discovered seven critical vulnerabilities affecting OpenAI’s GPT-4o and GPT-5 models. These flaws make the systems susceptible to indirect prompt injection, a method that allows attackers to secretly embed malicious instructions in digital content.
The agency explained that harmful prompts can be inserted into webpages, comments, images, or specially crafted URLs. When ChatGPT processes such content, whether through browsing, summarizing, or searching, it may unknowingly follow hidden instructions.
“Hidden instructions in webpages, comments, or crafted URLs can cause ChatGPT to execute unintended commands during routine browsing or summarization,” the advisory warned.
Some vulnerabilities enable attackers to bypass safety controls by disguising malicious code under legitimate-looking domains, while others exploit markdown rendering weaknesses, making harmful prompts invisible to the user.
A more severe issue involves LLM memory poisoning, which can cause ChatGPT to store malicious instructions that influence future responses.
Although OpenAI has resolved some issues, NITDA stressed that current AI models still find it difficult to reliably tell the difference between real user intent and concealed malicious instructions.
How These Vulnerabilities Could Affect Nigerians and Organisations
NITDA highlighted several potential consequences of these flaws, including:
- Unauthorized actions performed by ChatGPT
- Exposure of sensitive user data
- Manipulated or inaccurate outputs
- Persistent behavioural changes caused by memory poisoning
CERRT.NG emphasized that users may trigger such attacks without clicking anything, especially when AI automatically processes content containing hidden commands.
Recommended Safety Measures for Users and Institutions
To minimize risk, NITDA urged individuals, businesses, and government organizations to adopt the following precautions:
- Disable or limit browsing and summarisation features when dealing with untrusted websites
- Turn on browsing or memory only when absolutely necessary
- Ensure all GPT-4o and GPT-5 deployments are up to date with the latest security patches
- Be extra cautious when using ChatGPT for sensitive or confidential tasks
Context: NITDA’s Earlier Warning on eSIM Vulnerabilities
This update follows a previous national alert on a major eSIM security flaw impacting billions of smartphones, wearables, IoT devices, and tablets.
The vulnerability, linked to the GSMA TS 48 Generic Test Profile (v6.0 and below), could allow attackers to:
- Install rogue applets
- Extract or steal cryptographic keys
- Clone eSIM profiles
- Intercept communications
- Maintain long-term control of affected devices
NITDA warned that such exploitation could create hidden backdoors and widespread privacy risks.
As Nigeria’s public and private sectors increasingly adopt AI tools, NITDA’s latest advisory underscores the importance of strengthening cybersecurity awareness and adopting safe AI usage practices, especially when interacting with online content of uncertain origin.




