Post-Ghibli trend: OpenAI's ChatGPT now creates fake IDs, sparking security concerns
This development has alarmed many, especially after the "Ghibli" trend of AI image editing

OpenAI’s ChatGPT has increasingly become a topic of concern due to its expanding capabilities to generate realistic images and documents. The latest issue to surface involves the chatbot’s ability to create fake identification cards, such as Aadhaar and PAN (Permanent Account Number) cards, when given specific prompts.
This new development has triggered alarm, especially following the "Ghibli" trend, which saw users uploading personal images for AI editing. The issue has escalated as AI-generated mock-ups of India’s Aadhaar cards began appearing on social media platforms.
A disturbing trend has emerged, with users creating convincing replicas of Aadhaar cards that include fabricated names, faces, and even QR codes. Some of these images have included public figures like OpenAI CEO Sam Altman and Tesla CEO Elon Musk, further blurring the lines between real and fake. The striking resemblance of these fake cards to the authentic ones has raised serious concerns over how easily AI can be used to forge official documents.
Historically, creating convincing fake government-issued IDs has been a challenge for cybercriminals. However, GPT-4’s capabilities have significantly simplified this task.
Now, social media users are sharing these AI-generated fake IDs, which are disturbingly close to legitimate documents. Although these fake IDs may lack certain security features, such as scannable QR codes, microtext, and valid issue dates, they can still deceive unsuspecting individuals, leaving them vulnerable to scams and fraud.
In addition to Aadhaar cards, ChatGPT is also being used to generate fake PAN cards, further expanding the potential for fraud. The ability of AI to replicate essential identification documents for both public figures and fictional characters underscores the growing risks posed by this technology.
The advancement of AI, particularly in image and document generation, has introduced new security challenges. While OpenAI has implemented safeguards to prevent the creation of harmful content, users are finding ways to bypass these restrictions. This underscores the need for more stringent regulations to curb the misuse of AI-generated materials. OpenAI itself has acknowledged the distinct risks of its latest models, noting that the autoregressive image-generation approach in ChatGPT introduces vulnerabilities not present in earlier systems such as DALL-E 3.
Even when AI-produced identification documents lack the security features required for official verification, their lifelike appearance can easily deceive individuals, creating opportunities for fraud and scams. Although OpenAI has established content restrictions for sensitive materials, the proliferation of these fake IDs illustrates the broader difficulty of controlling and preventing the misuse of AI technology.
The continuous evolution of AI demands a parallel advancement in AI governance. The capacity to generate highly realistic documents and images presents substantial risks, emphasizing the necessity for regulatory frameworks that keep pace with technological progress. The emergence of fake IDs generated by tools like ChatGPT serves as a critical example of the vulnerabilities that AI introduces, affecting both individual security and societal stability.