Deepfakes Can Be a Crime - Teaching AI Literacy Can Prevent It
Imagine you're 14 years old, just a few years younger than me. Like any other kid your age, you go to school, play sports, and hang out with your friends. The most stressful thing you have to worry about is that big essay due next week—until you wake up one morning to texts from your friends all asking the same thing:
Have you seen the naked picture of you on Snapchat?
And just like that, your world is turned upside down.
This happened to Elliston Berry two years ago. She had never taken a nude photo of herself, but a classmate had used artificial intelligence to "nudify" a photo from her Instagram. As her mom said, "My daughter's innocence was shattered and her eyes were opened to the reality of how cruel a person can be."
With more and more kids across the country using AI for school, hobbies, and entertainment, the technology is redefining how my generation learns and interacts with one another. But as AI becomes easier to use, it also becomes easier to abuse. For proof, look no further than the rise of deepfakes: AI-generated images, often explicit, that look just like real people.
While creating this type of content used to require technical expertise and dedicated tools, today anyone with an internet connection can use AI to turn any photo, from yearbook pictures and selfies to posts on social media, into a nude image. In fact, a disturbing number of apps exist to do just this. They're known as "nudification" apps, and they're frequently advertised, even to minors, on mainstream platforms like those owned by Meta, which are responsible for 90% of web traffic to these tools. In response, Congress recently passed the TAKE IT DOWN Act, a landmark law that helps victims remove nonconsensual explicit content (both authentic and AI-generated) from social media platforms. This legislation is a critical step toward protecting young people, but it's also indicative of how severe the problem has become.
While nonconsensual deepfakes of celebrities like Taylor Swift have made headlines and have even been used in ads for these apps, the problem reaches much further, affecting ordinary people who lack the resources to fight these gross invasions of privacy online and in court. Earlier this year, a disgruntled school staff member in Maryland allegedly circulated an AI-generated deepfake of the principal spewing racial slurs. This wasn't just a prank; it was an AI-enabled attack on a colleague's career and reputation. In another incident, New Jersey high school students used AI to create and post nude images of their classmates. In New York, boys created AI pornography of a girl in their class. Similar incidents have occurred in Texas, Louisiana, Washington, and other states, and tragically, more than half of the victims were under 18. The weaponization of AI is a new kind of violence, and it doesn't require the victim to have ever taken or shared an explicit photo. The old saying "if you don't want a photo leaked, don't take one" no longer applies. Anyone who has ever posted a photo of themselves is a potential victim.
By passing the TAKE IT DOWN Act, Congress has taken an important first step toward addressing this crisis. The law makes it a crime to post deepfakes without consent and requires social media companies to remove these images from their platforms within 48 hours. In today's digital world, where images can go viral in seconds and live online forever, every second matters. By requiring companies to act, the law gives victims a way to take back control.
Under the law, minors who create highly realistic, nonconsensual images of other minors that depict nudity or simulate sexual content, whether through AI deepfakes or traditional photo-editing tools like Photoshop, could face the same penalties as adult offenders. Depending on the nature of the content, that could even include charges for the creation, possession, or distribution of child sexual abuse material.
Many young people may not realize that "just messing around" with AI tools to generate fake nudes of classmates or others can cross legal lines, with life-altering consequences. In addition to protecting kids from AI-enabled privacy violations, we also need to make sure they understand the ethical, legal, and emotional consequences of irresponsible AI use; that's where AI literacy comes in. Teaching young people how these tools work, what they're capable of, and the very real consequences of misuse is essential to prevention. Without that foundation, they're left to navigate powerful technology with little understanding of the risks.
While some states have started to integrate AI literacy into school curricula, it's far from universal. That means the responsibility can't fall solely on educators—all of us have a role to play. We should be talking openly about these tools, how they're used, and how they can cause harm. The passage of the TAKE IT DOWN Act is an important step, but it's just that: a step. Encouraging safe AI use isn't just about regulating platforms or punishing offenders—it's about making sure kids have the knowledge they need to stay safe in a digital age where the stakes are higher than ever.
Brooke Lieberman is a Teen Advocate with Common Sense Media, where she gives a youth perspective on digital well-being and online safety and is a member of the Teen Press Corps. Brooke is a senior in high school and previously served as the Student Member of the Frederick County Public Schools Board of Education and as Chair of the Inaugural Frederick Youth Council. Brooke is passionate about empowering youth voices and shaping a safer, more equitable digital future. Brooke is enrolled at the University of Wisconsin-Madison.