Generative AI Makes Social Engineering More Dangerous and Harder to Detect
Stephanie Carruthers
Chief People Hacker for IBM X-Force Red
“If there’s one job that generative AI can’t steal, it’s con artist.”
That’s how Stephanie Carruthers, IBM’s Global Lead of Cyber Range and Chief People Hacker, recalls feeling back in 2022. ChatGPT had recently brought generative artificial intelligence into the public consciousness. Its combination of eerily human language skills and deep knowledge base had many wondering how it might change the world.
And how it might change their jobs.
“When it was first introduced, people kept asking me: ‘Are you scared AI is going to take your job?’” Carruthers says. “I thought it couldn’t. We would have to get to the point where AI could really understand a person and build a custom campaign against them before that happened.”
As part of IBM® X-Force®, Carruthers runs mock social engineering schemes and cyberattacks to help companies strengthen their defenses against the real thing. Early generative AI models could cook up some fairly generic phishing scams, but they couldn’t do the sophisticated attacks that cause serious damage. Those schemes require deep research, careful planning and highly targeted pretexts.
But a lot can happen in two and a half years. Today, many large language models (LLMs) can search the web in real time. AI agents, capable of autonomously designing workflows and performing tasks, can take it a step further by using the information they uncover to inform their actions.
It no longer seems like hyperbole to imagine an AI-based bot that can perfectly tailor social engineering attacks to specific individuals. All it needs is a threat actor to set it in motion.
“We’ve reached the point where I am concerned,” Carruthers says. “With very few prompts, an AI model can write a phishing message meant just for me. That’s terrifying.”
According to the 2025 IBM X-Force Threat Intelligence Index, threat actors today are, on average, pursuing bigger, broader campaigns than they have in the past. This development is partly a matter of changing tactics, as many attackers have shifted their focus to supply-chain attacks that affect many victims at once.
But it is also a matter of changing tools. Many attackers have adopted generative AI as a kind of intern or assistant, using it to build websites, generate malicious code and even write phishing emails. In this way, AI helps threat actors carry out more attacks in less time.
“The AI models are really helping attackers clean up their messages,” Carruthers says. “Making them more succinct, making them more urgent—making them into something that more people fall for.”
Carruthers points out that bad grammar and awkward turns of phrase have long been among the most common red flags in phishing attempts. Cybercriminals tend not to be assiduous with spellcheck, and they’re often writing in a second or third language, which leads to more errors overall.
But generative AI tools can generate technically perfect prose in virtually all major world languages, concealing some of the most obvious social-engineering tells and fooling more victims.
AI can also write those messages much faster than a person can. In experiments by Carruthers and the X-Force team, generative AI wrote an effective phishing email in five minutes. A team of humans needed about 16 hours to write a comparable message, with deep research on targets accounting for much of that time.
Consider, too, that deepfake technology allows AI models to create fake images, audio and even video calls, lending further credibility to their schemes.
In 2024 alone, Americans lost USD 12.5 billion to phishing attacks and other fraud. That number might rise as more scammers use generative AI to create more convincing phishing messages, in more languages, in less time.
And with the arrival of AI agents, fraudsters can scale their operations even further.
Research often makes the difference between a failed cyberattack and a successful one. By researching their targets—organizations or individuals—threat actors can craft perfectly tailored plans, draft stories that expertly tug the right heartstrings and develop malware that pries at the right vulnerabilities.
And attackers can find much of the information they need online.
“You can learn so much about an individual just by looking at their social media, at the company's website, anywhere on the open web, really,” Carruthers says. “There’s so much information that people put in blog posts, press releases, the media and even job posts.”
Job posts are a good example of how attackers can turn seemingly innocuous information against their victims.
“By reading your job post, I might learn what your tech stack looks like and who your vendors are,” Carruthers explains. “Now, I can customize my malware to your environment. I know which vendors I can pretend to be from.”
Experts such as Carruthers worry that with AI agents, which can design workflows and use tools to achieve complex objectives, attackers might automate more than phishing emails and fake websites.
Attackers can theoretically use AI agents to collect information, analyze it, formulate a plan of attack and generate scam messages and deepfakes for use in the attack.
That process is much bigger than generating variations of the same phishing message with different writing styles for different targets. It’s a scaled-up form of highly targeted spear phishing, widely considered the most effective form of social engineering.
Cybersecurity experts have not yet detected malicious AI agents in the wild in any meaningful numbers, but it might just be a matter of time. A recent story from MIT Technology Review quotes Mark Stockley of Malwarebytes as saying, “I think ultimately we’re going to live in a world where the majority of cyberattacks are carried out by agents. It’s really only a question of how quickly we get there.”
In the age of generative AI, many traditionally reliable defenses against social engineering attacks no longer work.
“The first thing that we're teaching our employees is to look out for bad grammar, typos, those kinds of things,” Carruthers says. “That's not really a thing anymore for sophisticated attackers who are using AI.”
If these stylistic red flags no longer work, one option is to shift the focus of security awareness training to more substantive discussions of social engineering tactics.
Social engineers rely heavily on impersonation, misinformation and emotional manipulation, as Carruthers has covered before. AI-powered scams might make fewer spelling mistakes, but they still rely on the same tropes and patterns as classic social engineering attacks. If employees learn to spot these telltale signs, they can thwart more cybercrimes.
Social engineers prey on human emotions and instincts, such as curiosity, fear, the desire to help and the desire to fit in. Scammers calibrate their messages to ignite these feelings: “I need you to send this money right now, or something extremely bad is going to happen.”
Most legitimate workplace interactions carry much less emotional charge. Jobs can be stressful, coworkers can be passive-aggressive and bosses can be demanding. But most people try to maintain at least a modicum of politeness.
Requests—even urgent ones—tend to take a more even-keeled tone: “Hey, can you make sure that this invoice gets paid today? We’re late due to a clerical error, and I don’t want to tick off the vendor.”
A significant request, delivered with intense emotion, should be a sign to stop and think.
Social engineers are storytellers, and they tend to stick to a few tried-and-true plot hooks. One of the most common: “I’m with a service or brand you trust, and we have an amazing offer for you. Act now to claim it.”
That said, the craftiest attackers personalize their stories as much as possible. Instead of stopping at a high-level overview, Carruthers recommends that organizations align their security training to specifically address the kinds of cyber threats their employees are most likely to face.
“Reevaluate what your security awareness training looks like in the context of what attacks are actually happening at your organization today,” Carruthers says. “Are you getting specific types of scam phone calls? Incorporate those calls into your training.”
Along with reworking training content, Carruthers recommends delivering training more often. Doing so can help the lessons stick and keep valuable tips fresh in people’s minds—making it more likely that they actually use the security measures they learn.
“Employees are typically attacked first to compromise an entire organization,” Carruthers says. “If we're giving them one hour of training once a year, is that really enough?”
People can catch more attacks in progress by watching for these red flags. But they can stop some attacks from happening altogether by limiting what they post.
“It’s really important for both individuals and organizations to be cognizant about what you put online,” Carruthers says.
In social engineering terms, the power of AI technology and LLMs comes from their ability to dig up and analyze large amounts of information on targets. If there’s no such information to be found, AI tools can’t craft tailored attacks, making it harder for them to fool victims, no matter how clean their prose is.
Carruthers points out that “avoid oversharing” is common advice, but the tip is often interpreted narrowly as “don’t post confidential information online.” Yet scammers can use even nonsensitive information to make their attacks more convincing.
“A lot of job posts lay out a little bit of an org chart: ‘This role reports to this role, and has these roles reporting to it,’” Carruthers explains. “That’s valuable information. I have a sense of what your organization looks like. I know what titles to use and what role I should pretend to have.”
While individuals can maintain strict privacy settings on their social media accounts, that approach isn’t very practical for organizations. But businesses can be more circumspect about what they put out there, such as by blurring employee badges in photos.
“Most people don’t think it’s a big deal to find a picture of an employee badge,” Carruthers says. “From a social engineering aspect, I can now replicate the look of that badge, which makes it a lot easier to get into a building I don’t belong in.”