© Zeal News Africa

Is AI the New Con Artist? Unmasking Social Engineering 2.0

Published 1 month ago · 4 minute read

Once the domain of elite spies and con artists, social engineering is now in the hands of anyone with an internet connection, and AI is the accomplice.

Supercharged by generative tools and deepfake technology, today’s social engineering attacks are no longer sloppy phishing attempts. They’re targeted, psychologically precise, and frighteningly scalable.

Welcome to Social Engineering 2.0, where the manipulators don’t need to know you personally. Their AI already does.

Social engineering is effective because it bypasses firewalls and technical protections entirely; it exploits human trust. Until recently, these scams relied on generic hooks and crude deception.

Collard states that AI is automating social engineering, eliminating traditional phishing markers like spelling errors and bad grammar. “AI can mimic writing styles, generate emotionally resonant messages, and even recreate voices or faces—all within minutes,” she explains.

The result? Cybercriminals now wield the capabilities of psychological profilers. By scraping publicly available data, from social media posts to company bios, AI can construct detailed personal dossiers. “Instead of one-size-fits-all lures, AI enables criminals to create bespoke attacks,” Collard explains. “It’s like giving every scammer access to their digital intelligence agency.”

One of the most chilling evolutions of AI-powered deception is the rise of deepfakes—synthetic video and audio designed to impersonate real people. “There are documented cases where AI-generated voices have been used to impersonate CEOs and trick staff into wiring millions,” notes Collard.

In South Africa, a recent deepfake video circulating on WhatsApp featured a convincingly faked endorsement by FSCA Commissioner Unathi Kamlana promoting a fraudulent trading platform. Nedbank had to publicly distance itself from the scam.

“We’ve seen deepfakes used in romance scams, political manipulation, and even extortion,” says Collard. One emerging tactic involves simulating a child’s voice to convince a parent they’ve been kidnapped—complete with background noise, sobs, and a fake abductor demanding money.

“It’s not just deception anymore,” Collard warns. “It’s psychological manipulation at scale.”

One cybercrime group exemplifying this threat is Scattered Spider. Known for its fluency in English and deep understanding of Western corporate culture, this group specializes in highly convincing social engineering campaigns. “What makes them so effective,” notes Collard, “is their ability to sound legitimate, form quick rapport, and exploit internal processes, often tricking IT staff or help-desk agents.”

Their human-centric approach, amplified by AI tools such as audio deepfakes that spoof victims’ voices to gain initial access, shows how the combination of cultural familiarity, psychological insight, and automation is redefining what cyber threats look like. It’s not just about technical access—it’s about trust, timing, and manipulation.

What once took skilled con artists days or weeks of interaction—establishing trust, crafting believable pretexts, and subtly nudging behavior—can now be done by AI in the blink of an eye. “AI has industrialized the tactics of social engineering,” says Collard. “It can perform psychological profiling, identify emotional triggers, and deliver personalized manipulation with unprecedented speed.”

The classic stages—reconnaissance, pretexting, and rapport building—are now automated, scalable, and tireless. Unlike human attackers, AI doesn’t get sloppy or fatigued; it learns, adapts, and improves with every interaction.

The biggest shift? “No one has to be a high-value target anymore,” Collard explains. “A receptionist, an HR intern, or a help-desk agent—all may hold the keys to the kingdom. It’s not about who you are—it’s about what access you have.”

In this new terrain, technical solutions alone won’t cut it. “Awareness has to go beyond ‘don’t click the link,’” says Collard. She advocates building ‘digital mindfulness’ and ‘cognitive resilience’—the ability to pause, interrogate context, and resist emotional triggers.

In practice, this means slowing down and questioning unexpected or emotionally charged requests before acting on them.

Collard recommends unconventional tactics, too. “Ask HR interviewees to place their hand in front of their face during video calls—it can help spot deepfakes in hiring scams,” she says. Families and teams should also consider pre-agreed code words or secrets for emergency communications, in case AI-generated voices impersonate loved ones.
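As a toy illustration of the code-word idea (the word, names, and logic here are hypothetical, not from the article): a minimal Python check that compares what a caller says against the pre-agreed family secret, using a constant-time comparison so the check itself doesn't leak timing information.

```python
import hmac

# Hypothetical pre-agreed secret; in practice this would never be
# written down where an attacker could scrape it.
AGREED_CODE_WORD = "blue-heron"

def caller_verified(spoken_word: str) -> bool:
    """Return True only if the caller produced the agreed code word.

    Normalizes whitespace and case, then compares in constant time
    via hmac.compare_digest.
    """
    return hmac.compare_digest(spoken_word.strip().lower(), AGREED_CODE_WORD)
```

The real defense, of course, is the human protocol (agreeing on the word in advance and refusing to act without it); the code merely sketches the verification step.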

While attackers now have AI tools, so too do defenders. Behavioral analytics, real-time content scanning, and anomaly detection systems are evolving rapidly. But Collard warns, “Technology will never replace critical thinking. The organizations that win will be the ones combining human insight with machine precision.”
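To make the behavioral-analytics idea concrete, here is a deliberately minimal sketch, not a production defense: flagging an event that deviates sharply from a user's historical baseline using a simple z-score. The feature (login hour) and the threshold are illustrative assumptions; real systems combine many signals.

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], new_value: float,
                 threshold: float = 3.0) -> bool:
    """Flag new_value if it lies more than `threshold` standard
    deviations from the mean of the historical observations."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        # No variation in history: anything different is anomalous.
        return new_value != mu
    return abs(new_value - mu) / sigma > threshold

# Hypothetical example: a user's logins cluster around 9 a.m.
login_hours = [8.5, 9.0, 9.2, 8.9, 9.1, 9.3, 8.8, 9.0]
is_anomalous(login_hours, 3.0)  # a 3 a.m. login stands out
is_anomalous(login_hours, 9.1)  # a typical login does not
```

The point of the sketch is the asymmetry Collard describes: the machine supplies tireless baseline monitoring, while a human still has to decide what a flagged event means.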

And with AI lures growing more persuasive, the question is no longer whether you’ll be targeted but whether you’ll be prepared. “This is a race,” Collard concludes. “But I remain hopeful. If we invest in education, in critical thinking and digital mindfulness, and in the discipline of questioning what we see and hear, we’ll have a fighting chance.”

Origin: IT News Africa | Business Technology, Telecoms and Startup News