
FBI Warns iPhone, Android Users-Do Not Reply To These Messages

Published 23 hours ago · 6 minute read

You have been warned — this nightmare is now real.


Republished on May 18 with additional commentary, advice and resources on defending against these dangerous messages, where normal detection is impossible.

We were warned. Forget looking for telltale signs: the latest set of AI-fueled attacks is so sophisticated that you need to check everything to ensure you’re not being attacked. In the last 24 hours, we have seen Gmail and Outlook users warned that malicious emails are now so “perfect” that they’re impossible to detect, and that calls which seem to come from people we know could be a dangerous deception.

That’s the latest warning to come from the FBI, after the discovery of “an ongoing malicious text and voice messaging campaign.” This has used texts and voice messages purporting to come from “senior U.S. officials,” tricking victims, many of whom are also “current or former senior U.S. federal or state government officials and their contacts.”

The bureau’s warning is serious enough that you are now being told: “If you receive a message claiming to be from a senior U.S. official, do not assume it is authentic.” The goal of the attacks is to steal credentials through links that appear to be message-related.

According to Cofense’s Max Gannon, “it is important to note that threat actors can also spoof known phone numbers of trusted organizations or people, adding an extra layer of deception to the attack. Threat actors are increasingly turning to AI to execute phishing attacks, making these scams more convincing and nearly indistinguishable.”


The FBI’s advice is wider ranging than just this latest attack, and links back to its recent warnings on the proliferation of AI attacks.

All that said, the FBI acknowledges that “AI-generated content has advanced to the point that it is often difficult to identify.” Sometimes it will just come down to common sense. Is this a call I could reasonably expect? Am I being asked to do something that would advantage a cybercriminal or scammer? Can I deduce what they stand to gain? Can I hang up and call back using normal channels? How do I verify the caller?


Ryan Sherstobitoff from SecurityScorecard told me “to mitigate these risks, individuals must adopt a heightened sense of skepticism towards unsolicited communications, especially those requesting sensitive information or urging immediate action.”

Often these texts, calls and voice messages lead to a link. This is the attack, which will phish for credentials or trick you into installing malware. “Do not click on any links in an email or text message until you independently confirm the sender’s identity,” the bureau warns. And “never open an email attachment, click on links in messages, or download applications at the request of or from someone you have not verified.”
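Part of that “verify before you click” advice can be automated. As a minimal illustrative sketch (the allowlist and all domains below are hypothetical, not from the FBI guidance), the idea is to extract every link in a message and flag any host you have not independently verified:

```python
import re

# Hypothetical allowlist: domains this user has independently verified
# (e.g. organizations they already do business with).
TRUSTED_DOMAINS = {"irs.gov", "usps.com", "mybank.example"}

# Capture the host portion of any http(s) URL in the message text.
URL_RE = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)

def flag_suspicious_links(message: str) -> list[str]:
    """Return hostnames in a message that are not on the allowlist."""
    suspicious = []
    for host in URL_RE.findall(message):
        host = host.lower().split(":")[0]  # normalize case, drop any port
        # Accept the trusted domain itself or any of its subdomains.
        if not any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS):
            suspicious.append(host)
    return suspicious

text = "URGENT: verify your account at https://usps-delivery.example-fix.com/track"
print(flag_suspicious_links(text))  # ['usps-delivery.example-fix.com']
```

Note how a lookalike host such as `usps-delivery.example-fix.com` fails the check even though it contains a trusted brand name — exactly the kind of deception the bureau describes. A filter like this is a helper, not a substitute for calling the sender back through a known channel.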

In the wake of the FBI’s latest warning, ESET’s Jake Moore told me “it’s vital people think with a clear head before responding to messages from unknown sources claiming to be someone they know. But with newer, impressive and evolving technology, it is understandable why people are quicker to let down their guard and assume that seeing is believing. Deepfake technology is now at an incredible level which can even produce flawless videos and audio clips cleverly designed to manipulate victims.”

BeyondTrust’s CTO, Marc Maiffret, told me the latest FBI warning flags the risk that “AI-driven impersonation attacks are rising, targeting both individuals and organizations.” Combating these escalating threats, Maiffret says, “requires human vigilance and strong identity security. By enforcing the principle of least privilege, monitoring identity infrastructure, and securing access to sensitive accounts, you limit what attackers can do—even with stolen credentials.”


Darktrace SVP Nicole Carignan told me “the fact that attackers are using generative AI to produce deepfake audio, imagery, text messages, and video is a growing concern, as attackers are increasingly using deepfakes to start sophisticated social engineering attacks.” While the FBI has generated headlines given the nature of the deepfakes and the targets, the greater risk — as also flagged by the bureau — is financial crime.

A new and perfectly timed report from Help Net Security warns “don’t assume anything is real just because it looks or sounds convincing… Remember the saying, seeing is believing? We can’t even say that anymore. As long as people rely on what they see and hear as evidence, these attacks will be both effective and difficult to detect.”

With equally apt timing, Reality Defender put out a new deepfake guide just 72 hours before the FBI issued its warning. “Deepfake threats targeting communications don’t behave like traditional cyberattacks… Instead, they exploit trust. A cloned voice can pass legacy voice biometric systems. A fake video call can impersonate a company executive with enough accuracy to trigger a wire transfer or password reset.”

Deepwatch CISO Chad Cragle has provided some useful pointers to help smartphone users stay safe, given that “AI-powered impersonation attempts, whether via email, phone calls, text, or even deepfake video, are now harder than ever to distinguish from legitimate communication.”

Moore’s advice is more straightforward: “To protect yourself from smishing scams and deepfake content avoid clicking on links in unexpected or suspicious text messages — especially those that create a sense of urgency, even when it looks or sounds like the real deal. Never share personal or financial information via text messages and always verify via trusted communication channels.”


Cybercriminals are now exploiting generative AI and deepfake technology “at an unprecedented scale to execute highly convincing impersonation scams,” Cragle warns. “From low-level attackers to sophisticated nation-state actors, adversaries are leveraging AI-generated voice cloning to manipulate victims over the phone and via text messages. AI-enhanced phishing emails are now so advanced that they mimic corporate language, making it difficult to rely on traditional red flags like poor grammar or generic formatting.” Which is exactly why the FBI has issued so stark a warning.

Six months ago, the FBI advised that “Criminals can use AI-generated audio to impersonate well-known, public figures or personal relations to elicit payments. Criminals generate short audio clips containing a loved one’s voice to impersonate a close relative in a crisis situation, asking for immediate financial assistance or demanding a ransom. Criminals obtain access to bank accounts using AI-generated audio clips of individuals and impersonating them.” That is exactly what we now see here.

As Maiffret says, “AI-based social engineering attacks highlight why identity is one of the most important domains for businesses to secure. Deepfakes, like these, are a great example of the need to treat identity as the new perimeter.”

Origin: Forbes
