The FBI warned that cybercriminals are using AI-generated audio deepfakes to target U.S. officials in voice phishing attacks that started in April.
This warning is part of a public service announcement issued on Thursday that also provides mitigation measures to help the public spot and block attacks using audio deepfakes (also known as voice deepfakes).
“Since April 2025, malicious actors have impersonated senior US officials to target individuals, many of whom are current or former senior US federal or state government officials and their contacts. If you receive a message claiming to be from a senior US official, do not assume it is authentic,” the FBI warned.
“The malicious actors have sent text messages and AI-generated voice messages — techniques known as smishing and vishing, respectively — that claim to come from a senior US official in an effort to establish rapport before gaining access to personal accounts.”
The attackers can gain access to U.S. officials' accounts by sending malicious links disguised as invitations to move the conversation to another messaging platform.
Once an account is compromised, the threat actors can harvest other government officials' contact information. They can then use social engineering to impersonate the compromised officials, steal further sensitive information, and trick targeted contacts into transferring funds.
Today’s PSA follows a March 2021 FBI Private Industry Notification (PIN) [PDF] warning that deepfakes (including AI-generated or manipulated audio, text, images, or video) would likely be widely employed in “cyber and foreign influence operations” after becoming increasingly sophisticated.
One year later, Europol cautioned that deepfakes could soon become a routine tool for cybercriminal groups in CEO fraud, non-consensual pornography creation, and evidence tampering.
The U.S. Department of Health and Human Services (HHS) also warned in April 2024 that cybercriminals were targeting IT help desks in social engineering attacks using AI voice cloning to deceive targets.
Later that month, LastPass revealed that unknown attackers used deepfake audio to impersonate Karim Toubba, the company’s Chief Executive Officer, in a voice phishing attack targeting one of its employees.