The FBI has issued a public service announcement urging smartphone users to create a secret code word to combat AI-generated scams. The recommendation comes as reports reveal a rise in cyber fraud that leverages generative AI to make deception more convincing. Security experts say these tools can produce highly polished messages, making it difficult to discern genuine communications from forgeries. As a precaution, the FBI has also advised users to stop using Rich Communication Services (RCS) for cross-platform texts, as these do not offer end-to-end encryption.
FBI warns smartphone users of rising AI-generated scams
In public service alert I-120324-PSA, the FBI reported that cyber criminals are increasingly using generative AI in phishing schemes. These tools allow attackers to produce realistic emails and messages, reducing the chance that potential victims will recognize them as fraudulent. For example, AI can generate error-free text, removing the spelling and grammar mistakes that were once telltale signs of a scam. As a result, victims may be more likely to reveal sensitive information.
The FBI outlined several alarming ways generative AI can facilitate cyber attacks. These include generating photos to build convincing fake identities, using celebrity images to promote fraudulent schemes, and producing audio clips that mimic loved ones asking for financial help. AI can also power real-time video chats featuring individuals who claim to be company executives or law enforcement personnel, further blurring the line between reality and deception.
To safeguard against these threats, the FBI emphasizes the importance of verification. Users are advised to hang up on suspicious calls and independently verify the caller's identity using contact information they find themselves. Agreeing on a secret word with family members can serve as a protective measure against fraudulent emergency calls, giving recipients a quick way to validate any urgent request for help.
Recent reports indicate that generative AI is being used in an increasingly diverse range of cyber scams, from tech support fraud to banking fraud. Investigators have noted that AI-driven tactics are increasingly aimed at smartphone communications, signaling a significant shift in the cybersecurity threat landscape.
Generative AI is also blurring the lines of authenticity across popular communication platforms. Because of inherent weaknesses in systems such as RCS, Apple and Android users need to be particularly cautious with cross-platform text messages, which lack end-to-end encryption. Using encrypted messaging services, such as WhatsApp, has therefore become more important than ever.
Featured image credit: David Trinks