Technology moves fast, and so do fraudsters. Innovations in artificial intelligence (AI) have brought advancements in many areas, including medical research, manufacturing and agriculture, as well as in how we interact with technology. However, AI has also created opportunities for bad actors to manipulate what we see and hear for their own benefit. Let’s explore how some of these innovations are at the heart of devious schemes.
Online Chat: Fraudsters Are Impersonating Friends & Family with AI
Talking to friends online is great. You can interact with people all over the world! But how can you be sure that you’re actually talking with your friend? Unfortunately, scammers often use chat bots to engage with their victims through compromised social media profiles.
These chat bots are becoming increasingly intelligent and can even replicate natural speech patterns. So, even if it looks like you’re chatting with your friend’s account, there’s a chance that it’s not really them. If you have doubts about whether you’re interacting with a bot or your friend, try asking questions about something only your friend would know, like an inside joke or a shared memory. If they struggle to respond accurately, it could be a red flag.
Phishing Emails and Fake Online Posts
While AI is helping fraudsters impersonate real people, it’s also powering more convincing phishing emails. AI has given scammers new opportunities to carry out highly effective, targeted phishing scams. By leveraging advanced language generation tools, they can make phishing emails appear more authentic and persuasive.
With AI, fraudsters can easily analyze data that provides insight into behavioral patterns. Using this information, they can craft personalized phishing emails that include specific details about the recipient, making them harder to identify as fraudulent. In addition, AI can optimize the formatting of phishing emails to closely resemble genuine ones and play on emotions and urgency to elicit impulsive responses.
How can you protect yourself? It is important to stay cautious when interacting with unsolicited emails. Verify the authenticity of email senders, which can be done by hovering your cursor over the “from” display name to see the associated email address. Avoid clicking on suspicious links or downloading attachments from unfamiliar sources. Regularly update your computer’s security software to stay protected from potential threats.
If you are approached with a “too good to be true” investment offer, it is important to exercise caution and thoroughly review any articles or websites associated with it. Could it be AI-generated? Does it make sense? Take the time to verify the legitimacy of the research by cross-referencing it with reputable and trustworthy sources.
AI-Generated Video and Audio for Fraud
Fraudsters are leveraging generative AI for highly deceptive purposes. Generative AI can produce fabricated videos, phone calls, and voice-overs in real time, helping fraudsters impersonate a person’s image and voice and add a sense of urgency to phone calls. This manipulated media, known as “deepfakes,” involves the alteration of images, videos, or audio to create a realistic likeness of the person being impersonated.
Deepfakes can be exploited for a range of fraudulent purposes, including imposter scams. Fraudsters may use fabricated videos or phone calls to impersonate individuals and initiate wire transfers or other transactions. Deepfakes can also be used in social engineering tactics to pressure victims into urgently sending funds, such as in the prevalent “grandchild in jail” scams.
It is crucial to exercise caution whenever you are pressed with a sense of urgency. Remain vigilant and verify the authenticity of any suspicious media or communication, for example by contacting the person directly using a phone number you already know to be theirs.
Our team is here to help protect you from these types of fraud and schemes. To learn more about these and other scams, visit our fraud protection resource center.