You’ve probably heard the saying, “Don’t believe everything you read on the internet.” With the rise of Artificial Intelligence (AI), we can now expand that to, “Don’t believe everything you see or hear.”
AI can be a helpful tool that enhances efficiency, security, and customer experiences. In the hands of criminals, however, it can increase the scale, impact, and sophistication of fraudulent tactics. Criminals use AI to impersonate business executives, send phishing emails or text messages that appear to come from friends or family, and carry out various other malicious activities. Deepfakes are one scheme that has been on the rise.
According to the Oxford English Dictionary, a deepfake is “any of various media, especially a video, that has been digitally manipulated to replace one person’s likeness convincingly with that of another, often used maliciously to show someone doing something that he or she did not do.” In other words, it is a fake recreation of someone’s face or voice that is used to mislead others. With just a short voice snippet or a photo of an individual, AI tools can produce an audio, video, or photo file that falsely portrays that person.
Some deepfakes are easier to detect than others. Less sophisticated deepfakes will have visual anomalies, such as a hand with six fingers. As AI has improved, though, it has become harder to spot fake content based on obvious mistakes alone. In many deepfake photos or videos, people tend to have an electronic sheen, making their skin look unnaturally smooth. This over-polished veneer can serve as a warning signal. In videos, watch closely to see if the audio lines up with the visuals, analyze the consistency of lighting and shadows, and inspect the details in the background. Irregularities in any of these areas may indicate a deepfake.
One way criminals use deepfake technology is by tricking unsuspecting individuals into sending money to their accounts. They may create an AI-generated audio clip of a CEO instructing you to make a wire transfer to a certain account. Once the transfer is complete, the money is in the criminals’ hands.
Because deepfakes continue to advance in complexity and realism, practice these safety tips:
If you receive an unexpected phone call or email from someone asking you to send money, verify the request directly with that individual. If your “CFO” asks you to release $2 million to a certain account, reach out to him or her at a trusted phone number on file. If you conclude the request was fraudulent, report it to the Federal Trade Commission (FTC) at https://reportfraud.ftc.gov/.
When you are on the internet and come across a photo or video that seems outrageous, surprising, or too good to be true, take a minute to investigate it. Look for the signs of a deepfake to determine if it might be AI-generated.
To learn more about the impact of artificial intelligence in cybersecurity, read our previous blog about AI in Cybercrime.
© 2025 West Bank. All Rights Reserved. Member FDIC.
Equal Housing Lender.