Artificial Intelligence (AI) in 2025: Spotting the Nefarious
Sam Lewis, Senior Security Consultant

AI can be entertaining and powerful, but it can also be exploited.
Artificial Intelligence (AI) in computing was first predicted in the 1930s and formally born in 1951. That early AI program was created to play checkers (of all games) and could complete a full match “at a reasonable speed” (Copeland, 2025). There’s no mention of how often it won, only that it could finish a game. Fast forward to 1997, when IBM’s Deep Blue defeated world chess champion Garry Kasparov in a rematch, having lost the first match in 1996 (Deep Blue, n.d.).
Jump ahead again to 2022, and we see the launch of ChatGPT. Social media catapulted it into the spotlight, with users sharing its capabilities in real time (Marr, 2023). Just two years later, in 2024, a finance worker paid out $25.6 million after a video call with what appeared to be their CFO and familiar colleagues. Sadly, the employee had unknowingly joined a teleconference full of deepfakes (Chen, 2024).
Less than a century after the idea of AI was introduced, it is already being exploited for malicious purposes. This post, written in 2025, aims to help readers recognize and guard against those abuses.
Some AI technologies, now freely or cheaply available, can be used to deceive. The term “deepfake” was coined in 2018 to describe “an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something they did not” (Deepfake, 2025).
This is often accomplished using GANs, or generative adversarial networks. Simply put, GANs create realistic images from user input. A bit more technically: a GAN pits a generator, which creates content, against a discriminator, which tries to tell the generated content apart from real-world examples it has learned from. The two are trained together, and the generator keeps improving until its fakes are hard to tell from the real thing.
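For the technically curious, that adversarial loop looks roughly like this in code. Below is a minimal PyTorch sketch, not a real image model: the layer sizes, learning rates, and the random “real data” stand-in are all illustrative assumptions.

```python
# Minimal GAN sketch: a generator turns random noise into fake samples
# while a discriminator learns to tell fakes from real ones.
# Sizes, learning rates, and the "real data" are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),          # fake sample in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),              # probability "this is real"
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(200):
    real = torch.randn(32, data_dim)              # stand-in for real images
    fake = generator(torch.randn(32, latent_dim))

    # Discriminator step: label real samples 1 and generated samples 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator call its fakes "real".
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The design point to notice is the tug-of-war: each network’s loss is defined by how well it fools, or catches, the other.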
Another emerging technology is the diffusion model. This process adds random noise to data samples and then reverses the process to create images or video. If that sounds confusing, it’s because it’s based on physics. As Bergmann and Stryker explain:
“Treating pixels like the molecules of a drop of ink spreading out in a glass of water over time… By modeling that diffusion process, then somehow learning to reverse it, an artificial intelligence model can generate new images by simply ‘denoising’ samples of random noise” (Bergmann & Stryker, 2024).
Clear? Maybe. Maybe not. But know this: diffusion models can turn text into images, video, and even voice. Wild, right?
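If a concrete toy helps, the forward (noising) half of the process fits in a few lines. This sketch is a simplified illustration, assuming a common linear noise schedule and a random array standing in for an image; a real diffusion model trains a neural network to run the process in reverse.

```python
# Toy sketch of the "forward" half of a diffusion model: progressively
# drown a clean sample in Gaussian noise. A real model trains a network
# to predict (and remove) that noise, step by step, in reverse.
# The schedule values and the "image" here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.uniform(-1, 1, size=64)        # stand-in for a clean image

T = 1000
betas = np.linspace(1e-4, 0.02, T)      # linear noise schedule (assumed)
alpha_bar = np.cumprod(1.0 - betas)     # cumulative "signal kept" factor

def noisy_at(t):
    """Jump straight to step t of the forward (noising) process."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1 - alpha_bar[t]) * eps

for t in (0, 250, 500, 999):
    xt = noisy_at(t)
    print(f"step {t:4d}: signal kept ~{np.sqrt(alpha_bar[t]):.3f}, "
          f"sample std {xt.std():.3f}")
# By step 999 the sample is essentially pure noise; generation runs the
# process in reverse, "denoising" random noise into a brand-new image.
```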
So how can we fight back against deepfakes? Start by looking at past examples. Remember “AI Will Smith eating spaghetti” from 2023? That was only two years ago, when AI video was still glitchy and awkward. In 2025, the fakes are becoming far more convincing, but there are still tells.
When viewing a suspicious image or video, watch for:
- Inconsistent shadows (lighting that shifts unnaturally)
- Unnatural facial features (weird eyes, ears, teeth, or blinking)
- Blurry or warped extremities (like hands morphing in strange ways)
- Text errors (often complete gibberish)
- Lip-sync issues (off-timed or robotic)
- Stiff or jerky motion (lacking fluidity)
Then there are the outlandish scenarios, like the “POV: You wake up in [insert strange historical event]” trend. Some are incredibly well produced. For example, “POV: You Wake Up In 1351 During the Black Plague” has over 18 million views (Aira, 2025).
Some AI-generated media may pass all the visual tests. But you can still validate it with reverse image searches and metadata analysis before reposting (a short metadata-checking sketch follows the list below). Google Images can help with reverse lookups, and tools like InVID or FotoForensics can reveal hidden data. Other helpful tools include:
- Hive Moderation (detects AI-generated images and nudity)
- Reality Defender (scans for deepfakes and altered media)
- Deepware Scanner (analyzes videos for deepfake signs)
- Sensity AI (enterprise-level detection of visual misinformation)
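To make “metadata analysis” concrete, here is a minimal sketch using the Pillow library to dump an image’s EXIF tags. Many AI-generated images carry no camera EXIF at all; its absence is a hint rather than proof, since screenshots and re-encoded images also strip it. The file name is a hypothetical example.

```python
# Quick EXIF check with Pillow: phone and camera photos usually carry
# make, model, and timestamp tags, while AI output often has none.
# Absence of EXIF is a signal, not proof; re-encoding also strips it.
from PIL import Image, ExifTags

def summarize_exif(path):
    exif = Image.open(path).getexif()
    if not exif:
        print(f"{path}: no EXIF data (common for AI output or screenshots)")
        return
    for tag_id, value in exif.items():
        tag = ExifTags.TAGS.get(tag_id, tag_id)  # human-readable tag name
        print(f"{tag}: {value}")

summarize_exif("suspicious_photo.jpg")  # hypothetical file name
```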
If you’re short on time, you can still protect yourself by asking a few key questions:
- Is the source reputable?
- Does the content seem questionable?
- Has this source been trustworthy in the past?
If there’s even a shadow of doubt, assume it might be AI-generated. Verify it using trusted news sources.
There’s also a more emotional angle to AI misuse, particularly in voice-based scams. One troubling trend involves fake phone calls that sound like a loved one in distress, asking for urgent money transfers. These so-called grandparent scams often spoof caller IDs and ask the recipient not to alert “mom and dad,” exploiting emotion (Grandparent Scams, 2025).
One way to protect against this is to establish a family code word: a phrase known only to trusted relatives and used only in emergencies (Family Safety Code Words, n.d.). That way, if grandma is woken up at 3 a.m. by “Jimmy” claiming to need bail money, she can ask for the code word. No code word? No money. She can then follow up with the family to confirm everyone is safe.
Hopefully, your takeaway from this post is this: AI can be entertaining and powerful, but it can also be exploited. From spaghetti glitches to medieval reenactments, AI-generated content is evolving rapidly. With that progress comes the responsibility to stay alert and informed. The tips above are just a few ways to recognize and resist AI abuse.
References
Aira, S. (2025, February). From the Titanic to the plague, here’s an explainer of the POV wake up AI TikTok trend. Retrieved from The Tab: https://thetab.com/2025/02/18/from-the-titanic-to-the-plague-heres-an-explainer-of-the-pov-wake-up-ai-tiktok-trend
Bergmann, D., & Stryker, C. (2024, August 21). What are diffusion models? Retrieved from IBM: https://www.ibm.com/think/topics/diffusion-models
Chen, H. (2024, February 4). Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’. Retrieved from CNN World: https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk
Copeland, B. (2025, April 19). History of Artificial Intelligence (AI). Retrieved from Britannica: https://www.britannica.com/technology/Turing-test
Deep Blue. (n.d.). Retrieved from IBM: https://www.ibm.com/history/deep-blue
Deepfake. (2025, June 10). Retrieved from Merriam-Webster: https://www.merriam-webster.com/dictionary/deepfake
Family Safety Code Words. (n.d.). Retrieved from Revved Up Kids: https://revvedupkids.org/safety-code-words/
‘Grandparent’ Scams Get More Sophisticated. (2025, March 6). Retrieved from FCC: https://www.fcc.gov/consumers/scam-alert/grandparent-scams-get-more-sophisticated
Marr, B. (2023, May 19). A Short History Of ChatGPT: How We Got To Where We Are Today. Retrieved from Forbes: https://www.forbes.com/sites/bernardmarr/2023/05/19/a-short-history-of-chatgpt-how-we-got-to-where-we-are-today/