AI Literacy Lab
Evidence Base
Case studies are not about memorizing rumors but about learning a repeatable workflow: source, time, visuals, context, and response.
  • MIT reminds us that novelty and emotion help falsehood outrun facts, so case learning must start with why people get pulled in.
  • TFC makes local cases and workflows visible so learners can practice how to verify, not just memorize verdicts.
  • WITNESS shows that synthetic media harms evidence, context, and trust together — not just individual files.
Chapter 07 · Real-World Cases

Real-World Practice:
Judging AI Credibility from Classrooms to Workplaces and Scams

AI literacy ultimately has to return to real situations. Students deal with assignments and research, professionals with summaries and meeting notes, and social media users with scam ads and fake media. Each scenario has different judgment criteria. This chapter uses real cases to practice deciding what can be used, what must be verified, and what should never be accepted at face value.

📋 How to Read This Chapter
  • For each case, read the "Background" first and try to judge: Real / Fake / Mixed?
  • Then read the "Verification Process" and "Final Determination" sections, and compare your judgment with the actual outcome
  • Pay special attention to "Tools Used" and "Detection Key" in each case — these are directly applicable skills

Political Manipulation Cases

MIXED · Partial Real · False Context
Taiwan 2024 Presidential Election Deepfake Audio Ads

Background: On the eve of Taiwan's 2024 presidential election, multiple deepfake items involving candidates circulated widely. One of the most noted cases: a voice clip resembling candidate Lai Ching-te claiming to "change his established cross-strait policy position," clearly contradicting his known stance — circulating widely in certain communities.

Taiwan FactCheck Center researchers verified the clip by: ① comparing its voice features (prosody patterns, voiceprints) against a large corpus of known authentic Lai Ching-te speech recordings; ② tracing the claimed "speaking occasion" to its source and finding no record of journalists attending or of a live broadcast; ③ running AI voice forensics, which showed prosodic variation during emotional passages significantly below normal, a typical signature of AI voice synthesis; ④ confirming that the candidate's campaign office denied making the statement.
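Step ③ above, below-normal prosodic variation, can be approximated numerically. A minimal sketch, assuming you already have a per-frame pitch contour in Hz (real forensics extracts this with a pitch tracker over the actual audio; the arrays below are illustrative stand-ins):

```python
import numpy as np

def prosody_variation(pitch_hz: np.ndarray) -> float:
    """Coefficient of variation of voiced-frame pitch (higher = livelier prosody)."""
    voiced = pitch_hz[pitch_hz > 0]  # drop unvoiced frames (pitch = 0)
    return float(voiced.std() / voiced.mean())

# Illustrative contours: natural speech swings widely during emotional passages,
# while flat, over-smoothed pitch is one hallmark of some synthetic voices.
rng = np.random.default_rng(0)
t = np.linspace(0, 6 * np.pi, 200)
natural = 180 + 40 * np.sin(t) + rng.normal(0, 10, 200)
synthetic = 180 + 5 * np.sin(t) + rng.normal(0, 2, 200)

print(prosody_variation(natural) > prosody_variation(synthetic))  # → True
```

A single number like this is only one weak signal; real determinations (as in this case) combine it with source tracing and official denials.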

The clip was determined to be an AI voice-clone fabrication constituting election interference, and the case was referred for investigation. Because the audio was released just weeks before the election, there was extremely limited time for verification and clarification during the electoral period.

  • Source verification: Any "candidate statement" without corresponding news coverage, journalists present at livestream, or official release is highly likely to be fabricated
  • Consistency check: Is this statement consistent with the candidate's known positions? Drastic position reversals without mainstream media coverage should be highly suspected
Taiwan FactCheck Center, Central Election Commission, 2024

Disaster & Emergency Cases

FAKE · False Context
COVID-19 Vaccine "Death Cases" Video (2021)

Background: During the mass COVID-19 vaccination period, multiple videos circulated globally on social media claiming to show "immediate death after vaccination" or "severe adverse reactions" cases, generating massive vaccine hesitancy.

One widely circulated video claimed to show a woman collapsing and dying immediately after receiving a vaccine injection. The footage appeared to be shot in a medical clinic, with a timestamp showing 2021.

① Reverse image search: key frames were screenshotted and searched, and the same footage was found as early as 2017 in a medical education video about "vasovagal syncope" (brief fainting after injection due to pain or anxiety; not life-threatening); ② Medical context analysis: the staff's response after the woman fainted matched standard vasovagal syncope procedure, not a death event; ③ Geolocation: the clinic equipment and medicine bottles in the video were traced to a different country, inconsistent with the claimed location.
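The reverse-search step works because near-duplicate frames produce near-identical fingerprints. A toy sketch of the underlying idea using an "average hash" (real reverse-image services index billions of such perceptual hashes; the 8×8 arrays here stand in for downscaled video frames):

```python
import numpy as np

def average_hash(gray: np.ndarray) -> int:
    """64-bit perceptual hash: each bit = pixel brighter than the frame's mean."""
    bits = (gray > gray.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(a: int, b: int) -> int:
    """Number of differing bits; small distance = likely the same image."""
    return bin(a ^ b).count("1")

rng = np.random.default_rng(1)
frame_2017 = rng.integers(0, 256, (8, 8)).astype(float)   # original clip's key frame
recirculated = frame_2017 + rng.normal(0, 4, (8, 8))      # re-encoded 2021 copy
unrelated = rng.integers(0, 256, (8, 8)).astype(float)

print(hamming(average_hash(frame_2017), average_hash(recirculated)))  # small
print(hamming(average_hash(frame_2017), average_hash(unrelated)))     # large
```

Re-encoding and light editing barely move the hash, which is why a 2017 clip resurfacing with a false 2021 caption is so easy to catch once you actually run the search.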

This case is a textbook example of the "false context" technique: a genuine medical education video (vasovagal syncope) paired with a false "vaccine death" caption, creating a completely different meaning. Reverse image search is the fastest tool for exposing it.

AFP Fact Check, Snopes, WHO Mythbusters, 2021

Corporate Fraud Cases

FAKE · AI Image Fraud
AI-Generated Fake Lawyer and Fake Doctor Professional Fraud (2023–2024)

Background: Between 2023 and 2024, numerous fraud cases appeared globally using AI-generated "professional" photos: fake law firms (with GAN-generated "lawyer" photos) defrauding immigration and lawsuit clients; fake doctors (with AI-generated "physician" photos) promoting counterfeit medicine on social media; and fake investment advisors using AI-generated "successful businessman" photos to build trust before defrauding victims.

  • Real professionals have official records: Real lawyers are registered with bar associations; real doctors have license numbers that can be checked; real companies have registration numbers. Verify credentials first before deciding to engage.
  • GAN face detection: Run a reverse image search on suspicious "professional" photos. GAN-generated faces usually have no historical appearance records, and AI-detection tools such as Hive Moderation typically flag them with high suspicion scores (Which Face Is Real offers practice telling such faces apart).
  • Video call verification: Require "live video" (not just voice call) and ask them to make unnatural movements (quick head turn, cover face with hand then remove) — deepfakes and AI-generated personas break down most easily in these situations.
FTC, Europol, FBI IC3 Reports, 2023-2024

"Real But Falsely Accused of Being Fake" Cases

REAL · Confirmed Authentic
Bellingcat Russian Soldier Geolocation Case (2022)

Background: In March 2022, multiple photos and videos allegedly showing Russian soldiers in Ukrainian civilian areas spread widely. Russian officials immediately claimed these videos were "deepfakes created by Ukraine," attempting to deny the visual evidence.

Bellingcat investigators confirmed the video's authenticity by: ① extracting buildings from the video and finding exactly matching landmarks in Google Street View, down to specific wall graffiti; ② confirming that military identification numbers on vehicles in the video matched Russian 41st Combined Arms Army armored unit records; ③ using SunCalc to confirm that shadow directions in the video matched the claimed date and location; ④ running AI forensic analysis, which found no signs of deepfake synthesis, while ELA analysis showed normal JPEG compression patterns.
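Step ③'s shadow check can be reproduced with basic solar geometry. A simplified sketch (SunCalc uses higher-precision ephemeris data; the declination formula below is a common approximation, and the Kyiv location and March date are illustrative):

```python
import math

def solar_azimuth(lat_deg: float, day_of_year: int, hour_angle_deg: float) -> float:
    """Sun's compass bearing (0 = north, 180 = south), simplified model."""
    lat = math.radians(lat_deg)
    # Approximate solar declination for the given day of year
    decl = math.radians(-23.44 * math.cos(math.radians(360 / 365 * (day_of_year + 10))))
    h = math.radians(hour_angle_deg)  # 0 at solar noon, +15 deg per hour after
    sin_el = math.sin(lat) * math.sin(decl) + math.cos(lat) * math.cos(decl) * math.cos(h)
    el = math.asin(sin_el)
    cos_az = (math.sin(decl) - sin_el * math.sin(lat)) / (math.cos(el) * math.cos(lat))
    az = math.degrees(math.acos(max(-1.0, min(1.0, cos_az))))
    return az if hour_angle_deg <= 0 else 360 - az

def shadow_bearing(sun_azimuth_deg: float) -> float:
    """Shadows point directly away from the sun."""
    return (sun_azimuth_deg + 180) % 360

# At solar noon in Kyiv (lat ~50.45 N, mid-March) the sun sits due south,
# so shadows must point north; a video whose shadows disagree was not
# shot at the claimed time and place.
az_noon = solar_azimuth(50.45, day_of_year=75, hour_angle_deg=0)
print(round(az_noon), round(shadow_bearing(az_noon)))  # → 180 0
```

If the shadow bearing computed for the claimed date, time, and coordinates disagrees with the shadows visible in the footage, either the claim or the footage is wrong.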

This case demonstrates the "Liar's Dividend" in action: Russian officials knew the public feared deepfakes and tried to use "deepfake accusations" to muddy the waters. The correct response isn't to "believe everything" or "doubt everything," but to do systematic authentication work. Geolocation matching, equipment identification, and shadow analysis can both expose fake content and protect authentic visual evidence.

Bellingcat Investigation, March 2022

Self-Test: Three Practice Questions

🧪 Exercise 1: Speed Test

You receive a video in a LINE group showing a famous politician saying something "shocking." The video was just posted and there's no news coverage yet. What should be your first step?

Reference answer: ① Confirm the source ② Reverse-search a video key frame (screenshot, then reverse image search) ③ Check the politician's official channels ④ Wait for at least one credible media report before forwarding

🧪 Exercise 2: Photo Verification

You see a photo claiming to show "a protest that happened yesterday," but the clothing styles and slogan language look a bit strange to you. What's your verification process?

Reference answer: ① Google/Yandex reverse image search ② TinEye "Sort by Oldest" to find the earliest appearance ③ Look for era markers in the photo, such as clothing, signs, and car models ④ Use FotoForensics for EXIF and ELA analysis

🧪 Exercise 3: Audio Assessment

You receive a phone call where the speaker's voice sounds very much like your boss, urgently requesting you transfer funds to a "client account" and saying "don't tell anyone else." What's your response?

Reference answer: No matter how similar the voice, hang up and directly dial your boss's known personal mobile to confirm. "Urgent" + "Keep it secret" + "Transfer money" are three core fraud trigger words — when all three appear together, your alert level should be at maximum.

🎓 Course Complete! Continue with Hands-On Practice

Congratulations on completing the seven-chapter media forensics course. Remember: media literacy is a skill requiring continuous practice. Here are recommended next steps:

  • MIT Detect Fakes — Interactive deepfake identification training
  • Which Face Is Real — GAN synthetic face identification practice
  • Upload real images and videos to this platform, read the AI forensic reports, and study each detector's output
  • Read the Detector Details page to understand each AI model's academic basis and applicable scenarios