From Knowing AI to Verifying Content:
A Complete Learning Guide
This is not just a deepfake course. It is a broader AI literacy curriculum covering AI capabilities and limits, prompting and verification, common risks, visual and video forensics, and real-world cases, so you can build judgment for school, work, and public information environments.
Course framing
This front page introduces the curriculum, representative incidents, and key references before readers move into the full chapters.
Quick route
- Start with Chapter 04: Visual & Deepfake Forensics
- Then open Chapter 07: Real-World Cases
- Finish with Chapter 06: Tools & Workflow
Recent real incidents worth teaching from
These cards use event-linked images from the original reports rather than generic stock visuals. They work well as lecture openers, workshop prompts, or quick entry points before the full chapters.

"Meteor-like airstrike on Tel Aviv" AI video
Crisis footage spreads because it looks urgent and cinematic, which makes it a strong teaching case for judging the plausibility of synthetic video.
Source: Taiwan FactCheck Center, 2026-03-26
Doctored "Lai speech from 20 years ago" clip
Not every false video is fully generated. Some are edited, reframed, and circulated as if they were authentic historical material.
Source: Taiwan FactCheck Center, 2025-03
AI-generated "elderly duet" video
Emotionally warm content is often shared with less skepticism, which makes it useful for teaching public verification habits.
Source: Taiwan FactCheck Center, 2025-02
Leaked fake political audio
Verification is not only visual. Reposted audio clips and platform-altered voice recordings carry their own misinformation risks.
Source: RFA / Asia Fact Check Lab, 2024-06-11
AI video of insects making flowers bloom instantly
Highly aesthetic “nature magic” clips are useful for teaching how overly perfect motion and timing can still be a red flag.
Source: Taiwan FactCheck Center, 2025-05
When an AI tool explains the news incorrectly
This case broadens the lesson beyond video: generative systems can present wrong or incomplete information in an authoritative tone.
Source: Taiwan FactCheck Center, 2025-03