AILiteracy Lab
Evidence Base
This chapter is about slowing down, sourcing, and judging — not rejecting AI by default.
  • An MIT study published in Science shows false news spreads faster and deeper on social platforms, driven mainly by humans rather than bots.
  • Cambridge prebunk / Bad News work supports the idea that practicing manipulation patterns in advance builds resistance.
  • SIFT turns “pause, source-check, trace original context” into a repeatable literacy workflow.
Chapter 01 · Know AI

Know AI Before Trusting It:
What Generative AI Can and Cannot Do

AI can now draft text, summarize information, generate images, and imitate voices. But fluent does not mean correct, and realistic does not mean authentic. The first step of AI literacy is not deepfake detection alone, but understanding the capabilities and limits of generative AI — and when humans are most likely to mistake it for authority.

Why Disinformation Spreads So Fast

In 2018, an MIT Media Lab team published a Science study analyzing about 126,000 fact-checked rumor cascades spread by roughly 3 million Twitter users between 2006 and 2017. Their widely cited finding: false news spread farther, faster, and deeper than true news — falsehoods were about 70% more likely to be retweeted, and false stories reached 1,500 people roughly six times faster than true ones. MIT News / Science 2018

The authors also emphasized that this amplification was not mainly caused by bots; it was largely human users who accelerated sharing through novelty and emotion. That matters for AI literacy, because the problem is often not only that something false is created, but that ordinary people help scale it. MIT News / Science 2018

Understanding this mechanism is critical: if disinformation is an "emotional weapon," then the most effective first defense is deliberately "hitting the brakes" at the moment your emotions are triggered.

Cognitive Bias: Your Brain Is Wired to Be Fooled

The human brain has two decision-making systems, which psychologist Daniel Kahneman in "Thinking, Fast and Slow" calls "System 1" and "System 2":

  • System 1 (Fast): Automatic, intuitive, emotional. Handles 95%+ of daily decisions, fast but error-prone.
  • System 2 (Slow): Analytical, rational, deliberate. Slow, resource-intensive, but highly accurate.

Disinformation is designed to let System 1 take over before System 2 can engage. Here are the most commonly exploited cognitive biases:

📌 Key Concept: Five Major Cognitive Biases
  • Confirmation Bias: We tend to believe information that confirms our existing views — the weakness disinformation creators exploit most. If you lean left politically, you're more likely to share anti-right fake news without verifying; and vice versa.
  • Illusory Truth Effect: The more often you see a message, the more your brain treats it as true. This is why rumors love to circulate repeatedly in group chats.
  • Fluency Bias: Well-formatted content with high-quality video makes people unconsciously feel it's more trustworthy. Deepfake videos exploit this directly — "It looks so clear, it must be real, right?"
  • Social Proof: "So many people shared this, it must be true, right?" Share count doesn't equal truthfulness, but our brain's shortcut often calculates that way.
  • Scarcity Principle: "Share fast before it's deleted!" Creating urgency leaves no time for thinking — a common accelerant in disinformation.

Five Warning Trigger Patterns

Disinformation creators have a fixed toolkit of "trigger phrases" — when these appear, your System 1 automatically activates. Learning to recognize them can dramatically improve your defenses:

  • Secrecy: "Exclusive exposure," "Blocked by mainstream media" — makes you feel you have special knowledge.
  • Urgency: "Urgent! Share fast," "Will be deleted within 12 hours" — removes time for verification.
  • Social Pressure: "Only people who truly care will share this," "Not sharing means you have no conscience" — uses moral coercion instead of logic.
  • Conspiracy: "They don't want you to know," "The government is covering it up" — creates an "us vs. them" framework.
  • False Authority: "Studies show," "Experts reveal" (no source given) — borrows a halo of trust without accountability.
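Trigger-phrase recognition can even be roughly mechanized. The sketch below flags the trigger types from the table above with regular expressions; the keyword lists are hypothetical simplifications — real disinformation varies its wording, so this is a pattern illustration, not a working detector.

```python
import re

# Hypothetical keyword patterns, one per trigger type from the table above.
TRIGGER_PATTERNS = {
    "secrecy": r"exclusive|mainstream media (won't|will not) (cover|report)",
    "urgency": r"urgent|share (fast|now)|before it('s| is) deleted",
    "social_pressure": r"only people who (truly )?care|no conscience",
    "conspiracy": r"they don't want you to know|cover(-| )?up",
    "false_authority": r"(studies|research) show|experts (say|reveal)",
}

def flag_triggers(text: str) -> list[str]:
    """Return the trigger types whose patterns appear in the text."""
    lowered = text.lower()
    return [name for name, pattern in TRIGGER_PATTERNS.items()
            if re.search(pattern, lowered)]

print(flag_triggers("URGENT! Share now before it's deleted. They don't want you to know!"))
# → ['urgency', 'conspiracy']
```

A match is not proof of disinformation — it is exactly the "warning signal" this chapter describes: a cue to stop and verify before sharing.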

The SIFT Framework: Four Steps for Media Literacy

SIFT is a four-step framework articulated by information literacy educator Mike Caulfield. Its core is not memorizing correct answers, but stopping first, checking the source first, and then returning to content and original context. This is especially useful for generative AI outputs, which are often well-written before they are well-verified. SIFT

🎯 The SIFT Framework
  • S · Stop: Don't make decisions when emotionally charged. First ask: "Am I feeling angry/surprised/afraid right now? Why?"
  • I · Investigate the Source: Check the source first, content second. Before deep-reading an article, spend 30 seconds searching "Who is this source? Are they credible?"
  • F · Find Better Coverage: Search for credible media reporting the same story. Major events are typically covered by multiple credible outlets; if there's only one source, be especially cautious.
  • T · Trace Claims: Trace the original source of images, video, and data. Reverse image search, TinEye, and InVID are your tools. Find the origin to evaluate authenticity.
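The "Trace" step leans on reverse image search, and those tools work by fingerprinting images so that a recompressed copy still matches the original. A toy "average hash" sketch of that idea, assuming tiny hand-written grayscale grids instead of real image files:

```python
def average_hash(pixels: list[list[int]]) -> int:
    """Toy perceptual 'average hash': one bit per pixel, set when the pixel
    is brighter than the image's mean. Real tools first resize and grayscale."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits; a small distance suggests the same image."""
    return bin(a ^ b).count("1")

# Two 2x2 'images': the second simulates a slightly re-encoded copy of the first.
original = [[200, 40], [50, 220]]
recompressed = [[198, 42], [51, 219]]
unrelated = [[10, 12], [11, 9]]

d_same = hamming_distance(average_hash(original), average_hash(recompressed))
d_diff = hamming_distance(average_hash(original), average_hash(unrelated))
print(d_same, d_diff)  # → 0 4
```

This is why tracing works even when a viral copy has been cropped slightly or recompressed: the fingerprint of the original survives small edits.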

Inoculation Theory: Why Practicing in Advance Works

Cambridge University psychology professor Sander van der Linden spent years studying one question: how do you make people "immune" to disinformation? His answer is "Inoculation Theory."

Like a medical vaccine, inoculation theory suggests that if you expose people to weakened versions of manipulation techniques in advance — while explicitly labeling them as manipulation — they become less likely to trust similar tactics later. Cambridge work around Bad News and related prebunking studies repeatedly supports this direction: not by turning people into experts overnight, but by making them more willing to pause, spot patterns, and resist first-impression pull. Cambridge

✅ Why this exercise works
Every manipulation technique explanation you read here, every real case study, is vaccinating your brain. The more disinformation patterns you're exposed to, the faster you'll recognize them when you encounter them in real life.

30-Day Pause Habit Building Plan

Research shows media literacy isn't a skill you master by "reading one article" — it's a habit that requires deliberate practice to internalize. Here's a four-week progressive plan:

  • Week 1: Every time you see emotionally charged content on social media, pause for 30 seconds before deciding whether to share. Count how many times you "paused" this week.
  • Week 2: For every piece of content that gives you a strong reaction, actively find one "opposing perspective" or a different reporting angle.
  • Week 3: Review content you've shared in the past month; pick five items and do post-hoc fact-checking. Document the results.
  • Week 4: Teach a family member or friend one step of the SIFT framework. Research shows teaching others is the most effective way to consolidate your own skills.
📊 Why This Is AI Literacy, Not Only Media Literacy
UNESCO and NIST both place generative AI use inside a framework of human oversight, source verification, risk governance, and transparency. That means AI literacy is not only about spotting fake images. It is also about knowing when AI can help, when you must verify the source, and when human judgment must remain primary. UNESCO · NIST

Slide Deck: Chapter Key Points

01 / 06
Why Does Disinformation Spread So Fast?

MIT Study (2018) — Core Findings:

  • ⚡ 6× Faster: False news reached 1,500 people about six times faster than true news
  • 📈 70% More Retweets: Falsehoods were about 70% more likely to be retweeted than the truth
  • 🤝 Human-Driven: The main driver is real humans, not bots
  • 😱 Emotion Trigger: False news is "more novel and surprising"

02 / 06
System 1 vs System 2

⚡ System 1 (Fast)

Automatic, intuitive, emotional. Disinformation's target — makes you share without thinking.

🧠 System 2 (Slow)

Analytical, rational, deliberate. Detecting disinformation requires activating this system.

Your task: Deliberately activate System 2 when emotions are triggered. "Pause 30 seconds" is that switch.

03 / 06
The SIFT Framework
  • S · Stop: Pause, don't share immediately
  • I · Investigate the Source: Check source first, then content
  • F · Find Better Coverage: Find credible media coverage
  • T · Trace Claims: Trace the original source of images/video
04 / 06
Warning Trigger Phrases

  • 🚨 Secrecy: "Exclusive exposure," "Blocked by mainstream media"
  • ⏰ Urgency: "Urgent, share now," "Deleted within 12 hours"
  • 🤝 Social Pressure: "Only people who truly care will share this"
  • 🎭 Conspiracy: "They don't want you to know"

05 / 06
Inoculation Theory

Research by Cambridge University Professor Sander van der Linden:

💉 Like a Vaccine

Pre-exposure to weakened manipulation + explanation = psychological immunity

Prebunk

Pre-exposure to manipulation tactics helps people resist first-impression pull later

06 / 06
Chapter Action Checklist
  • Put a "SIFT" reminder note on your phone homescreen
  • For the next week, count each time you "pause before sharing"
  • Identify five emotional trigger phrases you commonly encounter
  • Explain "confirmation bias" to one family member
  • Continue to Chapter 2: Verify & Prompt

Case Studies

FAKE
Nancy Pelosi "Drunk Speech" Video (2019)

In May 2019, a video of Nancy Pelosi (then Speaker of the House) appearing to slur her words and speak slowly went viral on Facebook, accumulating over 2 million views within 48 hours. Comment sections were flooded with "she's drunk" and "she has mental issues," with multiple right-wing media figures and some Trump supporters widely sharing it.

The technique used to create this video was extremely low-tech: no AI was used whatsoever — just free video editing software that slowed playback to 75%. In the original video, Pelosi's speech was perfectly fluent; at reduced speed, anyone would sound slurred.

Washington Post tech reporters first noticed the abnormal speech pace, used audio analysis software to find frequencies reduced by ~25% (consistent with 25% slower playback), and directly compared with the original C-SPAN broadcast recording to confirm the video was slowed. The entire verification process took less than 20 minutes.
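The reporters' check is simple arithmetic: playing audio at 75% of normal speed stretches its duration by 1/0.75 ≈ 1.33× and, without pitch correction, lowers every frequency by the same 25%. A minimal sketch with hypothetical clip values:

```python
SPEED = 0.75  # playback speed relative to the original (75%)

original_duration_s = 60.0   # hypothetical 60-second clip
original_pitch_hz = 200.0    # hypothetical speaking pitch

slowed_duration_s = original_duration_s / SPEED  # clip runs longer
slowed_pitch_hz = original_pitch_hz * SPEED      # voice sounds lower and slurred

print(slowed_duration_s, slowed_pitch_hz)  # → 80.0 150.0
```

A uniform ~25% drop across all frequencies is the fingerprint of slowed playback rather than an actual impaired speaker, which is exactly what the audio analysis found.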

This case perfectly demonstrates the "emotion first, verification later" mechanism: the anger and contempt triggered by the video led massive numbers of users to decide to share within seconds, skipping any verification entirely. Facebook later decided not to remove the video (since it wasn't "synthetically fabricated") but added a warning label — by which time the video had already spread millions of times.

Even the lowest-tech disinformation without any AI can cause major political impact. The core of "stopping" is: when you see a video that makes you feel strong contempt or anger toward a political figure, that emotion itself is a warning signal — pause first, then search for the "original version."

Washington Post, AFP Fact Check, May 2019
FAKE · AI Voice
Baltimore High School Principal AI Voice Deepfake (2024)

In January 2024, an audio recording allegedly of Pikesville High School principal Eric Eiswert making racist remarks circulated widely on social media, quickly generating community outrage. The "principal" in the recording was heard using discriminatory language and complaining about Black students and teachers at the school.

Principal Eiswert was immediately placed on administrative leave pending investigation. But as the investigation deepened, police gradually focused on a surprising suspect: the school's athletic director Dazhon Darien, who was at the time in dispute with the principal over performance issues.

AI voice forensic analysis showed the recording's prosody and emotional variation patterns had statistically significant differences from known real recordings of Eiswert. The voice's "breathing rhythm" in emotional passages showed typical AI synthesis characteristics — excessively smooth, lacking natural micro-variations. Additionally, background noise showed slight "splicing" between certain segments.

Darien was arrested in April 2024 and charged with theft, stalking, retaliating against a witness, and disrupting school operations — one of the first criminal prosecutions in the US involving AI-generated deepfake audio. Principal Eiswert was later reinstated, and the case became a landmark example of AI misuse in the US school system.

The most important lesson from this case: An audio recording that fills you with moral outrage (especially involving racism, sex scandals, corruption, and other highly sensitive topics) is precisely when you most need to pause and verify. The higher the emotional intensity, the greater the need for verification. Before official investigation results, no one should spread such unverified recordings on social media.

CNN, Baltimore Sun, NBC News, January–April 2024

Practical Action Checklist

  • When encountering emotionally charged content, pause 30 seconds before deciding to share
  • Ask yourself: "What emotion does this content trigger in me? Why would the creator want me to feel this?"
  • Watch for trigger phrases: "exclusive," "urgent," "share now," "mainstream media won't cover this"
  • Find the specific claim in the video content, then independently verify that claim
  • Search "[event name] + fact check" to find if fact-checking organizations have published reports
  • If unsure, you can choose "not to share" or label "unverified, please be cautious" before sharing
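The "[event name] + fact check" search in the checklist above can be sketched as a small query builder; the google.com endpoint is just one illustrative search engine, not a specific recommendation.

```python
from urllib.parse import quote_plus

def fact_check_query(event: str) -> str:
    """Build a web-search URL for '<event> fact check'.
    The search endpoint here is an illustrative example."""
    return "https://www.google.com/search?q=" + quote_plus(f"{event} fact check")

print(fact_check_query("Pelosi drunk speech video"))
# → https://www.google.com/search?q=Pelosi+drunk+speech+video+fact+check
```

Appending "fact check" to an event name surfaces reports from fact-checking organizations first, which is usually faster than evaluating the original viral post on its own.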