AILiteracy Lab
Evidence Base
The real question is not whether AI may be used, but how you disclose sources, preserve judgment, and avoid outsourcing thinking.
  • UNESCO recommends human agency, human oversight, and age-appropriate use of AI in education.
  • Common Sense Media explicitly warns that AI output may be wrong or unsourced, and may violate plagiarism rules or school policy.
  • NIST places transparency, explainability, validity, and reliability at the center of trustworthy AI use.
Chapter 05 · Responsible Use

Using AI Well Matters More Than Just Asking:
Principles for School, Work, and Sharing

The harder question is rarely whether AI can be used, but how far it should be used. Can you submit Gemini output directly? Should AI-assisted summaries be attributed? When content looks suspicious, when do you stop sharing and when do you correct it? This chapter focuses on responsible adoption and action thresholds.

Three-Color Decision Framework: "Should I Share This?"

🟢 Low Risk — Safe to Share (With Source)

Conditions: Multiple credible independent media have reported it + reverse image/video search found no earlier version + AI forensic score is low (if used).

Recommendation: Safe to share, but recommend including the original source link (not just a screenshot) so recipients can also verify.

🟡 Medium Risk — Share Cautiously (With Disclaimer)

Conditions: The source is questionable but the claim is not yet confirmed false + only one source reports it + no fact-checking organization has published a verdict yet.

Recommendation: If you feel the information is worth sharing, prefix it with "This message is unverified — please judge carefully." Or wait for a fact-checking organization's report before sharing.

🔴 High Risk — Don't Share, Consider Reporting

Conditions: Clear disinformation signs + involves deepfakes that could harm others' reputation + matches known fraud patterns.

Recommendation: Don't share. Consider reporting to the platform, or notifying relevant authorities (see list below). If you know someone may have already believed this content, consider proactively providing correction.
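
The three tiers above can be sketched as a simple decision function. This is a minimal illustration only; the signal names (`independent_confirmations`, `earlier_version_found`, etc.) are hypothetical stand-ins for the checks described in the framework, not part of any real tool.

```python
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    """Hypothetical pre-share checks; field names are illustrative."""
    independent_confirmations: int  # credible outlets reporting the same claim
    earlier_version_found: bool     # reverse image/video search found an older original
    clear_disinfo_signs: bool       # e.g. confirmed deepfake or doctored media
    known_fraud_pattern: bool       # matches a documented scam template

def share_decision(s: VerificationSignals) -> str:
    """Map the checks onto the three-color framework, most severe tier first."""
    if s.clear_disinfo_signs or s.known_fraud_pattern:
        return "red"     # don't share; consider reporting
    if s.independent_confirmations >= 2 and not s.earlier_version_found:
        return "green"   # share with the original source link
    return "yellow"      # wait for fact-checkers, or share with a disclaimer
```

Note that the red check runs first: a well-sourced-looking story that matches a known fraud pattern still lands in the red tier, mirroring the framework's ordering above.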

How to Correct Misinformation Effectively: The "Backfire Effect" Trap

Early research suggested that poorly executed corrections can actually strengthen the memory trace of the disinformation. This became known as the "backfire effect": when we directly rebut something someone believes, they may become even more entrenched in the original belief.

Replication studies after 2019, however, found the backfire effect to be weaker and rarer than early research described. Effective correction still benefits from the following principles:

✅ Four Principles of Effective Correction
  • Lead with the correct information, then explain the error: "The correct version is… (correct version). The circulating video used speed manipulation, slowing playback by 25%." Don't start by repeating the false claim itself.
  • Explain "why this lie worked": Describe the manipulation technique itself (e.g., "This technique is called a cheapfake — it just needs a free video editor"), helping the other person understand the manipulation mechanism.
  • Avoid personal attacks: Criticize "this message" not "the person who spread it." Attacking individuals triggers defensiveness, making them harder to correct.
  • Provide an "alternative narrative": The human brain dislikes a vacuum. When you remove one explanation, provide another simultaneously (e.g., "This is actually an old photo from 2018 — it shows the Hualien earthquake aftermath from that year").

Complete Guide to Taiwan Reporting Channels

Situation → Where to report (contact):
  • Election disinformation → Central Election Commission (1800-024-099, toll-free)
  • Deepfake fraud, phone fraud → 165 Anti-Fraud Hotline (dial 165)
  • General disinformation verification → Taiwan FactCheck Center (tfcctw.org)
  • Non-consensual intimate images → iWIN Internet Content Watchdog (iwin.org.tw)
  • Facebook/Instagram disinformation → Meta reporting system (three-dot menu → Report)
  • YouTube deepfake ads → Google ads reporting (ads.google.com/report)
  • LINE group disinformation → LINE official reporting (line.me/support)

Taiwan's Legal Framework for Deepfakes

Taiwan has established the following legal regulations targeting the creation and distribution of deepfakes:

⚖️ Key Legal Bases
  • Criminal Code Article 310 (Defamation): Spreading deepfakes that damage another person's reputation: up to 2 years imprisonment or detention. Public insult (Article 309): detention or a fine of up to NT$9,000.
  • Presidential and Vice-Presidential Election and Recall Act Article 104: Spreading deepfake false information about candidates during elections: up to 5 years imprisonment, sentence may be increased by 1/2.
  • 2023 Criminal Code Amendment (offenses against sexual privacy and deepfake sexual images): Explicitly prohibits making or distributing deepfake sexual content using someone's likeness without consent: up to 3 years imprisonment.
  • Fraud Aggravation Provisions: Using deepfake technology to commit fraud can increase sentencing by 1/2 under Criminal Code Article 339-4 (maximum 10 years).

Deepfake Defense Protocols for Organizations

The Arup Hong Kong case (2024) illustrates why organizations need to proactively establish deepfake defense mechanisms. Here are core measures organizations should consider implementing:

  • Out-of-Band (OOB) Verification Process: Establish a rule that all financial instructions above a set amount (e.g., NT$50,000) must be confirmed through a second completely independent channel (calling a known phone number), regardless of how credible the instruction channel appears.
  • Employee Deepfake Identification Training: Annual deepfake identification training including simulated deepfake video call drills (similar to phishing simulation exercises).
  • Pre-agreed Challenge Questions: Senior management pre-agree on a set of "challenge questions" that can be asked at the start of video conferences to verify identity.
  • Deepfake Incident Response Plan (DIRP): Establish rapid response procedures for deepfake attacks, including PR statement templates, legal notification procedures, and technical forensics activation protocols.
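
The OOB rule in the first bullet can be expressed as a short approval gate. This is a sketch under stated assumptions: the NT$50,000 threshold comes from the example above, and the channel names are hypothetical placeholders, not a real policy engine.

```python
# Sketch of an out-of-band (OOB) approval gate for financial instructions.
# Threshold per the example in the text; channel names are hypothetical.
OOB_THRESHOLD_TWD = 50_000

def approve_transfer(amount_twd: int, request_channel: str,
                     confirmed_channels: set[str]) -> bool:
    """Approve only if the instruction was confirmed over at least one
    channel independent of the one it arrived on (e.g. a call placed to
    a known phone number), regardless of how credible the request looks."""
    if amount_twd < OOB_THRESHOLD_TWD:
        return True  # below threshold: no OOB confirmation required
    independent = confirmed_channels - {request_channel}
    return len(independent) >= 1
```

The key design choice is that confirmation over the *same* channel the request arrived on counts for nothing: `approve_transfer(220_000, "video_call", {"video_call"})` is rejected, because a deepfaked video call can "confirm" itself.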

Slide Deck

01 / 04
Three-Color Decision Framework

🟢 Low Risk

Confirmed by multiple credible outlets + no earlier version found + low forensic score → Safe to share, include source

🟡 Medium Risk

Questionable source, not yet verified → Wait for verification, or share with disclaimer

🔴 High Risk

Clear disinformation signs + potential harm → Don't share, consider reporting

⚠️ Urgent Report

Public safety, election interference, fraud → Report to relevant authorities immediately

02 / 04
Four Principles of Effective Correction
  1. Lead with the correct version: don't repeat the false claim first
  2. Explain the manipulation: help them understand how they were fooled
  3. Don't attack personally: criticize the message, not the person
  4. Provide an alternative narrative: replace the old explanation with a new one
03 / 04
Taiwan Reporting Quick Reference

  • 165: deepfake fraud, investment scams
  • 1800-024-099: election-related disinformation
  • tfcctw.org: general disinformation verification
  • iwin.org.tw: non-consensual deepfake intimate images

04 / 04
The Importance of Preserving Evidence

If you are a victim of deepfake fraud or disinformation, preserve the following before reporting:

  • Screenshot or download the suspicious video/image (note the time you viewed it)
  • Save all communication records with the other party (chat logs, emails)
  • Save ad screenshots and ad links (URLs)
  • Document transfer records and timestamps
  • If possible, preserve original account information (usernames, phone numbers)

Case Studies

FAKE · AI Voice
UK Energy Company CEO Voice Clone Fraud (2019)

In 2019, the CEO of a UK energy company (name undisclosed for legal reasons) received a call in which the speaker used a German accent sounding very similar to the German parent company chairman. The "chairman" urgently stated that the company had an imminent emergency payment to a Hungarian supplier, requesting the CEO immediately transfer €220,000 (approximately NT$7.2M). The CEO believed the voice was genuinely the chairman and completed the transfer as instructed.

The caller then requested a second payment. The CEO became suspicious and tried to call back — only to discover the real chairman had never made the call. Security firm Symantec later confirmed this as one of the first publicly documented commercial AI voice cloning fraud cases.

  • When you receive an "urgent transfer instruction," no matter how familiar the voice sounds, confirm it through a second independent channel (call the person's known mobile number directly)
  • Establish "Out-of-Band (OOB) Verification" corporate policy: all large transfers require confirmation through two independent channels
  • Be highly alert to the word "urgent" — creating time pressure is a core fraud technique
Wall Street Journal, Symantec, 2019
FAKE · Election Interference
Biden AI Voice Robocall: "Don't Vote" (2024)

In January 2024, on the eve of the New Hampshire primary, thousands of Democratic voters received a robocall using a highly convincing AI clone of President Biden's voice saying: "The election is in November, not today. Voting today will enable Trump to win easily. Save your vote for the November general election." The message was designed to suppress Democratic voter turnout in the primary.

The FCC determined the calls were organized by political consultant Steve Kramer, who used a commercial AI voice-cloning service to generate Biden's voice. The FCC fined Kramer $6 million, and the case spurred rapid legislation across US states regulating AI voice use in elections.

Be especially alert before elections to these message types: ① "Don't vote" ② "Your polling location or time has changed" ③ "Election canceled or postponed" — these are classic election interference techniques, and AI voice has reduced their cost to near zero. Upon receiving any such message, immediately contact the election commission (Taiwan: 1800-024-099) to verify; don't believe any such claims in phone calls, messages, or social media posts.

FCC, New York Times, CNN, January–March 2024