Using AI Well Matters More Than Just Asking:
Principles for School, Work, and Sharing
The harder question is rarely whether AI can be used, but how far it should be used. Can you submit Gemini output directly? Should AI-assisted summaries be attributed? When content looks suspicious, when do you stop sharing and when do you correct it? This chapter focuses on responsible adoption and action thresholds.
Three-Color Decision Framework: "Should I Share This?"
Green: share
Conditions: Multiple credible, independent media have reported it + a reverse image/video search found no earlier version + the AI forensic score is low (if used).
Recommendation: Safe to share, but include the original source link (not just a screenshot) so recipients can also verify.
Yellow: hold
Conditions: The source is questionable but not yet confirmed false + only one source + fact-checking organizations haven't responded yet.
Recommendation: If you feel the information is worth sharing, prefix it with "This message is unverified — please judge carefully." Or wait for a fact-checking organization's report before sharing.
Red: stop
Conditions: Clear disinformation signs + involves deepfakes that could harm others' reputation + matches known fraud patterns.
Recommendation: Don't share. Consider reporting to the platform, or notifying the relevant authorities (see list below). If you know someone may have already believed this content, consider proactively providing a correction.
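The three-color triage above can be sketched as a small checklist function. This is a minimal illustration, not a real verification tool: the field names are assumptions, and the yes/no answers would come from a human working through the checklist.

```python
from dataclasses import dataclass

@dataclass
class ShareSignals:
    """Checklist answers for one piece of content (illustrative field names)."""
    multiple_credible_sources: bool  # reported by several independent outlets
    earlier_version_found: bool      # reverse image/video search hit an older original
    clear_disinfo_signs: bool        # obvious manipulation or disinformation markers
    known_fraud_pattern: bool        # matches a documented scam/deepfake pattern
    fact_check_pending: bool         # fact-checkers have not yet ruled

def traffic_light(s: ShareSignals) -> str:
    """Map checklist answers to the three-color recommendation."""
    # Red conditions override everything else: do not share, consider reporting.
    if s.clear_disinfo_signs or s.known_fraud_pattern:
        return "red"
    # Green requires corroboration and no sign the content is recycled.
    if s.multiple_credible_sources and not s.earlier_version_found:
        return "green"
    # Everything else is yellow: label as unverified or wait for a fact-check.
    return "yellow"
```

The red checks come first on purpose: even a widely shared item that matches a fraud pattern should not be forwarded.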
How to Correct Misinformation Effectively: The "Backfire Effect" Trap
Research shows that incorrect correction methods can actually strengthen disinformation's memory traces. This is called the "Backfire Effect" — when we directly rebut something someone believes, they may become even more entrenched in their original belief.
Updated research after 2019 suggests the "Backfire Effect" is weaker and rarer than early studies described, but effective correction still requires following these principles:
- Lead with the correct information, then explain the error: "The correct version is… (correct version). The circulating video was slowed down by 25% through speed manipulation." Don't start by repeating the false claim itself.
- Explain "why this lie worked": Describe the manipulation technique itself (e.g., "This technique is called a cheapfake — it just needs a free video editor"), helping the other person understand the manipulation mechanism.
- Avoid personal attacks: Criticize "this message" not "the person who spread it." Attacking individuals triggers defensiveness, making them harder to correct.
- Provide an "alternative narrative": The human brain dislikes a vacuum. When you remove one explanation, provide another simultaneously (e.g., "This is actually an old photo from 2018 — it shows the Hualien earthquake aftermath from that year").
Complete Guide to Taiwan Reporting Channels
| Situation | Report To | Contact |
|---|---|---|
| Election disinformation | Central Election Commission | 1800-024-099 (toll-free) |
| Deepfake fraud, phone fraud | 165 Anti-Fraud Hotline | Dial 165 |
| General disinformation verification | Taiwan FactCheck Center | tfcctw.org |
| Non-consensual intimate images | iWIN Internet Content Watchdog | iwin.org.tw |
| Facebook/Instagram disinformation | Meta reporting system | Three-dot menu → Report |
| YouTube deepfake ads | Google Ads reporting | ads.google.com/report |
| LINE group disinformation | LINE official reporting | line.me/support |
Taiwan's Legal Framework for Deepfakes
Taiwan has established the following legal regulations targeting the creation and distribution of deepfakes:
- Criminal Code Article 310 (Defamation): Spreading deepfakes damaging others' reputation: up to 2 years imprisonment or detention. Public insult: up to detention or NT$9,000 fine.
- Presidential and Vice-Presidential Election and Recall Act Article 104: Spreading deepfake false information about candidates during elections: up to 5 years imprisonment, sentence may be increased by 1/2.
- 2023 Gender Equity Education Act Amendment: Explicitly prohibits making or distributing deepfake sexual content using someone's likeness without consent: up to 3 years imprisonment.
- Fraud Aggravation Provisions: Using deepfake technology to commit fraud can increase sentencing by 1/2 under Criminal Code Article 339-4 (maximum 10 years).
Deepfake Defense Protocols for Organizations
The Arup Hong Kong case (2024) illustrates why organizations need to proactively establish deepfake defense mechanisms. Here are core measures organizations should consider implementing:
- Out-of-Band (OOB) Verification Process: Establish a rule that all financial instructions above a set amount (e.g., NT$50,000) must be confirmed through a second completely independent channel (calling a known phone number), regardless of how credible the instruction channel appears.
- Employee Deepfake Identification Training: Annual deepfake identification training including simulated deepfake video call drills (similar to phishing simulation exercises).
- Pre-agreed Challenge Questions: Senior management pre-agree on a set of "challenge questions" that can be asked at the start of video conferences to verify identity.
- Deepfake Incident Response Plan (DIRP): Establish rapid response procedures for deepfake attacks, including PR statement templates, legal notification procedures, and technical forensics activation protocols.
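The out-of-band rule from the list above can be expressed as a simple policy check. The threshold, channel names, and approval logic here are illustrative assumptions, not any organization's actual policy or API.

```python
from typing import Optional

# Assumed policy threshold from the text: NT$50,000.
OOB_THRESHOLD_TWD = 50_000

def requires_oob(amount_twd: int) -> bool:
    """Transfers at or above the threshold need a second, independent channel."""
    return amount_twd >= OOB_THRESHOLD_TWD

def approve_transfer(amount_twd: int,
                     instruction_channel: str,
                     confirmed_via: Optional[str]) -> bool:
    """Approve a large transfer only if it was confirmed on a *different*
    channel than the one the instruction arrived on, e.g. a call placed to a
    number from the internal directory (never a number supplied by the
    instruction itself)."""
    if not requires_oob(amount_twd):
        return True  # small transfers follow the normal workflow
    return confirmed_via is not None and confirmed_via != instruction_channel
```

The key design point is that the confirmation channel must be independent of the instruction channel: a deepfaked video call cannot "confirm" itself, no matter how credible it looks.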
Case Studies
In 2019, the CEO of a UK energy company (name undisclosed for legal reasons) received a call in which the speaker used a German accent sounding very similar to the German parent company chairman. The "chairman" urgently stated that the company had an imminent emergency payment to a Hungarian supplier, requesting the CEO immediately transfer €220,000 (approximately NT$7.2M). The CEO believed the voice was genuinely the chairman and completed the transfer as instructed.
The caller then requested a second payment. The CEO became suspicious and tried to call back — only to discover the real chairman had never made the call. Security firm Symantec later confirmed this as one of the first publicly documented commercial AI voice cloning fraud cases.
- When you receive an "urgent transfer instruction," confirm it through a second, independent channel (call the person's known mobile number directly), no matter how familiar the voice sounds
- Establish "Out-of-Band (OOB) Verification" corporate policy: all large transfers require confirmation through two independent channels
- Be highly alert to the word "urgent" — creating time pressure is a core fraud technique
In January 2024, on the eve of the New Hampshire primary, thousands of Democratic voters received a robocall using a highly convincing AI clone of President Biden's voice saying: "The election is in November, not today. Voting today will enable Trump to win easily. Save your vote for the November general election." The message was designed to suppress Democratic voter turnout in the primary.
The FCC determined the calls were organized by political consultant Steve Kramer using commercial AI voice cloning services to generate Biden's voice. Kramer ultimately paid over $6 million in fines, and the case spurred rapid legislation across US states regulating AI voice use in elections.
Be especially alert before elections to these message types: ① "Don't vote" ② "Your polling location or time has changed" ③ "Election canceled or postponed" — these are classic election interference techniques, and AI voice has reduced their cost to near zero. Upon receiving any such message, immediately contact the election commission (Taiwan: 1800-024-099) to verify; don't believe any such claims in phone calls, messages, or social media posts.