DeepFake & Media Forensics: Academic Overview


DeepFake & Media Forensics

This academic-facing page is not just a short publication list. It connects ACVLab’s research line to real misinformation incidents, source-backed teaching materials, and public-interest verification workflows. The goal is to show how research questions emerge from actual cases rather than only from benchmark design.

How This Page Is Structured

Research Questions

What makes DeepFake detection break under unstable face sequences, reposting, re-encoding, or audio-only circulation?

Real Incidents

Every highlighted case below is tied to a concrete public event, not a generic stock example.

Reusable Teaching

The AI literacy course turns research concepts into evidence-backed modules for students, collaborators, and outreach.

Deployment Bridge

This research line bridges lab research, media verification, fact-checking, and public communication.

Real Incidents That Motivate the Research

TFC Tel Aviv AI-generated attack video case
Case 1

Conflict Footage That Looks Cinematic

A viral clip claimed that Tel Aviv was under a meteor-like airstrike. TFC identified it as highly likely AI-generated. This is a good example of synthetic conflict footage that spreads because it looks visually dramatic and urgent.

Research relevance: synthetic video realism, rapid reposting, visual plausibility under crisis conditions

Open TFC report

TFC fake edited Lai Ching-te speech video case
Case 2

Edited Speech, Reframed Narrative

The so-called “20 years ago Lai Ching-te speech” case is not only about generation. It is also about editing, reframing, and fabricated context. This is exactly why real-world media forensics cannot stop at a single detector score.

Research relevance: manipulated political video, context forgery, source tracing beyond pure face synthesis

Open TFC report

Taiwan News fake audio case
Case 3

Audio Misinformation Travels Differently

The fake audio of Taiwan’s president criticizing his predecessor shows why verification cannot stay video-only. Audio claims, reposted clips, and platform-level distortion create different failure modes from clean benchmark settings.

Research relevance: cross-modal misinformation, reposting distortion, practical verification workflow design

Open Taiwan News coverage

TFC AI elderly duet case
Case 4

Emotionally Warm, Socially Shareable, and False

The “elderly duet” case shows how AI-generated clips can look harmless or uplifting while still training audiences to lower their guard. These cases matter because social sharing often rewards emotion before verification.

Research relevance: lightweight synthetic signals, public verification behavior, misinformation spread incentives

Open TFC report

READr Deepfake event archive
Case Archive 5

Deepfake Events Need Longitudinal Analysis

READr’s deepfake event archive is useful because it moves beyond one-off panic. It shows the range of cases, recurring manipulation patterns, and why some false media matters more than others.

Research relevance: taxonomy building, public-risk analysis, case selection for teaching and evaluation

Open READr article

Guideline 6

Verification Practice for Newsrooms

FactLink’s AI verification guideline is worth showing alongside the cases because it turns the lab’s broader argument into newsroom-facing practice: suspicious imagery should be checked with a workflow, not intuition.

Research relevance: public deployment, verification protocols, newsroom literacy

Open FactLink guide

Representative Research Outputs

GRACEv2: robust DeepFake detection under unstable face sequences. This line addresses a very practical problem: face tracks in the wild are often incomplete, unstable, and badly re-encoded. [arXiv]
UMCL: cross-compression-rate DeepFake detection. This is the research backbone behind the repeated course message that reposted or recompressed media behaves differently from clean source files. [IJCV]
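The cross-compression point behind UMCL can be illustrated with a stdlib-only toy (this is an illustrative sketch, not ACVLab's actual pipeline): scalar quantization stands in for the lossy quantization inside JPEG or H.264, and re-quantizing already-quantized values, as happens when a clip is reposted and re-encoded, shifts the residual statistics that detectors implicitly learn from.

```python
# Toy sketch (assumption: scalar quantization as a stand-in for codec
# quantization; not the UMCL method itself). Re-encoding an already
# compressed signal at a different step size changes the coding-error
# statistics, which is one reason models tuned on clean source files
# degrade on reposted, recompressed media.
import random

def quantize(values, step):
    """Round each value to the nearest multiple of `step` (lossy)."""
    return [round(v / step) * step for v in values]

def mean_abs_residual(original, coded):
    """Average absolute coding error relative to the original signal."""
    return sum(abs(o - c) for o, c in zip(original, coded)) / len(original)

random.seed(0)
signal = [random.uniform(0, 255) for _ in range(10_000)]  # toy pixel values

once = quantize(signal, step=8)     # single compression (clean source file)
twice = quantize(once, step=12)     # repost: re-encoded at a coarser step

print(f"single-pass residual: {mean_abs_residual(signal, once):.2f}")
print(f"double-pass residual: {mean_abs_residual(signal, twice):.2f}")
```

The double-pass residual is consistently larger and differently distributed than the single-pass one, even though both files "look" like ordinary compressed media; detectors trained only on the single-pass regime see a shifted input distribution at test time.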

Reusable Teaching and Verification Stack

Source Notes

  1. TFC, “Circulated footage of ‘Tel Aviv under a meteor-shower-style airstrike’ is highly likely AI-generated” (網傳「特拉維夫被流星雨式空襲」影像,極可能為AI生成)
  2. TFC, “The ‘Lai Ching-te speech from 20 years ago’ is a fabricated, doctored video” (20年前賴清德演講是虛構變造的影片)
  3. RFA / Asia Fact Check Lab, leaked audio case
  4. TFC, “The nonagenarian couple’s duet is a fabricated AI-generated video” (九旬老夫婦二重奏是虛構的AI生成影片)
  5. READr deepfake event archive
  6. FactLink, AI for Trust newsroom guideline

All external images on this page are tied to the corresponding original report or article above. They are used as cited event references, not as detached stock visuals.