{"id":34,"date":"2018-10-30T14:08:36","date_gmt":"2018-10-30T06:08:36","guid":{"rendered":"http:\/\/jess1029.ddns.net\/wordpress\/?page_id=34"},"modified":"2026-03-25T00:06:00","modified_gmt":"2026-03-24T16:06:00","slug":"research","status":"publish","type":"page","link":"https:\/\/cchsu.info\/wordpress\/research\/","title":{"rendered":"Research"},"content":{"rendered":"<div id=\"pl-34\"  class=\"panel-layout\" ><div id=\"pg-34-0\"  class=\"panel-grid panel-has-style\" ><div class=\"panel-row-style panel-row-style-for-34-0\" ><div id=\"pgc-34-0-0\"  class=\"panel-grid-cell\" ><div id=\"panel-34-0-0-0\" class=\"so-panel widget widget_sow-editor panel-first-child panel-last-child\" data-index=\"0\" ><div\n\t\t\t\n\t\t\tclass=\"so-widget-sow-editor so-widget-sow-editor-base\"\n\t\t\t\n\t\t>\n<div class=\"siteorigin-widget-tinymce textwidget\">\n\t<p class=\"page-language-switcher\"><strong>Language:<\/strong> English | <a href=\"https:\/\/cchsu.info\/wordpress\/zh\/research\/\">\u7e41\u9ad4\u4e2d\u6587<\/a><\/p>\n<h1>Research Vision \u7814\u7a76\u4e3b\u8ef8<\/h1>\n<p>Advanced Computer Vision Lab \u2014 <strong>A<\/strong>ssured <strong>C<\/strong>omputer <strong>V<\/strong>ision: <strong>L<\/strong>ean, <strong>A<\/strong>utonomous, <strong>B<\/strong>road-Spectrum<\/p>\n<p>As generative AI blurs the boundary between authentic and fabricated media, autonomous systems demand vision that never fails silently, and Earth observation enters a data-rich new era, the bar for deployable visual intelligence keeps rising. ACVLab responds with four interlocking research pillars.<\/p>\n<p><b>Assured Visual Intelligence<\/b> ensures that every visual AI output can be trusted \u2014 whether detecting DeepFakes under heavy compression, defending against adversarial perturbations, or authenticating media through proactive watermarking \u2014 providing the accountability that forensic, medical, and regulatory settings require.<\/p>\n<p><b>Lean Visual Architectures<\/b> rethink computation at every level of abstraction: prefix-scan reformulations of exact attention (ELSA), bitstream-level forensics that skip pixel decoding entirely, adaptive quantization that preserves accuracy at ultra-low bit widths (QuantTune\/FracQuant), and joint transmission-restoration for bandwidth-constrained satellites \u2014 cutting latency, memory, and energy cost for sustainable, real-time deployment.<\/p>\n<p><b>Autonomous Visual Perception<\/b> extends vision from 2D images into 3D physical space: material-aware scene reconstruction with hyperspectral unmixing, BEV adversarial defense for self-driving (BFDM), physics-aligned shadow and reflection removal that feeds robust features to downstream robotic pipelines (PhaSR, ReflexSplit), and uncertainty-aware 3D annotation for autonomous driving datasets.<\/p>\n<p><b>Broad-Spectrum Scientific Sensing<\/b> pushes perception beyond the visible: universal hyperspectral restoration via vision-language prompts (PromptHSI), real-time CubeSat compressed sensing recognized with the Future Technology Award, hyperspectral pansharpening through sparse spectral representations (S<sup>3<\/sup>RNet), and cross-spectral forgery detection that reveals manipulation invisible to RGB analysis.<\/p>\n<p>These pillars do not operate in isolation. Hyperspectral forensics merges trust with spectral sensing. On-satellite real-time inference merges efficiency with broad-spectrum data. BEV adversarial defense merges trust with embodied perception. 
This cross-pillar synergy is not accidental; it reflects a single underlying conviction: deployment-grade visual intelligence must be simultaneously trustworthy, efficient, embodied, and perceptually complete.</p>
<h2>Research Pillars</h2>
<ul>
<li><strong>Autonomous Visual Perception</strong>: PhaSR, ReflexSplit, autonomous driving, tracking, embodied perception, 3D reconstruction</li>
<li><strong>Assured Visual Intelligence</strong>: GRACEv2, UMCL, DDD-Net, DeepFake detection, proactive authentication, trustworthy media analysis</li>
<li><strong>Broad-Spectrum Scientific Sensing</strong>: PromptHSI, S<sup>3</sup>RNet, CubeSat compressed sensing, remote sensing, satellite imaging</li>
<li><strong>Lean Visual Architectures</strong>: ELSA, QuantTune, FracQuant, bitstream-level inference, CubeSat on-board processing, edge deployment</li>
</ul>
<p>A short introduction to my research: [<a href="https://www.dropbox.com/scl/fi/wjivz198w7soqxu6t0hlz/Recent-Research_DFD2HSI_v2.pdf?rlkey=abaukudj7vnz338oayo0pv17j&amp;st=3lfjprys&amp;dl=0" target="_blank" rel="noopener noreferrer">PDF</a>] (last updated Oct. 2024)</p>
<img src="https://cchsu.info/research_assets/20260324/PhaSR_teaser.png" alt="PhaSR: Generalized Image Shadow Removal with Physically Aligned Priors" />
<h1>Robust Shadow Removal</h1>
<p><strong>PhaSR: Generalized Image Shadow Removal with Physically Aligned Priors</strong></p>
<p>Accepted to <strong>IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2026</strong>.</p>
<p>Shadow removal under complex, multi-source lighting is hindered by the mismatch between physical illumination priors and learned features.
PhaSR couples physically aligned normalization with geometry-semantic rectification to deliver robust shadow removal that generalizes beyond traditional single-light settings.</p>
<p><strong>Research Direction.</strong> Autonomous Visual Perception / Robust Scene Recovery</p>
<p>[<a href="https://arxiv.org/abs/2601.17470" target="_blank" rel="noopener noreferrer">arXiv</a>] [<a href="https://github.com/ming053l/PhaSR" target="_blank" rel="noopener noreferrer">GitHub</a>]</p>
<img src="https://cchsu.info/research_assets/20260324/ReflexSplit_vis.png" alt="ReflexSplit: Single Image Reflection Separation via Layer Fusion-Separation" />
<h1>Reflection Separation in the Wild</h1>
<p><strong>ReflexSplit: Single Image Reflection Separation via Layer Fusion-Separation</strong></p>
<p>Accepted to <strong>IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2026</strong>.</p>
<p>Reflections on glass introduce nonlinear layer mixing that often breaks existing separation networks.
ReflexSplit uses dual-stream fusion-separation blocks and curriculum training to achieve robust performance on both synthetic and real-world benchmarks.</p>
<p><strong>Research Direction.</strong> Autonomous Visual Perception / Robust Scene Recovery</p>
<p>[<a href="https://arxiv.org/abs/2601.17468" target="_blank" rel="noopener noreferrer">arXiv</a>] [<a href="https://github.com/wuw2135/ReflexSplit" target="_blank" rel="noopener noreferrer">GitHub</a>]</p>
<img src="https://cchsu.info/research_assets/20260324/ELSA_teaser.png" alt="ELSA: Exact Linear-Scan Attention for Fast and Memory-Light Vision Transformers" />
<h1>Efficient AI Inference</h1>
<p><strong>ELSA: Exact Linear-Scan Attention for Fast and Memory-Light Vision Transformers</strong></p>
<p>Accepted to <strong>CVPR 2026 Findings (CVPRF)</strong>.</p>
<p>ELSA reformulates exact softmax attention as a prefix scan over an associative monoid, achieving memory-light inference with provable FP32 stability and no retraining.
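<p>To make the prefix-scan idea concrete: partial softmax-attention results, carrying a running maximum, a running denominator, and a running weighted output, compose associatively, so exact attention can be evaluated under any bracketing, including a parallel scan. Below is a minimal NumPy sketch of that identity only; it is not the ELSA Triton/CUDA kernels, and all names are ours.</p>
<pre><code class="language-python"># Exact softmax attention for one query, written as a fold of an
# associative "combine" over (running max m, denominator d, output o).
import numpy as np

def combine(a, b):
    """Associative merge of two partial-softmax states (m, d, o)."""
    m1, d1, o1 = a
    m2, d2, o2 = b
    m = max(m1, m2)
    w1 = d1 * np.exp(m1 - m)   # rescale both partial denominators
    w2 = d2 * np.exp(m2 - m)   # to the shared maximum for stability
    d = w1 + w2
    o = (w1 * o1 + w2 * o2) / d
    return m, d, o

def scan_attention(q, K, V):
    """Left-fold of `combine`; associativity means the same fold can run
    as a parallel prefix scan, the property ELSA exploits."""
    state = (q @ K[0], 1.0, V[0].astype(np.float64))
    for k, v in zip(K[1:], V[1:]):
        state = combine(state, (q @ k, 1.0, v.astype(np.float64)))
    return state[2]

rng = np.random.default_rng(0)
q, K, V = rng.normal(size=4), rng.normal(size=(8, 4)), rng.normal(size=(8, 3))
s = K @ q
ref = (np.exp(s - s.max()) / np.exp(s - s.max()).sum()) @ V  # standard softmax attention
assert np.allclose(scan_attention(q, K, V), ref)
</code></pre>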
Implemented in Triton and CUDA C++, it improves deployability on both data-center and edge hardware.</p>
<p><strong>Research Direction.</strong> Lean Visual Architectures / Hardware-Agnostic Inference</p>
<p>arXiv preprint coming soon</p>
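<p>The failure mode QuantTune targets in the next project is easy to reproduce: a handful of activation outliers stretch the min/max dynamic range, so a uniform low-bit quantizer wastes most of its levels. The NumPy sketch below only illustrates the phenomenon; percentile clipping here is our stand-in, not QuantTune's outlier-driven fine-tuning.</p>
<pre><code class="language-python"># Outlier-inflated dynamic range vs. an outlier-suppressed range
# under 4-bit uniform quantization.
import numpy as np

def quantize(x, lo, hi, bits=8):
    """Uniform affine quantization of x onto 2**bits levels over [lo, hi]."""
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels
    q = np.clip(np.round((x - lo) / scale), 0, levels)
    return q * scale + lo

rng = np.random.default_rng(0)
acts = rng.normal(size=10_000)
acts[:5] *= 60.0                            # inject a few extreme outliers

full = quantize(acts, acts.min(), acts.max(), bits=4)
lo, hi = np.percentile(acts, [0.1, 99.9])   # outlier-suppressed range
clipped = quantize(np.clip(acts, lo, hi), lo, hi, bits=4)

print("MSE, full range   :", np.mean((acts - full) ** 2))
print("MSE, clipped range:", np.mean((acts - clipped) ** 2))
</code></pre>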
<img src="https://cchsu.info/research_assets/20260324/QuantTune_method.png" alt="QuantTune: Optimizing Model Quantization with Adaptive Outlier-Driven Fine Tuning" />
<h1>Quantization-Friendly Deployment</h1>
<p><strong>QuantTune: Optimizing Model Quantization with Adaptive Outlier-Driven Fine Tuning</strong></p>
<p>Published in <strong>IEEE International Conference on Multimedia Information Processing and Retrieval (MIPR) 2025</strong>.</p>
<p>QuantTune addresses outlier-driven dynamic-range amplification during Transformer quantization and substantially reduces accuracy loss in low-bit settings. The method adds no inference-time hardware complexity and transfers across ViT, BERT, and OPT models.</p>
<p><strong>Research Direction.</strong> Lean Visual Architectures / Quantization-Aware Deployment</p>
<p>[<a href="https://arxiv.org/abs/2403.06497" target="_blank" rel="noopener noreferrer">arXiv</a>] [<a href="https://ieeexplore.ieee.org/document/11225997/" target="_blank" rel="noopener noreferrer">IEEE Xplore</a>]</p>
<img src="https://cchsu.info/research_assets/20260324/PromptHSI_teaser.png" alt="PromptHSI: Universal Hyperspectral Image Restoration with Vision-Language Modulated Frequency Adaptation" />
<h1>Universal Hyperspectral Restoration</h1>
<p><strong>PromptHSI: Universal Hyperspectral Image Restoration with Vision-Language Modulated Frequency Adaptation</strong></p>
<p>Published in <strong>IEEE Transactions on Geoscience and Remote Sensing (TGRS)</strong>, Early Access, Feb. 2026.</p>
<p>PromptHSI is a universal, all-in-one framework for hyperspectral restoration that combines frequency-aware modulation with vision-language-guided prompt learning. A single model handles cloud occlusion, blur, noise, and spectral band loss across remote sensing scenarios.</p>
<p><strong>Research Direction.</strong> Broad-Spectrum Scientific Sensing / Hyperspectral Restoration</p>
<p>[<a href="https://ieeexplore.ieee.org/document/11371358" target="_blank" rel="noopener noreferrer">IEEE Xplore</a>] [<a href="https://arxiv.org/abs/2411.15922" target="_blank" rel="noopener noreferrer">arXiv</a>] [<a href="https://github.com/chingheng0808/PromptHSI" target="_blank" rel="noopener noreferrer">GitHub</a>]</p>
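<p>As a rough picture of "vision-language modulated frequency adaptation," one can FiLM-modulate the amplitude spectrum of a feature map with scale/shift vectors predicted from a prompt embedding. The PyTorch sketch below follows that general pattern only; the layer names, sizes, and exact modulation rule are our illustrative assumptions, not the paper's architecture.</p>
<pre><code class="language-python"># Prompt-conditioned frequency-domain feature modulation (illustrative).
import torch
import torch.nn as nn

class PromptFreqMod(nn.Module):
    def __init__(self, channels, prompt_dim):
        super().__init__()
        # Map the prompt embedding to per-channel scale/shift (FiLM-style).
        self.to_scale = nn.Linear(prompt_dim, channels)
        self.to_shift = nn.Linear(prompt_dim, channels)

    def forward(self, feat, prompt):
        # feat: (B, C, H, W) feature map; prompt: (B, D) text embedding.
        spec = torch.fft.rfft2(feat, norm="ortho")          # to frequency domain
        scale = self.to_scale(prompt)[:, :, None, None]     # (B, C, 1, 1)
        shift = self.to_shift(prompt)[:, :, None, None]
        amp = spec.abs() * (1 + scale) + shift              # modulate amplitude
        spec = torch.polar(amp.clamp(min=0), spec.angle())  # keep the phase
        return torch.fft.irfft2(spec, s=feat.shape[-2:], norm="ortho")

mod = PromptFreqMod(channels=16, prompt_dim=32)
out = mod(torch.randn(2, 16, 24, 24), torch.randn(2, 32))
print(out.shape)  # torch.Size([2, 16, 24, 24])
</code></pre>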
<img src="https://cchsu.info/research_assets/20260324/TIFS_GRACEv2_overview.png" alt="Towards Robust DeepFake Detection under Unstable Face Sequences: Adaptive Sparse Graph Embedding with Order-Free Representation and Explicit Laplacian Spectral Prior" />
<h1>Media Security &amp; DeepFake Robustness</h1>
<p><strong>Towards Robust DeepFake Detection under Unstable Face Sequences: Adaptive Sparse Graph Embedding with Order-Free Representation and Explicit Laplacian Spectral Prior</strong></p>
<p>Submitted to <strong>IEEE Transactions on Information Forensics and Security (TIFS)</strong>.</p>
<p>GRACEv2 targets unstable face sequences caused by compression, occlusion, and shuffled or missing frames. By combining order-free temporal graph embedding with an explicit Laplacian spectral prior, it improves robust DeepFake detection under severe real-world disruptions.</p>
<p><strong>Research Direction.</strong> Assured Visual Intelligence / Robust DeepFake Detection</p>
<p>[<a href="https://arxiv.org/abs/2512.07498" target="_blank" rel="noopener noreferrer">arXiv</a>]</p>
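<p>The order-free ingredient can be illustrated simply: if frames are the nodes of a similarity graph, the spectrum of the graph Laplacian does not depend on frame order at all. A small NumPy sketch of that property (our toy construction, not the GRACEv2 model):</p>
<pre><code class="language-python"># Laplacian spectrum of a frame-similarity graph is permutation-invariant.
import numpy as np

def laplacian_spectrum(feats, k=4):
    """feats: (N, D) per-frame embeddings; returns sorted eigenvalues
    of the normalized graph Laplacian of a symmetrized kNN graph."""
    n = feats.shape[0]
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):                      # keep only the k nearest neighbours
        for j in np.argsort(d2[i])[1:k + 1]:
            W[i, j] = W[j, i] = np.exp(-d2[i, j])
    deg = W.sum(1)
    L = np.eye(n) - W / np.sqrt(np.outer(deg, deg))  # normalized Laplacian
    return np.sort(np.linalg.eigvalsh(L))

rng = np.random.default_rng(0)
frames = rng.normal(size=(10, 8))
perm = rng.permutation(10)
# The spectrum is identical under any frame permutation (shuffled frames).
assert np.allclose(laplacian_spectrum(frames), laplacian_spectrum(frames[perm]))
</code></pre>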
<img src="https://cchsu.info/research_assets/20260324/IJCV_UMCL_paradigm.jpg" alt="UMCL: Unimodal-Generated Multimodal Contrastive Learning for Cross-compression-rate Deepfake Detection" />
<h1>Cross-Compression DeepFake Detection</h1>
<p><strong>UMCL: Unimodal-Generated Multimodal Contrastive Learning for Cross-compression-rate Deepfake Detection</strong></p>
<p>Published in <strong>International Journal of Computer Vision (IJCV)</strong>, Jan. 2026.</p>
<p>UMCL synthesizes compression-robust multimodal cues, including rPPG, temporal landmarks, and semantic embeddings, from a single visual input. The framework improves cross-compression DeepFake detection while preserving interpretable feature relationships.</p>
<p><strong>Research Direction.</strong> Assured Visual Intelligence / Cross-Compression Forensics</p>
<p>[<a href="https://link.springer.com/article/10.1007/s11263-025-02606-0" target="_blank" rel="noopener noreferrer">Springer</a>] [<a href="https://doi.org/10.1007/s11263-025-02606-0" target="_blank" rel="noopener noreferrer">DOI</a>] [<a href="https://arxiv.org/abs/2511.18983" target="_blank" rel="noopener noreferrer">arXiv</a>]</p>
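<p>The contrastive ingredient is standard: embeddings of the same clip under different compression rates form positive pairs, and an InfoNCE objective pulls them together while pushing different clips apart. A minimal PyTorch sketch of that loss alone (UMCL's unimodal-to-multimodal cue synthesis is not reproduced here):</p>
<pre><code class="language-python"># InfoNCE between two compression views of the same batch of clips.
import torch
import torch.nn.functional as F

def info_nce(z_raw, z_compressed, tau=0.07):
    """z_raw, z_compressed: (B, D) embeddings of the same B clips under
    two compression settings; row i of each is the positive pair."""
    z1 = F.normalize(z_raw, dim=1)
    z2 = F.normalize(z_compressed, dim=1)
    logits = z1 @ z2.t() / tau              # (B, B) cosine similarities
    targets = torch.arange(z1.shape[0])     # positives on the diagonal
    return F.cross_entropy(logits, targets)

loss = info_nce(torch.randn(8, 128), torch.randn(8, 128))
print(float(loss))
</code></pre>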
\n\tsrc=\"http:\/\/cchsu.info\/wordpress\/wp-content\/uploads\/2020\/11\/4SFL.png\" title=\"Research\" alt=\"\" \t\tclass=\"so-widget-image\"\/>\n\t<\/div>\n\n<\/div><\/div><\/div><div id=\"pgc-34-9-1\"  class=\"panel-grid-cell\" ><div id=\"panel-34-9-1-0\" class=\"so-panel widget widget_sow-editor panel-first-child panel-last-child\" data-index=\"18\" ><div\n\t\t\t\n\t\t\tclass=\"so-widget-sow-editor so-widget-sow-editor-base\"\n\t\t\t\n\t\t>\n<div class=\"siteorigin-widget-tinymce textwidget\">\n\t<h1>Semi-Supervised Learning in CT Scan Detection<\/h1>\n<p><strong>A Closer Look at Spatial-Slice Features for COVID-19 Detection<\/strong><\/p>\n<p>Presented at <strong>IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2024, DEF-AI-MIA Workshop<\/strong>.<\/p>\n<p><a href=\"https:\/\/cchsu.info\/\" target=\"_blank\" rel=\"noopener noreferrer\">Chih-Chung Hsu<\/a>, Chia-Ming Lee, Yang Fan Chiang, Yi-Shiuan Chou, Chih-Yu Jiang, Shen-Chieh Tai, Chi-Han Tsai<\/p>\n<p><strong>Research Direction.<\/strong> Assured Visual Intelligence \/ Medical Imaging<\/p>\n<p>[<a href=\"https:\/\/arxiv.org\/abs\/2404.01643.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">PDF<\/a>] [<a href=\"https:\/\/arxiv.org\/abs\/2404.01643\" target=\"_blank\" rel=\"noopener noreferrer\">arXiv<\/a>] [<a href=\"https:\/\/github.com\/ming053l\/E2D\" target=\"_blank\" rel=\"noopener noreferrer\">GitHub<\/a>] [<a href=\"https:\/\/allproj001.github.io\/cov19d.github.io\/\" target=\"_blank\" rel=\"noopener noreferrer\">Project Page<\/a>]<\/p>\n<\/div>\n<\/div><\/div><\/div><\/div><\/div><div id=\"pg-34-10\"  class=\"panel-grid panel-has-style\" ><div class=\"panel-row-style panel-row-style-for-34-10\" ><div id=\"pgc-34-10-0\"  class=\"panel-grid-cell\" ><div id=\"panel-34-10-0-0\" class=\"so-panel widget widget_sow-image panel-first-child panel-last-child\" data-index=\"19\" ><div\n\t\t\t\n\t\t\tclass=\"so-widget-sow-image so-widget-sow-image-default-dbf295114b96-34\"\n\t\t\t\n\t\t>\n<div class=\"sow-image-container\">\n\t\t<img \n\tsrc=\"http:\/\/cchsu.info\/wordpress\/wp-content\/uploads\/2020\/11\/RTCS.png\" title=\"Research\" alt=\"\" \t\tclass=\"so-widget-image\"\/>\n\t<\/div>\n\n<\/div><\/div><\/div><div id=\"pgc-34-10-1\"  class=\"panel-grid-cell\" ><div id=\"panel-34-10-1-0\" class=\"so-panel widget widget_sow-editor panel-first-child panel-last-child\" data-index=\"20\" ><div\n\t\t\t\n\t\t\tclass=\"so-widget-sow-editor so-widget-sow-editor-base\"\n\t\t\t\n\t\t>\n<div class=\"siteorigin-widget-tinymce textwidget\">\n\t<h1>Ultra Fast Hyperspectral Image Compressive Sensing<\/h1>\n<p><strong>Real-Time Compressed Sensing for Joint Hyperspectral Image Transmission and Restoration for CubeSat<\/strong><\/p>\n<p>Published in <strong>IEEE Transactions on Geoscience and Remote Sensing (TGRS)<\/strong>.<\/p>\n<p><strong>Future Tech Award (\u672a\u4f86\u79d1\u6280\u734e)<\/strong><\/p>\n<p><a href=\"https:\/\/cchsu.info\/\" target=\"_blank\" rel=\"noopener noreferrer\">Chih-Chung Hsu<\/a>, Chih-Yu Jian, Eng-Shen Tu, Chia-Ming Lee, Guan-Lin Chen<\/p>\n<p><strong>Research Direction.<\/strong> Broad-Spectrum Scientific Sensing \u00d7 Lean Visual Architectures<\/p>\n<p>[<a href=\"https:\/\/ieeexplore.ieee.org\/abstract\/document\/10474407\" target=\"_blank\" rel=\"noopener noreferrer\">IEEE Xplore<\/a>] [<a href=\"https:\/\/github.com\/ming053l\/RTCS\" target=\"_blank\" rel=\"noopener noreferrer\">GitHub<\/a>]<\/p>\n<\/div>\n<\/div><\/div><\/div><\/div><\/div><div id=\"pg-34-11\"  class=\"panel-grid 
panel-has-style\" ><div class=\"panel-row-style panel-row-style-for-34-11\" ><div id=\"pgc-34-11-0\"  class=\"panel-grid-cell\" ><div id=\"panel-34-11-0-0\" class=\"so-panel widget widget_sow-image panel-first-child panel-last-child\" data-index=\"21\" ><div\n\t\t\t\n\t\t\tclass=\"so-widget-sow-image so-widget-sow-image-default-dbf295114b96-34\"\n\t\t\t\n\t\t>\n<div class=\"sow-image-container\">\n\t\t<img \n\tsrc=\"https:\/\/cchsu.info\/wordpress\/wp-content\/uploads\/2022\/12\/pos2.png\" width=\"486\" height=\"477\" srcset=\"https:\/\/cchsu.info\/wordpress\/wp-content\/uploads\/2022\/12\/pos2.png 486w, https:\/\/cchsu.info\/wordpress\/wp-content\/uploads\/2022\/12\/pos2-300x294.png 300w\" sizes=\"(max-width: 486px) 100vw, 486px\" alt=\"\" \t\tclass=\"so-widget-image\"\/>\n\t<\/div>\n\n<\/div><\/div><\/div><div id=\"pgc-34-11-1\"  class=\"panel-grid-cell\" ><div id=\"panel-34-11-1-0\" class=\"so-panel widget widget_sow-editor panel-first-child panel-last-child\" data-index=\"22\" ><div\n\t\t\t\n\t\t\tclass=\"so-widget-sow-editor so-widget-sow-editor-base\"\n\t\t\t\n\t\t>\n<div class=\"siteorigin-widget-tinymce textwidget\">\n\t<h1>COVID-19 Symptoms Detection in CT Scan<\/h1>\n<p><strong>Selected challenge papers and results<\/strong><\/p>\n<p><strong>IEEE ECCV Workshop 2022<\/strong> [1st place in COV19D challenge]<\/p>\n<p><a href=\"https:\/\/openaccess.thecvf.com\/content\/ICCV2021W\/MIA-COV19D\/papers\/Kollias_MIA-COV19D_COVID-19_Detection_Through_3-D_Chest_CT_Image_Analysis_ICCVW_2021_paper.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">Spatial-Slice Feature Learning using Visual Transformer and Essential Slices Selection Module for COVID-19 Detection of CT Scans in the Wild<\/a><\/p>\n<p><strong>IEEE ICCV Workshop 2021<\/strong> [3rd place in COV19D challenge]<\/p>\n<p><a href=\"https:\/\/ieeexplore.ieee.org\/document\/9607525\" target=\"_blank\" rel=\"noopener noreferrer\">Adaptive Distribution Learning with Statistical Hypothesis Testing for COVID-19 CT Scan Classification<\/a><\/p>\n<p>Our models are designed for noisy, in-the-wild CT scans and remain robust across varying spatial and slice resolutions.<\/p>\n<\/div>\n<\/div><\/div><\/div><\/div><\/div><div id=\"pg-34-12\"  class=\"panel-grid panel-has-style\" ><div class=\"panel-row-style panel-row-style-for-34-12\" ><div id=\"pgc-34-12-0\"  class=\"panel-grid-cell\" ><div id=\"panel-34-12-0-0\" class=\"so-panel widget widget_sow-image panel-first-child panel-last-child\" data-index=\"23\" ><div\n\t\t\t\n\t\t\tclass=\"so-widget-sow-image so-widget-sow-image-default-dbf295114b96-34\"\n\t\t\t\n\t\t>\n<div class=\"sow-image-container\">\n\t\t<img \n\tsrc=\"https:\/\/cchsu.info\/wordpress\/wp-content\/uploads\/2022\/12\/IT-SMP_flowchart.png\" width=\"583\" height=\"443\" srcset=\"https:\/\/cchsu.info\/wordpress\/wp-content\/uploads\/2022\/12\/IT-SMP_flowchart.png 583w, https:\/\/cchsu.info\/wordpress\/wp-content\/uploads\/2022\/12\/IT-SMP_flowchart-300x228.png 300w\" sizes=\"(max-width: 583px) 100vw, 583px\" alt=\"\" \t\tclass=\"so-widget-image\"\/>\n\t<\/div>\n\n<\/div><\/div><\/div><div id=\"pgc-34-12-1\"  class=\"panel-grid-cell\" ><div id=\"panel-34-12-1-0\" class=\"so-panel widget widget_sow-editor panel-first-child panel-last-child\" data-index=\"24\" ><div\n\t\t\t\n\t\t\tclass=\"so-widget-sow-editor so-widget-sow-editor-base\"\n\t\t\t\n\t\t>\n<div class=\"siteorigin-widget-tinymce textwidget\">\n\t<h1>Social Media Prediction as Longitudinal Task (2022-)<\/h1>\n<p><strong>A Comprehensive Study of Spatiotemporal Feature 
<img src="https://cchsu.info/wordpress/wp-content/uploads/2022/12/IT-SMP_flowchart.png" alt="" />
<h1>Social Media Prediction as a Longitudinal Task (2022-)</h1>
<p><strong>A Comprehensive Study of Spatiotemporal Feature Learning for Social Media Popularity Prediction</strong></p>
<p>Published in <strong>ACM Multimedia 2022</strong>.</p>
<p><strong>C.C. Hsu</strong>, P.J. Tsai, T.C. Yeh, and X.U. Hou</p>
<p>We reformulate social media popularity prediction as an identity-preserving longitudinal task and study how multimodal temporal features improve prediction reliability over time.</p>
<p>[<a href="https://dl.acm.org/doi/abs/10.1145/3503161.3551593" target="_blank" rel="noopener noreferrer">PDF</a>]</p>
<img src="https://cchsu.info/wordpress/wp-content/uploads/2022/12/result_ACD.png" alt="" />
<h1>Semantic Segmentation for Autonomous Driving (2021-)</h1>
<p><strong>Selected papers for robust and efficient scene understanding</strong></p>
<p><strong>IEEE ICME Workshop 2022</strong></p>
<p>Augmented-Training-Aware Bisenet for Real-Time Semantic Segmentation [<a href="https://ieeexplore.ieee.org/document/9859497/" target="_blank" rel="noopener noreferrer">PDF</a>]</p>
<p><strong>IEEE ICASSP 2022</strong></p>
<p>DCSN: Deformable Convolutional Semantic Segmentation Neural Network for Non-Rigid Scenes [<a href="https://ieeexplore.ieee.org/document/9747586" target="_blank" rel="noopener noreferrer">PDF</a>]</p>
<p>These projects focus on stable, real-time semantic understanding for autonomous driving, balancing robustness and low-compute deployment.</p>
width=\"200\" height=\"150\" sizes=\"(max-width: 200px) 100vw, 200px\" alt=\"\" \t\tclass=\"so-widget-image\"\/>\n\t<\/div>\n\n<\/div><\/div><\/div><div id=\"pgc-34-14-1\"  class=\"panel-grid-cell\" ><div id=\"panel-34-14-1-0\" class=\"so-panel widget widget_sow-editor panel-first-child panel-last-child\" data-index=\"28\" ><div\n\t\t\t\n\t\t\tclass=\"so-widget-sow-editor so-widget-sow-editor-base\"\n\t\t\t\n\t\t>\n<div class=\"siteorigin-widget-tinymce textwidget\">\n\t<h1>Fake Image\/Video (DeepFake) Detection (2018-)<\/h1>\n<p><strong>Selected papers and outreach<\/strong><\/p>\n<p><strong>IEEE ICIP 2019<\/strong> and <strong>Applied Sciences<\/strong><\/p>\n<p>Detecting Generated Image Based on Coupled Network with Two-Step Pairwise Learning<\/p>\n<p><strong>IEEE IS3C 2018<\/strong><\/p>\n<p>Learning to Detect Fake Face Images in the Wild<\/p>\n<p>[News] <a href=\"https:\/\/view.ctee.com.tw\/technology\/17461.html\" target=\"_blank\" rel=\"noopener noreferrer\">\u5de5\u5546\u6642\u5831<\/a> \/ <a href=\"https:\/\/smctw.tw\/3352\/\" target=\"_blank\" rel=\"noopener noreferrer\">\u53f0\u5927\u65b0\u8208\u5a92\u9ad4\u4e2d\u5fc3<\/a><\/p>\n<p>[<a href=\"https:\/\/cchsu.info\/?p=138\" target=\"_blank\" rel=\"noopener noreferrer\">Project<\/a>] [<a href=\"https:\/\/arxiv.org\/abs\/1809.08754\" target=\"_blank\" rel=\"noopener noreferrer\">PDF<\/a>] [<a href=\"https:\/\/github.com\/jesse1029\/Fake-Face-Images-Detection-Tensorflow\" target=\"_blank\" rel=\"noopener noreferrer\">GitHub<\/a>] [<a href=\"http:\/\/divd.cchsu.info\" target=\"_blank\" rel=\"noopener noreferrer\">Online Demo<\/a>]<\/p>\n<p>\u507d\u9020 \/ \u9020\u5047\u7167\u7247\u5075\u6e2c\uff0c\u805a\u7126\u65bc\u53ef\u4fe1\u5a92\u9ad4\u5206\u6790\u8207\u6253\u64ca\u5047\u7167\u7247\u3001\u5047\u65b0\u805e\u3002<\/p>\n<\/div>\n<\/div><\/div><\/div><\/div><\/div><div id=\"pg-34-15\"  class=\"panel-grid panel-has-style\" ><div class=\"panel-row-style panel-row-style-for-34-15\" ><div id=\"pgc-34-15-0\"  class=\"panel-grid-cell\" ><div id=\"panel-34-15-0-0\" class=\"so-panel widget widget_sow-image panel-first-child panel-last-child\" data-index=\"29\" ><div\n\t\t\t\n\t\t\tclass=\"so-widget-sow-image so-widget-sow-image-default-dbf295114b96-34\"\n\t\t\t\n\t\t>\n<div class=\"sow-image-container\">\n\t\t<img \n\tsrc=\"https:\/\/cchsu.info\/wordpress\/wp-content\/uploads\/2020\/12\/flowchart_DCN.png\" width=\"477\" height=\"301\" srcset=\"https:\/\/cchsu.info\/wordpress\/wp-content\/uploads\/2020\/12\/flowchart_DCN.png 477w, https:\/\/cchsu.info\/wordpress\/wp-content\/uploads\/2020\/12\/flowchart_DCN-300x189.png 300w\" sizes=\"(max-width: 477px) 100vw, 477px\" alt=\"\" \t\tclass=\"so-widget-image\"\/>\n\t<\/div>\n\n<\/div><\/div><\/div><div id=\"pgc-34-15-1\"  class=\"panel-grid-cell\" ><div id=\"panel-34-15-1-0\" class=\"so-panel widget widget_sow-editor panel-first-child panel-last-child\" data-index=\"30\" ><div\n\t\t\t\n\t\t\tclass=\"so-widget-sow-editor so-widget-sow-editor-base\"\n\t\t\t\n\t\t>\n<div class=\"siteorigin-widget-tinymce textwidget\">\n\t<h1>Deep Compressed Sensing for Hyperspectral Images (2020-)<\/h1>\n<p><strong>Selected papers for efficient satellite sensing<\/strong><\/p>\n<p><strong>IEEE Transactions on Geoscience and Remote Sensing<\/strong><\/p>\n<p>DCSN: Deep Compressed Sensing Network for Efficient Hyperspectral Data Transmission of Miniaturized Satellite [<a href=\"https:\/\/ieeexplore.ieee.org\/document\/9257426\" target=\"_blank\" rel=\"noopener noreferrer\">PDF<\/a>]<\/p>\n<p><strong>CVGIP 
<img src="https://cchsu.info/wordpress/wp-content/uploads/2020/12/flowchart_DCN.png" alt="" />
<h1>Deep Compressed Sensing for Hyperspectral Images (2020-)</h1>
<p><strong>Selected papers for efficient satellite sensing</strong></p>
<p><strong>IEEE Transactions on Geoscience and Remote Sensing</strong></p>
<p>DCSN: Deep Compressed Sensing Network for Efficient Hyperspectral Data Transmission of Miniaturized Satellite [<a href="https://ieeexplore.ieee.org/document/9257426" target="_blank" rel="noopener noreferrer">PDF</a>]</p>
<p><strong>CVGIP 2020</strong></p>
<p>Deep Joint Compression and Super-Resolution Low-Rank Network for Fast Hyperspectral Data Transmission</p>
<p>[<a href="https://chihungkao.github.io/DCSN/DCSN" target="_blank" rel="noopener noreferrer">Project</a>] [<a href="https://github.com/jesse1029/DCSN" target="_blank" rel="noopener noreferrer">GitHub</a>]</p>
<p>Development of deep-learning-based super-resolution and compressed-sensing techniques for hyperspectral and multispectral imagery.</p>
<img src="https://cchsu.info/wordpress/wp-content/uploads/2020/12/autonomous.png" alt="" />
<h1>Decision-Making of Autonomous Vehicles Using Vision Information (2019-)</h1>
<p><strong>Selected work on robust visual decision-making</strong></p>
<p><strong>Multimedia Tools and Applications</strong></p>
<p>Deep Learning-based Vehicle Trajectory Prediction based on Generative Adversarial Network for Autonomous Driving Applications</p>
<p><strong>IEEE ICCE-TW 2020</strong></p>
<p>Learning to Predict Risky Driving Behaviors for Autonomous Driving</p>
<p>[Large-Scale Vehicle Collision Dataset @ TW] [<a href="https://sites.google.com/view/tvcd-tw/" target="_blank" rel="noopener noreferrer">Link</a>]</p>
<p>Risky-driving-behavior prediction for autonomous-vehicle vision systems, together with a road-scene dataset collected in Taiwan.</p>
class=\"so-panel widget widget_sow-image panel-first-child panel-last-child\" data-index=\"33\" ><div\n\t\t\t\n\t\t\tclass=\"so-widget-sow-image so-widget-sow-image-default-dbf295114b96-34\"\n\t\t\t\n\t\t>\n<div class=\"sow-image-container\">\n\t\t<img \n\tsrc=\"https:\/\/cchsu.info\/wordpress\/wp-content\/uploads\/2020\/12\/a01.jpg\" width=\"371\" height=\"315\" srcset=\"https:\/\/cchsu.info\/wordpress\/wp-content\/uploads\/2020\/12\/a01.jpg 371w, https:\/\/cchsu.info\/wordpress\/wp-content\/uploads\/2020\/12\/a01-300x255.jpg 300w\" sizes=\"(max-width: 371px) 100vw, 371px\" alt=\"\" \t\tclass=\"so-widget-image\"\/>\n\t<\/div>\n\n<\/div><\/div><\/div><div id=\"pgc-34-17-1\"  class=\"panel-grid-cell\" ><div id=\"panel-34-17-1-0\" class=\"so-panel widget widget_sow-editor panel-first-child panel-last-child\" data-index=\"34\" ><div\n\t\t\t\n\t\t\tclass=\"so-widget-sow-editor so-widget-sow-editor-base\"\n\t\t\t\n\t\t>\n<div class=\"siteorigin-widget-tinymce textwidget\">\n\t<h1>Social Media Prediction (2016-)<\/h1>\n<p><strong>Selected outputs and awards<\/strong><\/p>\n<ul>\n<li><strong>ACM Multimedia 2017-2020<\/strong><\/li>\n<li><em>Social Media Prediction Based on Residual Learning and Random Forest<\/em> (2017). See the publication list for newer versions.<\/li>\n<li><span style=\"color: #800000;\">2 Best-Performance Awards and 2 Top-Performance Awards<\/span><\/li>\n<li><span style=\"color: #800000;\">Best Grand Challenge Paper Award (2017)<\/span><\/li>\n<li>[<a href=\"https:\/\/github.com\/jesse1029\/SMHP2018\" target=\"_blank\" rel=\"noopener noreferrer\">GitHub<\/a>] [<a href=\"https:\/\/dl.acm.org\/citation.cfm?id=3127894\" target=\"_blank\" rel=\"noopener noreferrer\">PDF<\/a>]<\/li>\n<\/ul>\n<p>\u9810\u6e2c\u793e\u7fa4\u8cbc\u6587\u9ede\u64ca\u7387\u8207\u9577\u671f\u6d41\u884c\u5ea6\u8b8a\u5316\u3002<\/p><\/div>\n<\/div><\/div><\/div><\/div><\/div><div id=\"pg-34-18\"  class=\"panel-grid panel-has-style\" ><div class=\"panel-row-style panel-row-style-for-34-18\" ><div id=\"pgc-34-18-0\"  class=\"panel-grid-cell\" ><div id=\"panel-34-18-0-0\" class=\"so-panel widget widget_sow-image panel-first-child panel-last-child\" data-index=\"35\" ><div\n\t\t\t\n\t\t\tclass=\"so-widget-sow-image so-widget-sow-image-default-dbf295114b96-34\"\n\t\t\t\n\t\t>\n<div class=\"sow-image-container\">\n\t\t<img \n\tsrc=\"https:\/\/cchsu.info\/wordpress\/wp-content\/uploads\/2020\/12\/SGAN.png\" width=\"1816\" height=\"825\" srcset=\"https:\/\/cchsu.info\/wordpress\/wp-content\/uploads\/2020\/12\/SGAN.png 1816w, https:\/\/cchsu.info\/wordpress\/wp-content\/uploads\/2020\/12\/SGAN-300x136.png 300w, https:\/\/cchsu.info\/wordpress\/wp-content\/uploads\/2020\/12\/SGAN-1024x465.png 1024w, https:\/\/cchsu.info\/wordpress\/wp-content\/uploads\/2020\/12\/SGAN-768x349.png 768w, https:\/\/cchsu.info\/wordpress\/wp-content\/uploads\/2020\/12\/SGAN-1536x698.png 1536w\" sizes=\"(max-width: 1816px) 100vw, 1816px\" alt=\"\" \t\tclass=\"so-widget-image\"\/>\n\t<\/div>\n\n<\/div><\/div><\/div><div id=\"pgc-34-18-1\"  class=\"panel-grid-cell\" ><div id=\"panel-34-18-1-0\" class=\"so-panel widget widget_sow-editor panel-first-child panel-last-child\" data-index=\"36\" ><div\n\t\t\t\n\t\t\tclass=\"so-widget-sow-editor so-widget-sow-editor-base\"\n\t\t\t\n\t\t>\n<div class=\"siteorigin-widget-tinymce textwidget\">\n\t<h1>Identity-Preserving Face Hallucination (2018-2020)<\/h1>\n<p><strong>SiGAN: Siamese Generative Adversarial Network for Identity-Preserving Face Hallucination<\/strong><\/p>\n<p>Published in <strong>IEEE 
<p>Published in <strong>IEEE Transactions on Image Processing (TIP)</strong>, 2019.</p>
<p>[<a href="https://arxiv.org/abs/1807.08370" target="_blank" rel="noopener noreferrer">PDF</a>] [<a href="https://github.com/jesse1029/SiGAN" target="_blank" rel="noopener noreferrer">GitHub</a>]</p>
<p>Restores unclear, blurry, low-resolution face photos while preserving the original identity information.</p>
<img src="https://cchsu.info/wordpress/wp-content/uploads/2020/12/a02.png" alt="" />
<h1>Large-Scale Image Clustering (2016-2017)</h1>
<p><strong>CNN-Based Joint Clustering and Representation Learning with Feature Drift Compensation for Large-Scale Image Data</strong></p>
<p>Published in <strong>TMM 2018</strong> and presented at <strong>ICIP 2017</strong>.</p>
<p>[<a href="https://arxiv.org/abs/1705.07091" target="_blank" rel="noopener noreferrer">PDF</a>] [<a href="https://github.com/jesse1029/Large-scale-image-clustering-feature-drifting" target="_blank" rel="noopener noreferrer">Code</a>]</p>
<p>Clustering algorithms for massive image data.</p>
textwidget\">\n\t<h1>Image Deblocking and Super-Resolution (2013-2014)<\/h1>\n<p><strong>Learning-Based Joint Super-Resolution and Deblocking for a Highly Compressed Image<\/strong><\/p>\n<p>Published in <strong>TMM 2015<\/strong> and presented at <strong>MMSP 2013<\/strong>.<\/p>\n<p><strong>MMSP 2013 Top 10% Paper Award<\/strong><\/p>\n<p>[<a href=\"https:\/\/cchsu.info\/Project\/LQSR\/\" target=\"_blank\" rel=\"noopener noreferrer\">Project Page<\/a>] [<a href=\"https:\/\/drive.google.com\/file\/d\/0B3-EGmMjT8dqM2JLM3BlUkZNX3c\/view?usp=sharing\" target=\"_blank\" rel=\"noopener noreferrer\">PDF<\/a>] [<a href=\"https:\/\/cchsu.info\/Project\/LQSR\/LQSR_Code_20150319.zip\" target=\"_blank\" rel=\"noopener noreferrer\">Matlab Source Code<\/a> (32-bit only)]<\/p>\n<p>\u540c\u6642\u53bb\u9664\u5340\u584a\u6548\u61c9\u4e26\u63d0\u9ad8\u89e3\u6790\u5ea6\uff0c\u8b93\u653e\u5927\u5f8c\u7684\u5f71\u50cf\u7dad\u6301\u6e05\u6670\u3002<\/p>\n<\/div>\n<\/div><\/div><\/div><\/div><\/div><div id=\"pg-34-21\"  class=\"panel-grid panel-has-style\" ><div class=\"panel-row-style panel-row-style-for-34-21\" ><div id=\"pgc-34-21-0\"  class=\"panel-grid-cell\" ><div id=\"panel-34-21-0-0\" class=\"so-panel widget widget_sow-editor panel-first-child panel-last-child\" data-index=\"41\" ><div\n\t\t\t\n\t\t\tclass=\"so-widget-sow-editor so-widget-sow-editor-base\"\n\t\t\t\n\t\t>\n<div class=\"siteorigin-widget-tinymce textwidget\">\n\t<p><iframe loading=\"lazy\" width=\"365\" height=\"352\" src=\"https:\/\/www.youtube.com\/embed\/5AdJU6VOBZM\" frameborder=\"0\" allow=\"autoplay; encrypted-media\" allowfullscreen><\/iframe><\/p>\n<\/div>\n<\/div><\/div><\/div><div id=\"pgc-34-21-1\"  class=\"panel-grid-cell\" ><div id=\"panel-34-21-1-0\" class=\"so-panel widget widget_sow-editor panel-first-child panel-last-child\" data-index=\"42\" ><div\n\t\t\t\n\t\t\tclass=\"so-widget-sow-editor so-widget-sow-editor-base\"\n\t\t\t\n\t\t>\n<div class=\"siteorigin-widget-tinymce textwidget\">\n\t<h1>Super-Resolution of Textured Video (2012-2014)<\/h1>\n<p><strong>Temporally Coherent Super-Resolution of Textured Video via Dynamic Texture Synthesis<\/strong><\/p>\n<p>Published in <strong>IEEE Transactions on Image Processing (TIP)<\/strong> and presented at <strong>MMSP 2014<\/strong>.<\/p>\n<p>[<a href=\"https:\/\/cchsu.info\/Project\/VideoSR\/\" target=\"_blank\" rel=\"noopener noreferrer\">Project Page<\/a>] [<a href=\"https:\/\/drive.google.com\/file\/d\/0B5bMFjPkQlkkbi03ZGk5a0hodlE\/view?usp=sharing\" target=\"_blank\" rel=\"noopener noreferrer\">PDF<\/a>] [<a href=\"http:\/\/www.google.com\/url?q=http%3A%2F%2Fcchsu.info%2FProject%2FVideoSR%2FReleased_DTSSR.zip&amp;sa=D&amp;sntz=1&amp;usg=AFQjCNFH0HR8d9gWCJ__4nW_66lXfBuswA\" target=\"_blank\" rel=\"noopener noreferrer\">Matlab Code<\/a>]<\/p>\n<p>\u63d0\u4f9b\u52d5\u614b\u7d0b\u7406\u8996\u8a0a\u7684\u8d85\u89e3\u6790\u5ea6\u6280\u8853\uff0c\u6539\u5584\u653e\u5927\u5f8c\u7684\u7d30\u7bc0\u8207\u6642\u9593\u4e00\u81f4\u6027\u3002<\/p>\n<\/div>\n<\/div><\/div><\/div><\/div><\/div><div id=\"pg-34-22\"  class=\"panel-grid panel-has-style\" ><div class=\"panel-row-style panel-row-style-for-34-22\" ><div id=\"pgc-34-22-0\"  class=\"panel-grid-cell\" ><div id=\"panel-34-22-0-0\" class=\"so-panel widget widget_sow-image panel-first-child panel-last-child\" data-index=\"43\" ><div\n\t\t\t\n\t\t\tclass=\"so-widget-sow-image so-widget-sow-image-default-dbf295114b96-34\"\n\t\t\t\n\t\t>\n<div class=\"sow-image-container\">\n\t\t<img 
\n\tsrc=\"https:\/\/cchsu.info\/wordpress\/wp-content\/uploads\/2020\/12\/a04.jpg\" width=\"150\" height=\"125\" sizes=\"(max-width: 150px) 100vw, 150px\" alt=\"\" \t\tclass=\"so-widget-image\"\/>\n\t<\/div>\n\n<\/div><\/div><\/div><div id=\"pgc-34-22-1\"  class=\"panel-grid-cell\" ><div id=\"panel-34-22-1-0\" class=\"so-panel widget widget_sow-editor panel-first-child panel-last-child\" data-index=\"44\" ><div\n\t\t\t\n\t\t\tclass=\"so-widget-sow-editor so-widget-sow-editor-base\"\n\t\t\t\n\t\t>\n<div class=\"siteorigin-widget-tinymce textwidget\">\n\t<h1>Quality Assessment for Image Retargeting (2011-2013)<\/h1>\n<p><strong>Objective Quality Assessment for Image Retargeting Based on Perceptual Geometric Distortion and Information Loss<\/strong><\/p>\n<p>Published in <strong>IEEE Journal of Selected Topics in Signal Processing<\/strong> and presented at <strong>VCIP 2013<\/strong>.<\/p>\n<p>[<a href=\"https:\/\/cchsu.info\/Project\/IQA\/\" target=\"_blank\" rel=\"noopener noreferrer\">Project Page<\/a>] [<a href=\"https:\/\/drive.google.com\/file\/d\/0B5bMFjPkQlkkbGk2NDJXcW1ITjQ\/edit?usp=sharing\" target=\"_blank\" rel=\"noopener noreferrer\">PDF<\/a>] [<a href=\"https:\/\/cchsu.info\/Project\/IQA\/SFMetric_Released_20150327.rar\" target=\"_blank\" rel=\"noopener noreferrer\">Matlab Code<\/a>]<\/p>\n<p>\u8a55\u4f30\u5f71\u50cf\u6fc3\u7e2e\u6280\u8853\u7684\u54c1\u8cea\uff0c\u91cf\u5316\u5e7e\u4f55\u5931\u771f\u8207\u8cc7\u8a0a\u6d41\u5931\u3002<\/p>\n<\/div>\n<\/div><\/div><\/div><\/div><\/div><div id=\"pg-34-23\"  class=\"panel-grid panel-has-style\" ><div class=\"panel-row-style panel-row-style-for-34-23\" ><div id=\"pgc-34-23-0\"  class=\"panel-grid-cell\" ><div id=\"panel-34-23-0-0\" class=\"so-panel widget widget_sow-image panel-first-child panel-last-child\" data-index=\"45\" ><div\n\t\t\t\n\t\t\tclass=\"so-widget-sow-image so-widget-sow-image-default-dbf295114b96-34\"\n\t\t\t\n\t\t>\n<div class=\"sow-image-container\">\n\t\t<img \n\tsrc=\"https:\/\/cchsu.info\/wordpress\/wp-content\/uploads\/2020\/12\/a05.png\" width=\"346\" height=\"240\" srcset=\"https:\/\/cchsu.info\/wordpress\/wp-content\/uploads\/2020\/12\/a05.png 346w, https:\/\/cchsu.info\/wordpress\/wp-content\/uploads\/2020\/12\/a05-300x208.png 300w\" sizes=\"(max-width: 346px) 100vw, 346px\" alt=\"\" \t\tclass=\"so-widget-image\"\/>\n\t<\/div>\n\n<\/div><\/div><\/div><div id=\"pgc-34-23-1\"  class=\"panel-grid-cell\" ><div id=\"panel-34-23-1-0\" class=\"so-panel widget widget_sow-editor panel-first-child panel-last-child\" data-index=\"46\" ><div\n\t\t\t\n\t\t\tclass=\"so-widget-sow-editor so-widget-sow-editor-base\"\n\t\t\t\n\t\t>\n<div class=\"siteorigin-widget-tinymce textwidget\">\n\t<h1>Super-Resolution (2010-2011)<\/h1>\n<p><strong>Image Super-Resolution via Feature-Based Affine Transform<\/strong><\/p>\n<p>Presented at <strong>MMSP 2011<\/strong>.<\/p>\n<p>[<a href=\"https:\/\/cchsu.info\/Project\/ImageSR\/\" target=\"_blank\" rel=\"noopener noreferrer\">Project Page<\/a>] [<a href=\"https:\/\/docs.google.com\/viewer?a=v&amp;pid=sites&amp;srcid=ZGVmYXVsdGRvbWFpbnxudGh1amVzc2V8Z3g6MjhiMDg0MWU3MWUzNWM1Nw\" target=\"_blank\" rel=\"noopener noreferrer\">PDF<\/a>] [<a href=\"https:\/\/cchsu.info\/Project\/ImageSR\/Released_Matlab20150318.rar\" target=\"_blank\" rel=\"noopener noreferrer\">Executable Code (Matlab)<\/a>]<\/p>\n<p><strong>Note.<\/strong> We provide an implementation of NLM with the proposed method as an 
<img src="https://cchsu.info/wordpress/wp-content/uploads/2020/12/header_fig.png" alt="" />
<h1>Face Hallucination (2008-2010)</h1>
<p><strong>Face Hallucination Using Bayesian Global Estimation and Local Basis Selection</strong></p>
<p>Presented at <strong>MMSP 2010</strong>.</p>
<p>[<a href="https://cchsu.info/Project/Hallucination/" target="_blank" rel="noopener noreferrer">Project Page</a>] [<a href="https://docs.google.com/viewer?a=v&amp;pid=sites&amp;srcid=ZGVmYXVsdGRvbWFpbnxudGh1amVzc2V8Z3g6Nzk0ZTMwOWE3OTE2MTE3Zg" target="_blank" rel="noopener noreferrer">PDF</a>] [<a href="https://drive.google.com/file/d/0B3-EGmMjT8dqQldQcTZHWTlXamc/view?usp=sharing" target="_blank" rel="noopener noreferrer">Matlab Code &amp; Database</a>]</p>
<p>Face super-resolution that reconstructs clearer faces from extremely low-resolution face images.</p>
<p><iframe loading="lazy" width="365" height="316" src="https://www.youtube.com/embed/ZwPC-AGrWIw" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe></p>
<h1>Video Forensics (2007-2008)</h1>
<p><strong>Video Forgery Detection Using the Correlation of Noise Residue</strong></p>
<p>Presented at <strong>MMSP 2008</strong>.</p>
<p><strong>Citations &gt; 100</strong></p>
<p>[<a href="https://docs.google.com/viewer?a=v&amp;pid=sites&amp;srcid=ZGVmYXVsdGRvbWFpbnxudGh1amVzc2V8Z3g6NTZjN2ZkMDZhNTM5ZDdhMA" target="_blank" rel="noopener noreferrer">PDF</a>] [<a href="https://cchsu.info/Project/vf_released.zip" target="_blank" rel="noopener noreferrer">Matlab Code</a>] [<a href="https://drive.google.com/file/d/0B3-EGmMjT8dqZzJMZ29zTHZNWjg/view?usp=sharing" target="_blank" rel="noopener noreferrer">Database</a>]</p>
<p>Video forensics techniques focusing on video forgery detection and trustworthy media analysis.</p>
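<p>The core cue is that sensor noise residue is consistent across genuine frames but not across spliced content. A hedged NumPy/SciPy sketch of that cue on synthetic data (Gaussian blur stands in for the denoiser; the paper's block statistics and thresholds are not reproduced):</p>
<pre><code class="language-python"># Noise-residue correlation between co-located blocks of two frames.
import numpy as np
from scipy.ndimage import gaussian_filter

def residue(frame):
    """High-frequency residue left after a simple denoiser."""
    return frame - gaussian_filter(frame, sigma=1.5)

def block_corr(r1, r2, y, x, size=16):
    a = r1[y:y + size, x:x + size].ravel()
    b = r2[y:y + size, x:x + size].ravel()
    return float(np.corrcoef(a, b)[0, 1])

rng = np.random.default_rng(0)
noise = rng.normal(scale=2.0, size=(64, 64))          # shared sensor noise
f1 = noise + rng.normal(size=(64, 64))
f2 = noise + rng.normal(size=(64, 64))
f2[:16, :16] = rng.normal(scale=2.2, size=(16, 16))   # "forged" region

r1, r2 = residue(f1), residue(f2)
print("genuine block corr:", block_corr(r1, r2, 32, 32))  # high
print("forged  block corr:", block_corr(r1, r2, 0, 0))    # near zero
</code></pre>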
href=\"https:\/\/docs.google.com\/viewer?a=v&amp;pid=sites&amp;srcid=ZGVmYXVsdGRvbWFpbnxudGh1amVzc2V8Z3g6NTZjN2ZkMDZhNTM5ZDdhMA\" target=\"_blank\" rel=\"noopener noreferrer\">PDF<\/a>] [<a href=\"https:\/\/cchsu.info\/Project\/vf_released.zip\" target=\"_blank\" rel=\"noopener noreferrer\">Matlab Code<\/a>] [<a href=\"https:\/\/drive.google.com\/file\/d\/0B3-EGmMjT8dqZzJMZ29zTHZNWjg\/view?usp=sharing\" target=\"_blank\" rel=\"noopener noreferrer\">Database<\/a>]<\/p>\n<p>\u8996\u8a0a\u9451\u8b58\u6280\u8853\uff0c\u805a\u7126\u65bc\u5f71\u7247\u507d\u9020\u5075\u6e2c\u8207\u53ef\u4fe1\u5a92\u9ad4\u5206\u6790\u3002<\/p>\n<\/div>\n<\/div><\/div><\/div><\/div><\/div><div id=\"pg-34-26\"  class=\"panel-grid panel-has-style\" ><div class=\"panel-row-style panel-row-style-for-34-26\" ><div id=\"pgc-34-26-0\"  class=\"panel-grid-cell\" ><div id=\"panel-34-26-0-0\" class=\"so-panel widget widget_sow-image panel-first-child panel-last-child\" data-index=\"51\" ><div\n\t\t\t\n\t\t\tclass=\"so-widget-sow-image so-widget-sow-image-default-dbf295114b96-34\"\n\t\t\t\n\t\t>\n<div class=\"sow-image-container\">\n\t\t<img \n\tsrc=\"https:\/\/cchsu.info\/wordpress\/wp-content\/uploads\/2020\/12\/a07.png\" width=\"352\" height=\"275\" srcset=\"https:\/\/cchsu.info\/wordpress\/wp-content\/uploads\/2020\/12\/a07.png 352w, https:\/\/cchsu.info\/wordpress\/wp-content\/uploads\/2020\/12\/a07-300x234.png 300w\" sizes=\"(max-width: 352px) 100vw, 352px\" alt=\"\" \t\tclass=\"so-widget-image\"\/>\n\t<\/div>\n\n<\/div><\/div><\/div><div id=\"pgc-34-26-1\"  class=\"panel-grid-cell\" ><div id=\"panel-34-26-1-0\" class=\"so-panel widget widget_sow-editor panel-first-child panel-last-child\" data-index=\"52\" ><div\n\t\t\t\n\t\t\tclass=\"so-widget-sow-editor so-widget-sow-editor-base\"\n\t\t\t\n\t\t>\n<div class=\"siteorigin-widget-tinymce textwidget\">\n\t<h1>Image Authentication (2006-2007)<\/h1>\n<p><strong>Image Authentication and Tampering Localization Based on Watermark Embedding in the Wavelet Domain<\/strong><\/p>\n<p>Published in <strong>Optical Engineering<\/strong>.<\/p>\n<p>[<a href=\"http:\/\/ieeexplore.ieee.org\/document\/1442247\/\" target=\"_blank\" rel=\"noopener noreferrer\">PDF<\/a>] [<a href=\"http:\/\/cchsu.info\/Project\/WaveletWaterMarking_Released.rar\" target=\"_blank\" rel=\"noopener noreferrer\">Source Code<\/a>]<\/p>\n<p>\u5c07\u6d6e\u6c34\u5370\u85cf\u5165\u5f71\u50cf\u4e2d\uff0c\u4e26\u53ef\u8010\u53d7\u4e0d\u540c\u653b\u64ca\u4ee5\u9032\u884c\u5f71\u50cf\u8a8d\u8b49\u8207\u7ac4\u6539\u5b9a\u4f4d\u3002<\/p>\n<\/div>\n<\/div><\/div><\/div><\/div><\/div><\/div>","protected":false},"excerpt":{"rendered":"<p>Language: English | \u7e41\u9ad4\u4e2d\u6587 Research Vision \u7814\u7a76\u4e3b\u8ef8 Advanced  