[News] Two papers accepted to CVPR 2026 main track and one to CVPR 2026 Findings!

February 26, 2026

This CVPR 2026 update highlights two major research thrusts in our lab: Secured & Robust Robotics Vision and Efficient AI. The accepted papers below reflect our recent work on robust perception under real-world degradations, together with hardware-conscious AI systems that remain practical beyond a single accelerator stack.

PhaSR: Generalized Image Shadow Removal with Physically Aligned Priors

Focus: Robust Vision Perception / Image Restoration

PhaSR addresses shadow removal under complex, multi-source illumination where conventional single-light assumptions often fail. By combining closed-form illumination correction with geometry-semantic alignment, the method improves robustness under ambient lighting conditions and pushes restoration toward more trustworthy real-world perception.
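For readers curious what a closed-form illumination correction can look like in its simplest form, here is a minimal sketch under a toy per-channel linear shadow model, with the scale and offset recovered in closed form by moment matching. The model, function names, and parameters are illustrative assumptions for intuition only, not PhaSR's actual formulation.

```python
# Illustrative sketch only: a simple closed-form illumination correction under a
# per-channel linear shadow model I_lit ≈ a * I_shadow + b. This is NOT PhaSR's
# formulation; the model and names are assumptions for intuition.
import numpy as np

def closed_form_illumination_correction(img, shadow_mask):
    """img: HxWx3 float array in [0, 1]; shadow_mask: HxW bool (True = shadowed)."""
    corrected = img.copy()
    lit = img[~shadow_mask]        # pixels assumed to be correctly lit
    shaded = img[shadow_mask]      # pixels inside the shadow region
    for c in range(img.shape[2]):
        # Match first and second moments of shadowed pixels to lit ones:
        # a = std_lit / std_shadow, b = mean_lit - a * mean_shadow (closed form).
        a = lit[:, c].std() / (shaded[:, c].std() + 1e-6)
        b = lit[:, c].mean() - a * shaded[:, c].mean()
        corrected[..., c][shadow_mask] = np.clip(a * shaded[:, c] + b, 0.0, 1.0)
    return corrected

# Toy usage with synthetic data.
rng = np.random.default_rng(0)
base = rng.uniform(0.4, 0.9, size=(64, 64, 3))
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True
shadowed = base.copy()
shadowed[mask] *= 0.35             # simulate a simple multiplicative shadow
restored = closed_form_illumination_correction(shadowed, mask)
print("mean abs error after correction:", np.abs(restored[mask] - base[mask]).mean())
```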

Congratulations to Chia-Ming Lee, Yu-Fan Lin, Yu-Jou Hsiao, Jin-Hui Jiang, Yu-Lun Liu, and collaborators.

ReflexSplit: Single Image Reflection Separation via Layer Fusion-Separation

Focus: Robust Vision Perception / Image Restoration

ReflexSplit targets single-image reflection separation, where transmission and reflection layers are nonlinearly entangled and hard to tease apart in deep decoders. Its dual-stream design, together with Layer Fusion-Separation Blocks and cross-scale gated fusion, improves robustness on both synthetic and real-world benchmarks.
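To give a rough feel for the fuse-then-separate idea, here is a toy two-stream block in PyTorch that jointly processes both feature streams and hands them back through gated residual updates. The block structure, layer names, and gating are assumptions made for illustration; they do not reproduce ReflexSplit's actual Layer Fusion-Separation Blocks or its cross-scale fusion.

```python
# Illustrative sketch only: a toy "fuse then re-separate" block for two feature
# streams (transmission and reflection). Structure, names, and gating are
# assumptions for intuition, not ReflexSplit's architecture.
import torch
import torch.nn as nn

class ToyFusionSeparationBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Jointly process the concatenated streams so each layer can "see" the other.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, 2 * channels, 3, padding=1),
            nn.GELU(),
        )
        # Per-stream gates decide how much of the fused features flows back.
        self.gate_t = nn.Conv2d(2 * channels, channels, 1)
        self.gate_r = nn.Conv2d(2 * channels, channels, 1)
        self.proj_t = nn.Conv2d(2 * channels, channels, 1)
        self.proj_r = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, feat_t, feat_r):
        fused = self.fuse(torch.cat([feat_t, feat_r], dim=1))
        # Gated residual updates keep each stream's identity while exchanging cues.
        feat_t = feat_t + torch.sigmoid(self.gate_t(fused)) * self.proj_t(fused)
        feat_r = feat_r + torch.sigmoid(self.gate_r(fused)) * self.proj_r(fused)
        return feat_t, feat_r

# Toy usage.
block = ToyFusionSeparationBlock(channels=32)
t, r = torch.randn(1, 32, 64, 64), torch.randn(1, 32, 64, 64)
t2, r2 = block(t, r)
print(t2.shape, r2.shape)  # torch.Size([1, 32, 64, 64]) for each stream
```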

Congratulations to Chia-Ming Lee, Yu-Fan Lin, Jin-Hui Jiang, Yu-Jou Hsiao, Chih-Chung Hsu, Yu-Lun Liu, and collaborators.

ELSA: Exact Linear-Scan Attention for Fast and Memory-Light Vision Transformers

Focus: Efficient AI / Hardware-Agnostic Inference

ELSA rethinks softmax attention from a systems perspective and turns it into a prefix-scan problem that preserves exact semantics while lowering memory overhead. The result is a drop-in FP32 attention alternative that works across both high-end and edge hardware, strengthening our lab’s direction toward deployable and trustworthy AI systems.
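As a rough illustration of the underlying idea, the sketch below computes exact softmax attention for a single query with one linear scan over keys and values, using running-max and running-sum accumulators (the familiar online-softmax recurrence). It is meant only to convey the scan flavour in FP32; it is not ELSA's algorithm, kernel, or parallel prefix-scan formulation.

```python
# Illustrative sketch only: exact softmax attention via a single linear scan with
# running accumulators (online softmax). Not ELSA's actual method or kernel.
import numpy as np

def scan_attention(q, K, V):
    """Exact softmax(q K^T / sqrt(d)) V computed in one pass over (K, V)."""
    scale = 1.0 / np.sqrt(q.shape[0])
    m = -np.inf                      # running max of the attention logits
    l = 0.0                          # running normalizer: sum of exp(logit - m)
    acc = np.zeros(V.shape[1])       # running exp-weighted sum of values
    for k, v in zip(K, V):
        s = float(q @ k) * scale
        m_new = max(m, s)
        alpha = np.exp(m - m_new)    # rescales old accumulators; exp(-inf) = 0 on step 1
        w = np.exp(s - m_new)
        l = l * alpha + w
        acc = acc * alpha + w * v
        m = m_new
    return acc / l                   # identical to materializing the full softmax row

# Sanity check against the standard (materialized) attention computation.
rng = np.random.default_rng(0)
q, K, V = rng.normal(size=16), rng.normal(size=(128, 16)), rng.normal(size=(128, 32))
logits = (K @ q) / np.sqrt(16)
weights = np.exp(logits - logits.max())
weights /= weights.sum()
print(np.allclose(scan_attention(q, K, V), weights @ V))  # True up to FP rounding
```

Because each step only rescales and accumulates, the per-query working memory stays constant in the sequence length while the output matches the materialized softmax exactly.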

Congratulations to Wo-Ting Liao, Xin-Di Ma, Chia-Ming Lee, and all collaborators for this strong systems-and-vision result.

By Jess