{"id":1575,"date":"2026-02-26T17:22:23","date_gmt":"2026-02-26T09:22:23","guid":{"rendered":"https:\/\/cchsu.info\/wordpress\/?p=1575"},"modified":"2026-04-07T11:23:04","modified_gmt":"2026-04-07T03:23:04","slug":"news-2-papers-accepted-to-cvpr-2026-main-track-and-1-to-the-findings-workshop","status":"publish","type":"post","link":"https:\/\/cchsu.info\/wordpress\/2026\/02\/26\/news-2-papers-accepted-to-cvpr-2026-main-track-and-1-to-the-findings-workshop\/","title":{"rendered":"[News] Two papers accepted to CVPR 2026 main track and one to CVPR 2026 Findings!"},"content":{"rendered":"<div class=\"news-paper-block\">\n<p>This CVPR 2026 update highlights two major research thrusts in our lab: <strong>Secured &amp; Robust Robotics Vision<\/strong> and <strong>Efficient AI<\/strong>. The accepted papers below reflect our recent work on robust perception under real-world degradations, together with hardware-conscious AI systems that remain practical beyond a single accelerator stack.<\/p>\n<\/div>\n<div class=\"news-paper-block\">\n<h3>PhaSR: Generalized Image Shadow Removal with Physically Aligned Priors<\/h3>\n<p><strong>Focus:<\/strong> Robust Vision Perception \/ Image Restoration<\/p>\n<p>PhaSR addresses shadow removal under complex, multi-source illumination where conventional single-light assumptions often fail. By combining closed-form illumination correction with geometry-semantic alignment, the method improves robustness under ambient lighting conditions and pushes restoration toward more trustworthy real-world perception.<\/p>\n<p class=\"news-links\">[<a href=\"https:\/\/arxiv.org\/abs\/2601.17470\" target=\"_blank\" rel=\"noopener noreferrer\">arXiv<\/a>] [<a href=\"https:\/\/github.com\/ming053l\/PhaSR\" target=\"_blank\" rel=\"noopener noreferrer\">GitHub<\/a>]<\/p>\n<p>Congratulations to Chia-Ming Lee, Yu-Fan Lin, Yu-Jou Hsiao, Jin-Hui Jiang, Yu-Lun Liu, and collaborators.<\/p>\n<\/div>\n<div class=\"news-paper-block\">\n<h3>ReflexSplit: Single Image Reflection Separation via Layer Fusion-Separation<\/h3>\n<p><strong>Focus:<\/strong> Robust Vision Perception \/ Image Restoration<\/p>\n<p>ReflexSplit targets single-image reflection separation, where transmission and reflection layers are nonlinearly entangled and difficult to disentangle in deep decoders. Its dual-stream design, together with Layer Fusion-Separation Blocks and cross-scale gated fusion, improves robustness on both synthetic and real-world benchmarks.<\/p>\n<p class=\"news-links\">[<a href=\"https:\/\/arxiv.org\/abs\/2601.17468\" target=\"_blank\" rel=\"noopener noreferrer\">arXiv<\/a>] [<a href=\"https:\/\/github.com\/wuw2135\/ReflexSplit\" target=\"_blank\" rel=\"noopener noreferrer\">GitHub<\/a>]<\/p>\n<p>Congratulations to Chia-Ming Lee, Yu-Fan Lin, Jin-Hui Jiang, Yu-Jou Hsiao, Chih-Chung Hsu, Yu-Lun Liu, and collaborators.<\/p>\n<\/div>\n<div class=\"news-paper-block\">\n<h3>ELSA: Exact Linear-Scan Attention for Fast and Memory-Light Vision Transformers<\/h3>\n<p><strong>Focus:<\/strong> Efficient AI \/ Hardware-Agnostic Inference<\/p>\n<p>ELSA rethinks softmax attention from a systems perspective and turns it into a prefix-scan problem that preserves exact semantics while lowering memory overhead. 
The result is a drop-in FP32 attention alternative that runs on both high-end and edge hardware, strengthening our lab's direction toward deployable and trustworthy AI systems.</p>
<p class="news-links">Accepted to CVPR 2026 Findings (CVPRF)</p>
<p>Congratulations to Wo-Ting Liao, Xin-Di Ma, Chia-Ming Lee, and all collaborators for this strong systems-and-vision result.</p>
</div>
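<div class="news-paper-block">
<p>As background on the prefix-scan framing, the sketch below is not ELSA itself but the standard online-softmax recurrence that exact linear-scan formulations build on: attention for a query is computed in a single pass over the keys and values while carrying only a running maximum, a running normalizer, and a running value accumulator, so the dense attention matrix is never materialized and the exact softmax result is still recovered. All function and variable names here are illustrative assumptions, not identifiers from the paper or its repository.</p>
<pre><code># A minimal sketch (not the ELSA implementation): exact softmax attention for one
# query computed as a left-to-right scan over keys/values, keeping only a running
# max, a running normalizer, and a running weighted value sum. Memory per query is
# O(d) instead of O(N), yet the output matches the dense softmax exactly.
import numpy as np

def scan_attention(q, K, V):
    """q: (d,), K: (N, d), V: (N, d_v). Returns the exact attention output for q."""
    d = q.shape[0]
    m = -np.inf                  # running max of scaled scores (numerical stability)
    l = 0.0                      # running softmax normalizer
    acc = np.zeros(V.shape[1])   # running weighted sum of values
    for k, v in zip(K, V):
        s = q @ k / np.sqrt(d)   # scaled dot-product score for this key
        m_new = max(m, s)
        scale = np.exp(m - m_new)   # rescale previous partial results to the new max
        p = np.exp(s - m_new)       # current key's unnormalized weight
        l = l * scale + p
        acc = acc * scale + p * v
        m = m_new
    return acc / l

# Sanity check against the dense softmax-attention formulation.
rng = np.random.default_rng(0)
q, K, V = rng.normal(size=8), rng.normal(size=(32, 8)), rng.normal(size=(32, 8))
scores = K @ q / np.sqrt(8)
w = np.exp(scores - scores.max())
assert np.allclose(scan_attention(q, K, V), (w / w.sum()) @ V)
</code></pre>
<p>Because the per-step update above can be merged associatively, the same recurrence also admits a parallel prefix-scan over blocks of keys, which is the general idea the paper's title refers to.</p>
</div>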