My current work lies at the intersection of computer vision, reinforcement learning, and generative modeling, where I study how 2D and 3D representations can be unified to enable robust perception–action coupling. I am particularly interested in structure-aware visual representations that support cross-view understanding, generalization across environments, and interaction-driven learning in embodied settings.
Previously, I explored generative AI for image and video synthesis during my internship at Baidu. I am currently a remote research intern at the Centre for Frontier AI Research (CFAR), Agency for Science, Technology and Research (A*STAR), supervised by Dr. Xingrui Yu, where I work on generalizable reinforcement learning for embodied agents, focusing on agent-centric formulations and transferable policies grounded in implicit spatial representations that generalize across tasks and scenes.
More broadly, my goal is to develop spatially grounded learning frameworks that bridge perception, geometry, and control, advancing the next generation of embodied systems that can reason about and act within complex real-world environments.
Successfully defended my Master's thesis proposal.
Aug 2025
My first paper was accepted by the ISPRS Journal of Photogrammetry and Remote Sensing. [Paper]
Aug 2025
Began a remote research internship at CFAR, A*STAR, supervised by Dr. Xingrui Yu, in collaboration with Zhenglin Wan.
Jul 2025
Attended the 2025 Annual Academic Conference on Photogrammetry and Remote Sensing (CSGPC) in Kunming, China.
Dec 2024
Began a research internship at Baidu in Shenzhen, supervised by Dr. Yan Zhang, exploring frontier text-to-image and text-to-video generation.
Jul 2024
Began collaboration on the SCoDe project under the guidance of Dr. Zimin Xia.
Sep 2023
Enrolled in the Master's program at the State Key Laboratory LIESMARS, Wuhan University, as a recommended exemption student, under the supervision of Prof. Xianwei Zheng.
Vision-Language-Action (VLA) models achieve strong performance in robotic manipulation, but reinforcement learning (RL) fine-tuning often degrades generalization under spatial distribution shifts. We analyze flow-matching VLA policies and identify the collapse of spatial inductive bias as a key factor limiting robust transfer. To address this, we propose SA-VLA, which explicitly grounds VLA policies in spatial structure by integrating implicit spatial representations, spatially-aware step-level dense rewards, and SCAN, a spatially-conditioned exploration strategy tailored for flow-matching policies. This principled alignment mitigates policy over-specialization and preserves zero-shot generalization to more complex tasks. Experiments on challenging multi-object and cluttered benchmarks demonstrate that SA-VLA enables stable RL fine-tuning and substantially more robust, transferable behaviors.
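For readers curious what RL fine-tuning of a flow-matching action head with a spatially-aware dense reward might look like in code, here is a minimal, self-contained sketch. It is not the SA-VLA implementation: `spatial_reward`, the dimensions, and the reward-weighted regression surrogate are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class FlowPolicy(nn.Module):
    """Velocity field v(a_t, t | obs) for flow-matching action generation."""
    def __init__(self, obs_dim=32, act_dim=7, hidden=128):
        super().__init__()
        self.act_dim = act_dim
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs, a_t, t):
        return self.net(torch.cat([obs, a_t, t], dim=-1))

def sample_action(policy, obs, steps=8):
    """Euler integration of the flow from noise a_0 ~ N(0, I) to an action a_1."""
    a = torch.randn(obs.shape[0], policy.act_dim)
    for k in range(steps):
        t = torch.full((obs.shape[0], 1), k / steps)
        a = a + policy(obs, a, t) / steps
    return a

def spatial_reward(action, target_xyz):
    """Hypothetical dense reward: negative distance of the commanded
    end-effector position (first three action dims) to a spatial goal."""
    return -torch.norm(action[:, :3] - target_xyz, dim=-1)

policy = FlowPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

obs = torch.randn(16, 32)            # batch of observations
goal = torch.randn(16, 3)            # per-sample spatial goals
with torch.no_grad():
    action = sample_action(policy, obs)
r = spatial_reward(action, goal)

# Reward-weighted flow-matching regression: pull the velocity field
# toward high-reward sampled actions instead of imitation targets.
noise = torch.randn_like(action)
t = torch.rand(16, 1)
a_t = (1 - t) * noise + t * action
v_pred = policy(obs, a_t, t)
w = torch.softmax(r, dim=0).unsqueeze(-1)
loss = (w * (v_pred - (action - noise)) ** 2).sum()
opt.zero_grad()
loss.backward()
opt.step()
```

The point of the sketch is the structure of the update: actions are sampled by integrating the learned flow, scored with a dense spatial reward, and fed back through a regression loss that keeps the flow-matching objective intact rather than replacing it with a generic policy gradient.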
Scale-aware Co-visible Region Detection for Image Matching
Xu Pan, Zimin Xia, Xianwei Zheng*
ISPRS Journal of Photogrammetry and Remote Sensing · h5-index 113 · JCR Q1 · IF 12.2 · 2025
Matching images with significant scale differences remains a persistent challenge in photogrammetry and remote sensing: the scale discrepancy degrades appearance consistency and introduces uncertainty in keypoint localization. Existing methods mitigate scale variation through scale pyramids or scale-aware training, yet they still struggle under drastic scale gaps. We instead tackle the problem by detecting co-visible regions between image pairs and propose SCoDe (Scale-aware Co-visible region Detector), which both identifies co-visible regions and aligns their scales for robust, hierarchical point correspondence matching. Specifically, SCoDe employs a novel Scale Head Attention mechanism to map and correlate features across multiple scale subspaces, and uses a learnable query to aggregate scale-aware information from both images for co-visible region detection. Correspondences can then be established in a coarse-to-fine hierarchy, mitigating semantic and localization uncertainties. Extensive experiments on three challenging datasets demonstrate that SCoDe outperforms state-of-the-art methods, improving the precision of a modern local feature matcher by 8.41%. Notably, SCoDe shows a clear advantage when handling images with drastic scale variations.
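As a rough illustration of the Scale Head Attention idea (my reading of the abstract, not the actual SCoDe code), one can pool features into several scale subspaces, one per attention head, and let a learnable query aggregate scale-aware evidence across them. All dimensions here are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleHeadAttention(nn.Module):
    """Each head reads tokens pooled at a different image scale; a learnable
    query aggregates scale-aware evidence into one descriptor."""
    def __init__(self, dim=256, scales=(1, 2, 4, 8)):
        super().__init__()
        self.scales = scales
        self.query = nn.Parameter(torch.randn(1, 1, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=len(scales), batch_first=True)

    def forward(self, feat):               # feat: (B, C, H, W), H and W divisible by scales
        B = feat.shape[0]
        tokens = []
        for s in self.scales:              # one token set per scale subspace
            f = F.avg_pool2d(feat, kernel_size=s) if s > 1 else feat
            tokens.append(f.flatten(2).transpose(1, 2))   # (B, HW/s^2, C)
        kv = torch.cat(tokens, dim=1)
        q = self.query.expand(B, -1, -1)
        out, _ = self.attn(q, kv, kv)      # aggregate across scales
        return out.squeeze(1)              # (B, C) co-visibility descriptor

feat = torch.randn(2, 256, 32, 32)
desc = ScaleHeadAttention()(feat)          # torch.Size([2, 256])
```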
SleeperVLA: Towards Backdoor-Based Ownership Verification for Vision-Language-Action Models
Ming Sun, Rui Wang, Xingrui Yu, Lihua Jing, Hangyu Du, Zhenglin Wan, Xu Pan, Ivor Tsang
(Under Review) 2026
Vision-Language-Action models (VLAs) support generalist robotic control by learning end-to-end decision policies directly from multi-modal inputs. As trained VLAs are increasingly shared and adapted, protecting model ownership becomes essential for secure deployment and responsible open-source usage. In this paper, we present SleeperVLA, the first backdoor-based ownership verification framework specifically designed for VLAs. SleeperVLA embeds a stealthy and harmless backdoor watermark into the protected model during training by injecting secret messages into embodied visual data. For post-release verification, we propose a swap-and-detect mechanism, in which a trigger-aware projector and an external classifier head are used to activate and detect the embedded backdoor based on prediction probabilities. Extensive experiments across multiple datasets, model architectures, and adaptation settings demonstrate that SleeperVLA enables reliable and unique ownership verification while preserving benign task performance. Further results show that the embedded watermark remains detectable under post-release model adaptation.
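A heavily hedged sketch of what a swap-and-detect style check could look like, assuming the verifier holds a trigger-aware projector and a small head scored against the suspect model's output probabilities. All module names and the threshold are hypothetical, not SleeperVLA's API.

```python
import torch

def verify_ownership(suspect_model, trigger_projector, detector_head,
                     images, secret, tau=0.9):
    """Embed the secret message, query the suspect model, and score its
    prediction probabilities for the watermark response."""
    triggered = trigger_projector(images, secret)      # inject the secret trigger
    with torch.no_grad():
        probs = torch.softmax(suspect_model(triggered), dim=-1)
        evidence = torch.sigmoid(detector_head(probs)).mean()
    return evidence.item() > tau                       # claim ownership if above threshold
```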
SAMatcher: Co-Visibility Modeling with Segment Anything for Robust Feature Matching
Xu Pan, Qiyuan Ma, Jintao Zhang, He Chen*, Xianwei Zheng*
IEEE Transactions on Geoscience and Remote Sensing (Under Review) · h5-index 156 · JCR Q1 · IF 8.6 · 2026
Reliable correspondence estimation is a long-standing problem in computer vision and a critical component of applications such as Structure from Motion, visual localization, and image registration. While recent learning-based approaches have substantially improved local feature descriptiveness, most methods still rely on implicit assumptions about shared visual content across views, leading to brittle behavior when spatial support, semantic context, or visibility patterns diverge between images. We propose SAMatcher, a novel feature matching framework that formulates correspondence estimation through explicit co-visibility modeling. Rather than directly establishing point-wise matches from local appearance, SAMatcher first predicts consistent region masks and bounding boxes within a shared cross-view semantic space, which serve as structured priors to guide and regularize correspondence estimation. SAMatcher employs a symmetric cross-view interaction mechanism that treats paired images as interacting token sequences, enabling bidirectional semantic alignment and selective reinforcement of jointly supported regions. Based on this formulation, a reliability-aware supervision strategy jointly constrains region segmentation and geometric localization, enforcing cross-view consistency during training. Extensive experiments on challenging benchmarks demonstrate that SAMatcher significantly improves correspondence robustness under large scale and viewpoint variations. Beyond quantitative gains, our results indicate that monocular visual foundation models can be systematically extended to multi-view correspondence estimation when co-visibility is explicitly modeled, offering new insights for fusion-based visual understanding.
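To make "treats paired images as interacting token sequences" concrete, here is a minimal sketch of one symmetric cross-view interaction step: the same cross-attention module updates each image's tokens conditioned on the other view. This is an interpretation of the abstract, not SAMatcher's code, and the dimensions are illustrative.

```python
import torch
import torch.nn as nn

class CrossViewBlock(nn.Module):
    """One symmetric interaction step over two token sequences."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.cross = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x, y):               # x, y: (B, N, C) token sequences
        nx, ny = self.norm(x), self.norm(y)
        dx, _ = self.cross(nx, ny, ny)     # view A attends to view B
        dy, _ = self.cross(ny, nx, nx)     # view B attends to view A
        return x + dx, y + dy              # residual bidirectional update

a = torch.randn(2, 1024, 256)              # tokens from image A
b = torch.randn(2, 1024, 256)              # tokens from image B
a, b = CrossViewBlock()(a, b)
```

Sharing one module for both directions is what makes the interaction symmetric: neither image is privileged as "source" or "target", which matches the bidirectional alignment described above.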
The Institutional Filter: How Trust Shapes Inequalities Between Domestic and Global AI Models
Jiashen Huang, Xu Pan
Conference of the International Association for Media and Communication Research (Under Review) · Communication · 2026
Artificial intelligence is increasingly woven into the way people communicate, think, and make decisions. Yet trust in AI does not grow evenly across contexts; it carries traces of national identity, institutional credibility, and emotional attachment. This study examines how institutional trust shapes user trust in domestic (DeepSeek) and global (ChatGPT) large language models (LLMs) in China. Specifically, it distinguishes between cognitive and affective dimensions of trust. Using survey data from 405 participants, we found that higher institutional trust strengthens emotional confidence in domestic AI models, while at low levels of institutional trust this domestic advantage in perceived competence disappears. By examining the relationship between institutional trust and AI adoption, this study deepens theoretical insights into global communication inequalities in the digital era. The findings suggest that institutional trust operates as a social resource, channeling legitimacy into technological trust and thus contributing to the uneven distribution of trust in AI technologies across societal groups. They also offer policy insights for inclusive AI governance and the promotion of global technological equity.
Research on Large-Scale Disparity Image Matching Method Guided by Co-Visible Region
Xu Pan, Xianwei Zheng*
Master's Thesis 2026
Projects
SA-VLA
2026
A research project on robust RL adaptation of flow-matching–based VLA models for robotic manipulation, focusing on generalization under distribution shifts in challenging benchmarks.
Vision-Language-Action Model · Robotic Manipulation · Flow-Matching · Reinforcement Learning
A research project on robust image matching in robot vision, photogrammetry and remote sensing, using explicit co-visibility modeling to handle extreme scale and viewpoint variations.
Co-visibility · Image Matching · 3D Vision · Segmentation · Photogrammetry · SCoDe · SAMatcher
The GNDASystem (Global Natural Disaster Assessment System) is a web-based geographic information system application designed for the analysis and assessment of natural disasters.
Natural Disasters · Geographic Information System (GIS)
The I2RSI System (Intelligent Interpretation of Remote Sensing Images) is a web-based application for remote sensing image interpretation, powered by the Baidu PaddlePaddle deep learning framework.
We live in an age tyrannized by efficiency, outcomes, and speed, to the point that nothing lasts and nothing leaves a deep impression. In the midst of noisy bubbles and short-lived hype, I hope to take time to think carefully, to doubt, to refine, and to do research that is genuinely meaningful and worth remembering.