My current work lies at the intersection of computer vision, reinforcement learning, and generative modeling,
where I study how 2D and 3D representations can be unified to enable robust perception–action coupling.
I am particularly interested in structure-aware visual representations that support cross-view understanding, generalization across environments, and interaction-driven learning in embodied settings.
Previously, I explored generative AI for image and video synthesis during my internship at
Baidu.
I am currently a remote research intern at the
Centre for Frontier AI Research (CFAR), Agency for Science, Technology and Research (A*STAR),
supervised by
Dr. Xingrui Yu,
where I work on generalizable reinforcement learning for embodied agents, focusing on agent-centric formulations and transferable policies grounded in implicit spatial representations that generalize across tasks and scenes.
More broadly, my goal is to develop spatially grounded learning frameworks that bridge perception, geometry, and control,
advancing the next generation of embodied systems that can reason about and act within complex real-world environments.
News
Nov 2025
Completed the proposal defense for my Master’s thesis.
Aug 2025
First paper accepted by ISPRS Journal of Photogrammetry and Remote Sensing.
[Paper]
Aug 2025
Started a remote research internship at CFAR, A*STAR, supervised by
Dr. Xingrui Yu,
in collaboration with Zhenglin Wan.
Jul 2025
Attended the 2025 Annual Academic Conference on Photogrammetry and Remote Sensing, CSGPC in Kunming.
Dec 2024
Started a research internship at Baidu in Shenzhen,
supervised by Dr. Yan Zhang,
exploring state-of-the-art text-to-image and text-to-video generation.
Matching images with significant scale differences remains a persistent challenge in photogrammetry and remote sensing: the scale discrepancy degrades appearance consistency and introduces uncertainty in keypoint localization. Existing methods address scale variation through scale pyramids or scale-aware training, yet matching under drastic scale differences remains an open problem. We instead tackle scale differences by detecting co-visible regions between image pairs and propose SCoDe (Scale-aware Co-visible region Detector), which both identifies co-visible regions and aligns their scales for highly robust, hierarchical point correspondence matching. Specifically, SCoDe employs a novel Scale Head Attention mechanism to map and correlate features across multiple scale subspaces, and uses a learnable query to aggregate scale-aware information from both images for co-visible region detection. Correspondences can then be established in a coarse-to-fine hierarchy, mitigating semantic and localization uncertainties. Extensive experiments on three challenging datasets demonstrate that SCoDe outperforms state-of-the-art methods, improving the precision of a modern local feature matcher by 8.41%. Notably, SCoDe shows a clear advantage when handling images with drastic scale variations.
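To illustrate the general idea (this is a minimal sketch, not the paper's implementation; all function and variable names here are hypothetical), attention with a single learnable query over tokens pooled from several pyramid levels can be written in a few lines of numpy:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scale_query_attention(feats_per_scale, W_k, W_v, query):
    """Aggregate features from multiple pyramid levels with one learnable query.

    feats_per_scale: list of (n_i, d) arrays, one per scale level
    W_k, W_v: (d, d_k) key/value projections (shared across scales here,
              a simplification for the sketch)
    query: (d_k,) learnable query vector
    Returns a (d_k,) scale-aware summary descriptor.
    """
    tokens = np.concatenate(feats_per_scale, axis=0)        # (N, d)
    keys = tokens @ W_k                                     # (N, d_k)
    values = tokens @ W_v                                   # (N, d_k)
    attn = softmax(keys @ query / np.sqrt(query.shape[0]))  # (N,) weights
    return attn @ values                                    # (d_k,)

rng = np.random.default_rng(0)
d, d_k = 16, 8
feats = [rng.normal(size=(n, d)) for n in (64, 16, 4)]  # three pyramid levels
W_k, W_v = rng.normal(size=(d, d_k)), rng.normal(size=(d, d_k))
q = rng.normal(size=d_k)
summary = scale_query_attention(feats, W_k, W_v, q)
assert summary.shape == (d_k,)
```

The summary vector then plays the role of a compact, scale-aware description of shared content, from which a co-visible region could be predicted.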
SAMatcher: Co-Visibility Modeling with Segment Anything for Robust Feature Matching
Xu Pan, Qiyuan Ma, Jintao Zhang, Xianwei Zheng*
IEEE Transactions on Geoscience and Remote Sensing (In Preparation) 2026
Reliable correspondence estimation is a long-standing problem in computer vision and a critical component of applications such as Structure from Motion, visual localization, and image registration. While recent learning-based approaches have substantially improved local feature descriptiveness, most methods still rely on implicit assumptions about shared visual content across views, leading to brittle behavior when spatial support, semantic context, or visibility patterns diverge between images. We propose SAMatcher, a novel feature matching framework that formulates correspondence estimation through explicit co-visibility modeling. Rather than directly establishing point-wise matches from local appearance, SAMatcher first predicts consistent region masks and bounding boxes within a shared cross-view semantic space, which serve as structured priors to guide and regularize correspondence estimation. SAMatcher employs a symmetric cross-view interaction mechanism that treats paired images as interacting token sequences, enabling bidirectional semantic alignment and selective reinforcement of jointly supported regions. Based on this formulation, a reliability-aware supervision strategy jointly constrains region segmentation and geometric localization, enforcing cross-view consistency during training. Extensive experiments on challenging benchmarks demonstrate that SAMatcher significantly improves correspondence robustness under large scale and viewpoint variations. Beyond quantitative gains, our results indicate that monocular visual foundation models can be systematically extended to multi-view correspondence estimation when co-visibility is explicitly modeled, offering new insights for fusion-based visual understanding.
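As a toy illustration of how a co-visibility prior can regularize matching (a sketch under simplified assumptions, not the SAMatcher pipeline; the function name and masks are hypothetical), mutual nearest-neighbour matching can simply be restricted to keypoints inside predicted co-visible regions:

```python
import numpy as np

def masked_mutual_nn(desc_a, desc_b, covis_a, covis_b):
    """Mutual nearest-neighbour matching restricted to co-visible keypoints.

    desc_a: (Na, d), desc_b: (Nb, d) L2-normalised descriptors
    covis_a, covis_b: boolean masks marking keypoints inside the
                      predicted co-visible regions (the structured prior)
    Returns (i, j) index pairs that are mutual nearest neighbours
    and lie inside both masks.
    """
    sim = desc_a @ desc_b.T          # cosine similarity matrix
    sim[~covis_a, :] = -np.inf       # suppress non-co-visible rows
    sim[:, ~covis_b] = -np.inf       # suppress non-co-visible columns
    nn_ab = sim.argmax(axis=1)       # best match in B for each point in A
    nn_ba = sim.argmax(axis=0)       # best match in A for each point in B
    return [(i, j) for i, j in enumerate(nn_ab)
            if covis_a[i] and covis_b[j] and nn_ba[j] == i]

# Toy unit descriptors: point k in A corresponds to point k in B.
kps_a = np.eye(4)
kps_b = np.eye(4)
covis_a = np.array([True, True, False, True])
covis_b = np.array([True, True, True, False])
print(masked_mutual_nn(kps_a, kps_b, covis_a, covis_b))  # → [(0, 0), (1, 1)]
```

Points 2 and 3 are dropped because each falls outside one image's co-visible mask, which is the mechanism by which such a prior prunes correspondences that local appearance alone would accept.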
Spatially-Aware Reinforcement Learning for Flow-Matching Vision-Language-Action Models
Xu Pan, Zhenglin Wan, Xingrui Yu*
(In Preparation) 2026
Research on Large-Scale Disparity Image Matching Method Guided by Co-Visible Region
Xu Pan, Xianwei Zheng*
Master's Thesis 2026
The Institutional Filter: How Trust Shapes Inequalities Between Domestic and Global AI Models
Jiashen Huang, Xu Pan
(Under Review) 2026
Artificial intelligence is increasingly woven into the way people communicate, think, and make decisions. Yet trust in AI does not grow evenly across contexts; it carries traces of national identity, institutional credibility, and emotional attachment. This study examines how institutional trust shapes user trust in domestic (DeepSeek) and global (ChatGPT) large language models (LLMs) in China, distinguishing between the cognitive and affective dimensions of trust. Using survey data from 405 participants, we find that higher institutional trust strengthens emotional confidence in domestic AI models, whereas at low levels of institutional trust this domestic advantage in perceived competence disappears. By examining the relationship between institutional trust and AI adoption, this study deepens theoretical insight into global communication inequalities in the digital era. The findings suggest that institutional trust operates as a social resource, channeling legitimacy into technological trust and thereby contributing to the uneven distribution of trust in AI technologies across societal groups. They also offer policy insights for inclusive AI governance and the promotion of global technological equity.
Personal Philosophy
I follow Stoic philosophy. Life is a joyful ascent: a true mountaineer delights in the climb
itself, not just the summit.
“Thou sufferest this justly: for thou choosest rather to become good to-morrow than to be
good to-day.”
We live in an age tyrannized by efficiency, outcomes, and speed, to the point that nothing
lasts and nothing leaves a deep impression. In the midst of noisy bubbles and short-lived
hype, I hope to take time to think carefully, to doubt, to refine, and to do research that
is genuinely meaningful and worth remembering.