Hyperspectral and Multispectral Image Fusion with Arbitrary Resolution Through Self-Supervised Representations
Abstract
The fusion of a low-resolution hyperspectral image (LR-HSI) with a high-resolution multispectral image (HR-MSI) has emerged as an effective technique for achieving HSI super-resolution (SR). Previous studies have mainly concentrated on estimating the posterior distribution of the latent high-resolution hyperspectral image (HR-HSI), leveraging an appropriate image prior and a likelihood computed from the discrepancy between the latent HSI and the observed images. Among the various priors, low-rankness stands out for preserving the characteristics of the latent HSI through matrix factorization. However, a key limitation of previous studies is that their fusion models are tied to fixed resolution scales and must be retrained whenever a higher output resolution is needed. To overcome this limitation, we propose a novel continuous low-rank factorization (CLoRF) that integrates two neural representations into the matrix factorization, capturing spatial and spectral information, respectively. This approach harnesses both the low-rankness of the matrix factorization and the continuity of the neural representations in a self-supervised manner. By adhering to the inherently continuous nature of the underlying hyperspectral image, CLoRF recovers the image in continuous form, enabling the subsequent generation of discrete hyperspectral images at arbitrarily higher spatial or spectral resolutions. Theoretically, we prove the low-rank property and Lipschitz continuity of the proposed continuous low-rank factorization. Experimentally, our method significantly surpasses existing techniques and achieves user-desired resolutions without retraining the neural networks. Code is available at https://github.com/wangting1907/CLoRF-Fusion.
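To make the factorization concrete, below is a minimal PyTorch-style sketch of the idea described in the abstract: two coordinate networks produce rank-r spatial and spectral factors whose inner product reconstructs the continuous HSI, which can then be sampled on grids of any density. The class names, layer sizes, and the default rank are illustrative assumptions, not the authors' released implementation (see the repository linked above for that).

```python
# Minimal sketch of a continuous low-rank factorization (illustrative only;
# module names, widths, and the default rank are assumptions, not CLoRF's code).
import torch
import torch.nn as nn

class CoordMLP(nn.Module):
    """Coordinate MLP mapping continuous coordinates to r latent factors."""
    def __init__(self, in_dim, rank, hidden=256, depth=4):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        layers += [nn.Linear(d, rank)]
        self.net = nn.Sequential(*layers)

    def forward(self, coords):           # coords: (N, in_dim)
        return self.net(coords)          # (N, rank)

class ContinuousLowRankSketch(nn.Module):
    """Rank-r model X(p, lam) = <U(p), V(lam)> with continuous factors."""
    def __init__(self, rank=8):
        super().__init__()
        self.spatial = CoordMLP(in_dim=2, rank=rank)   # U: (x, y) -> R^r
        self.spectral = CoordMLP(in_dim=1, rank=rank)  # V: wavelength -> R^r

    def forward(self, xy, lam):
        U = self.spatial(xy)         # (num_pixels, r)
        V = self.spectral(lam)       # (num_bands, r)
        return U @ V.t()             # (num_pixels, num_bands) reconstruction

# Self-supervised fitting would compare this reconstruction, after spatial
# blurring/downsampling and spectral mixing, against the observed LR-HSI and
# HR-MSI. Once trained, querying denser xy or lam grids yields higher spatial
# or spectral resolutions without retraining the network weights.
```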