# Parallax-Tolerant Unsupervised Deep Image Stitching (UDIS++) [[paper](https://arxiv.org/abs/2302.08207)]

Lang Nie¹, Chunyu Lin¹, Kang Liao¹, Shuaicheng Liu², Yao Zhao¹

¹ Institute of Information Science, Beijing Jiaotong University

² School of Information and Communication Engineering, University of Electronic Science and Technology of China

![image](https://github.com/nie-lang/UDIS2/blob/main/fig1.png)

## Dataset (UDIS-D)
We use the UDIS-D dataset to train and evaluate our method. Please refer to [UDIS](https://github.com/nie-lang/UnsupervisedDeepImageStitching) for more details about this dataset.

## Code
#### Requirement
* numpy 1.19.5
* pytorch 1.7.1
* scikit-image 0.15.0
* tensorboard 2.9.0

We implement this work on Ubuntu with an RTX 3090Ti and CUDA 11. Refer to [environment.yml](https://github.com/nie-lang/UDIS2/blob/main/environment.yml) for more details.

#### How to run it
Similar to UDIS, we implement this solution in two stages:
* Stage 1 (unsupervised warp): please refer to [Warp/readme.md](https://github.com/nie-lang/UDIS2/blob/main/Warp/readme.md).
* Stage 2 (unsupervised composition): please refer to [Composition/readme.md](https://github.com/nie-lang/UDIS2/blob/main/Composition/readme.md).

## Meta
If you have any questions about this project, please feel free to drop me an email.

NIE Lang -- nielang@bjtu.edu.cn

```
@inproceedings{nie2023parallax,
  title={Parallax-Tolerant Unsupervised Deep Image Stitching},
  author={Nie, Lang and Lin, Chunyu and Liao, Kang and Liu, Shuaicheng and Zhao, Yao},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={7399--7408},
  year={2023}
}
```

## References
[1] L. Nie, C. Lin, K. Liao, M. Liu, and Y. Zhao, "A view-free image stitching network based on global homography," Journal of Visual Communication and Image Representation, p. 102950, 2020.

[2] L. Nie, C. Lin, K. Liao, and Y. Zhao, "Learning edge-preserved image stitching from multi-scale deep homography," Neurocomputing, vol. 491, pp. 533-543, 2022.

[3] L. Nie, C. Lin, K. Liao, S. Liu, and Y. Zhao, "Unsupervised deep image stitching: Reconstructing stitched features to images," IEEE Transactions on Image Processing, vol. 30, pp. 6184-6197, 2021.

[4] L. Nie, C. Lin, K. Liao, S. Liu, and Y. Zhao, "Deep rectangling for image stitching: A learning baseline," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 5740-5748.
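To make the two-stage design concrete, here is a minimal plain-NumPy sketch of the pipeline shape: Stage 1 is stood in for by a fixed global homography warp with nearest-neighbor sampling, and Stage 2 by simple averaging in the overlap. All function names here are illustrative assumptions, not this repository's API; the actual method learns the warp and the composition (see the per-stage readmes).

```python
import numpy as np

def warp_with_homography(img, H, out_shape):
    """Stage-1 stand-in: inverse-warp a grayscale image with a 3x3 homography.

    For each output pixel, H^-1 gives the source location; nearest-neighbor
    sampling fills the canvas, and `valid` marks pixels that landed inside img.
    """
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # 3 x N homogeneous
    src = np.linalg.inv(H) @ pts
    src = src[:2] / src[2]                                    # dehomogenize
    sx = np.round(src[0]).astype(int).reshape(h, w)
    sy = np.round(src[1]).astype(int).reshape(h, w)
    valid = (sx >= 0) & (sx < img.shape[1]) & (sy >= 0) & (sy < img.shape[0])
    warped = np.zeros(out_shape, dtype=float)
    warped[valid] = img[sy[valid], sx[valid]]
    return warped, valid

def compose(reference, warped, warped_valid):
    """Stage-2 stand-in: average in the overlap, keep the reference elsewhere.

    Assumes the reference covers the whole canvas; the real composition stage
    instead predicts masks that hide parallax artifacts in the overlap.
    """
    out = reference.astype(float).copy()
    out[warped_valid] = (out[warped_valid] + warped[warped_valid]) / 2.0
    return out

if __name__ == "__main__":
    img = np.arange(16.0).reshape(4, 4)
    warped, valid = warp_with_homography(img, np.eye(3), (4, 4))
    stitched = compose(img, warped, valid)
    print(stitched)
```

With the identity homography, the warp reproduces the input exactly, so the composed result equals the reference; with a translation, only the overlapping columns are blended while the rest of the canvas stays empty or untouched.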