# PanoDiffusion
**Repository Path**: ItalianSCLov/PanoDiffusion
## Basic Information
- **Project Name**: PanoDiffusion
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: BSD-3-Clause
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2024-06-21
- **Last Updated**: 2024-06-21
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
**PanoDiffusion: 360-degree Panorama Outpainting via Diffusion**

Tianhao Wu · Chuanxia Zheng · Tat-Jen Cham

ICLR 2024
## Setup
### Installation
This code has been tested with Python 3.8.5, PyTorch 1.7.0, and CUDA 11.0 on a V100 GPU.

First download the code and our [pretrained models](https://drive.google.com/file/d/1xSL_Qr7VYQRItxPYLw0C7qdcRUr2bhdq/view?usp=drive_link). The download should include checkpoints for the RGB and depth VQ models, the LDM, and the RefineNet model.
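Based on the checkpoint paths used in the inference command below, the unpacked download is expected to look roughly like this; the RGB/depth VQ file names are not spelled out in this README, so the last two entries are an assumption:

```
PanoDiffusion/pretrain_model/
├── ldm/
│   └── ldm.ckpt              # latent diffusion model checkpoint
├── refinenet/
│   └── refinenet.pth.tar     # RefineNet checkpoint
├── rgb_vq/                   # (assumed name) RGB VQ model checkpoint
└── depth_vq/                 # (assumed name) depth VQ model checkpoint
```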
```bash
git clone https://github.com/PanoDiffusion/PanoDiffusion.git
cd PanoDiffusion
conda env create -f environment.yml
```
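The environment name is defined by the `name:` field inside `environment.yml`; assuming it is `panodiffusion` (an assumption, check the file), activate it and verify that PyTorch can see the GPU before running inference:

```bash
conda activate panodiffusion   # environment name is an assumption; see environment.yml
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```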
### Play with PanoDiffusion
We have already prepared some images and masks under the `example` folder (a sketch of the expected layout follows the commands below). To test the model, simply run:
```bash
python inference.py \
    --indir PanoDiffusion/example \
    --outdir PanoDiffusion/example/output \
    --ckpt PanoDiffusion/pretrain_model/ldm/ldm.ckpt \
    --config PanoDiffusion/config/outpainting.yaml \
    --refinenet_ckpt PanoDiffusion/pretrain_model/refinenet/refinenet.pth.tar
```

or

```bash
bash inference.sh
```
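The exact file naming inside `example` is determined by the repository's data loader; one plausible layout, with purely illustrative file names, pairs each partial panorama with its outpainting mask:

```
PanoDiffusion/example/
├── input_0.png        # (illustrative name) partial 360-degree panorama
├── input_0_mask.png   # (illustrative name) mask marking the region to outpaint
└── output/            # created by inference.py via --outdir
```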
The results will be saved in the `example/output` folder. Diffusion sampling is stochastic, so each run produces a different outpainting result.
## Citation
If you find our code or paper useful, please cite our work.
```BibTeX
@inproceedings{wu2023panodiffusion,
  title={PanoDiffusion: 360-degree Panorama Outpainting via Diffusion},
  author={Wu, Tianhao and Zheng, Chuanxia and Cham, Tat-Jen},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024}
}
```