The University of Hong Kong | VAST | Beihang University | Tsinghua University
TEXGen is a feed-forward texture generation model that diffuses the albedo texture map directly in the UV domain.
## :rocket: :rocket: :rocket: **News**
- **[2024-12-15]**: Release the inference code.
## Requirements
The training process requires at least one GPU with more than 40GB of VRAM. We tested the whole pipeline on an NVIDIA A100; other GPUs are untested but may work. For testing (inference) only, a GPU with 24GB of VRAM is sufficient.
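To check how much VRAM your machine has before running, a quick sketch (`nvidia-smi` ships with the NVIDIA driver; the guard keeps the snippet harmless on machines without it):

```shell
# List each GPU's name and total VRAM; skip gracefully if no NVIDIA driver is present.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=name,memory.total --format=csv
else
  echo "nvidia-smi not found; install the NVIDIA driver first"
fi
```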
## Environment
#### Docker Image
For convenience, we recommend using our prebuilt Docker image to run TEXGen.
```shell
# --gpus all exposes the host GPUs inside the container
# (requires the NVIDIA Container Toolkit on the host)
docker run -it --gpus all yuanze1024/texgen_release:v1 bash
```
#### From Scratch
Note that building the environment from scratch can be tricky, so we strongly recommend using our Docker image. Alternatively, you can build the environment yourself:
```shell
apt-get install libgl1 libglib2.0-0 libsm6 libxrender1 libxext6 libssl-dev build-essential g++ libboost-all-dev libsparsehash-dev git-core perl libegl1-mesa-dev libgl1-mesa-dev -y
conda create -n texgen python=3.10 -y
conda activate texgen
conda install ninja -y
conda install -c "nvidia/label/cuda-11.8.0" cuda-toolkit -y
conda install pytorch==2.1.0 torchvision==0.16.0 pytorch-cuda=11.8 -c pytorch -c nvidia -y
conda install h5py pyyaml -c anaconda -y
conda install sharedarray tensorboard tensorboardx yapf addict einops scipy plyfile termcolor timm gxx=11.1.0 lightning -c conda-forge -y
conda install pytorch-cluster pytorch-scatter pytorch-sparse -c pyg -y
pip install -r requirements.txt
```
## Usage
We provide the example testing data in `assets/models`. You can organize your own customized data as below:
```shell
$YOUR_DATA_PATH
├── 34                                   # the first two characters of the model id
│   └── 3441609f539b46b38e7ab1213660cf3e # the unique id of a 3D model
│       ├── model.mtl
│       ├── model.obj
│       └── model.png                    # albedo texture map
```
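As a sketch, the layout above can be created for a custom model like this (the `MODEL_ID` value and the `my_data` directory are placeholders for illustration, not names TEXGen requires):

```shell
# Derive the two-character prefix directory from the model id (bash syntax).
MODEL_ID=3441609f539b46b38e7ab1213660cf3e
PREFIX=${MODEL_ID:0:2}   # first two characters of the model id, e.g. "34"
mkdir -p "my_data/$PREFIX/$MODEL_ID"
# Then copy your mesh and texture into that directory:
# cp model.obj model.mtl model.png "my_data/$PREFIX/$MODEL_ID/"
```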
For the model index list, see `assets/input_list/test_input.jsonl` for an example; the `result` field holds the textual prompt.
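A hedged sketch of writing one JSONL entry follows. Only the `result` field is confirmed by the source; the `id` field and prompt text are assumptions for illustration, so check `assets/input_list/test_input.jsonl` for the real schema:

```shell
# Hypothetical entry; field names other than "result" are assumed, not documented.
cat > my_input.jsonl <<'EOF'
{"id": "3441609f539b46b38e7ab1213660cf3e", "result": "a wooden chair with a red cushion"}
EOF
```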
#### Inference
For sanity checking, you can run the following code snippet.
```shell
CHECKPOINT_PATH="assets/checkpoints/texgen_v1.ckpt"
# assume single gpu
python launch.py --config configs/texgen_test.yaml --test --gpu 0 data.eval_scene_list="assets/input_list/test_input.jsonl" exp_root_dir=outputs_test name=test tag=test system.weights=$CHECKPOINT_PATH
```
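Before launching, it can help to confirm that the checkpoint and scene list actually exist; a minimal pre-flight sketch using the paths from the command above:

```shell
# Check the two input files the launch command depends on.
CHECKPOINT_PATH="assets/checkpoints/texgen_v1.ckpt"
SCENE_LIST="assets/input_list/test_input.jsonl"
missing=0
for f in "$CHECKPOINT_PATH" "$SCENE_LIST"; do
  if [ ! -f "$f" ]; then
    echo "missing: $f" >&2
    missing=1
  fi
done
echo "missing=$missing"   # 0 means both files are in place
```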
The results will be written under the directory specified by `exp_root_dir` (here, `outputs_test`).