- Installation and running instructions
- Datasets
- Training
- Inference code (FID/LPIPS)
- Pre-trained checkpoints
- Editing
 
We recommend using conda to manage the environment. To create a new environment, run:

```bash
conda create -n tuvf python=3.9.16 -y
conda activate tuvf
```

Then install the other dependencies:

```bash
# We prefer to install PyTorch first.
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113
# Then install PyTorch3D. This can take several minutes. Make sure that your compilation CUDA version and runtime CUDA version match.
pip install "git+https://github.com/facebookresearch/pytorch3d.git"
# Install the remaining dependencies.
pip install -r requirements.txt
```

Finally, change the `python_bin` setting in the `configs/env` file to match the location of your Python installation. If you are using conda as described above, the path will look something like `/home/USER/miniconda3/envs/tuvf/bin/python`.
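For reference, the relevant entry is a single key pointing at the interpreter. A sketch, assuming a YAML-style config file (check the actual `configs/env` file for the exact key names and format):

```yaml
# configs/env (sketch): point python_bin at your environment's interpreter
python_bin: /home/USER/miniconda3/envs/tuvf/bin/python
```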
We follow Texturify's dataset format. To download and extract our dataset, run the commands below. Please ensure that you have the pigz tool installed, as it is required for extraction.

```bash
mkdir CADTextures
cd CADTextures
# CompCars
wget https://huggingface.co/datasets/a8cheng/TUVF/resolve/main/CompCars.tar.gz
tar -I pigz -xvf CompCars.tar.gz -C ./
# PhotoShape Straight
wget https://huggingface.co/datasets/a8cheng/TUVF/resolve/main/Photoshape.tar.gz
tar -I pigz -xvf Photoshape.tar.gz -C ./
```

The extracted files should follow the folder structure below:
├── CADTextures
│   ├── CompCars
│       ├── exemplars_highres
│       ├── exemplars_highres_mask
│       ├── filelist
│       ├── pretrain
│       ├── shapenet_psr
│   ├── Photoshape
│       ├── straight
│       ├── straight_mask
│       ├── filelist
│       ├── pretrain
│       ├── shapenet_psr
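After extraction, you can sanity-check the layout with a short script. This is a hypothetical helper, not part of the repo; the expected subfolder names are taken from the tree above:

```python
import os

# Expected subfolders per dataset, as listed in the folder structure above.
EXPECTED = {
    "CompCars": ["exemplars_highres", "exemplars_highres_mask",
                 "filelist", "pretrain", "shapenet_psr"],
    "Photoshape": ["straight", "straight_mask",
                   "filelist", "pretrain", "shapenet_psr"],
}

def missing_subdirs(root: str, dataset: str) -> list:
    """Return the expected subdirectories that are absent under root/dataset."""
    base = os.path.join(root, dataset)
    return [d for d in EXPECTED[dataset]
            if not os.path.isdir(os.path.join(base, d))]

# e.g. missing_subdirs("CADTextures", "CompCars") should be empty
# once everything has been extracted.
```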
To launch training, run:

```bash
# train TUVF on CompCars
python src/infra/launch.py dataset=compcars dataset.resolution=512 num_gpus=8 training=p128_60_5000 training.batch_size=160 model=canograf exp_suffix=WHATEVER_NAME
# train TUVF on Photoshape
python src/infra/launch.py dataset=photoshape dataset.resolution=512 num_gpus=8 training=p128_60_5000 training.batch_size=160 model=canograf exp_suffix=WHATEVER_NAME
```

To configure the dataset path, you have two options:
- Modify the dataset path directly in the `configs/dataset/DATASET_NAME` file:
  ```yaml
  path: /YOUR_PATH/CADTextures/DATASET_NAME/
  ```
- Alternatively, specify the dataset path as a command-line argument:
  ```bash
  dataset.path=YOUR_PATH/CADTextures/DATASET_NAME
  ```

To run the evaluation for a checkpoint, use the following command. The script preprocesses real samples, generates synthesized views, and evaluates FID and KID. It then evaluates LPIPS_g and LPIPS_t.
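For context, FID is the Fréchet distance between Gaussian fits of the real and generated feature distributions. A minimal sketch for the one-dimensional case (illustrative only; the evaluation scripts use the full matrix form over deep image features):

```python
import math

def frechet_distance_1d(mu1: float, var1: float,
                        mu2: float, var2: float) -> float:
    """Frechet distance between two 1-D Gaussians N(mu1, var1), N(mu2, var2).

    The full FID replaces the variances with covariance matrices and the
    scalar square root with a matrix square root.
    """
    return (mu1 - mu2) ** 2 + var1 + var2 - 2.0 * math.sqrt(var1 * var2)

# Identical distributions have zero distance; shifting one mean by 2
# adds 4 to the distance.
```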
```bash
# test TUVF on CompCars
python scripts/evaluate_cars.py ckpt.network_pkl=YOURPATH/CHECKPOINT.pkl dataset_path=YOURPATH/CADTextures/CompCars output_dir=YOURPATH
# test TUVF on Photoshape
python scripts/evaluate_chairs.py ckpt.network_pkl=YOURPATH/CHECKPOINT.pkl dataset_path=YOURPATH/CADTextures/Photoshape output_dir=YOURPATH
```

You can also pass the following arguments to evaluate our released checkpoints:

```bash
# CompCars checkpoint
ckpt.network_pkl=https://huggingface.co/datasets/a8cheng/TUVF/resolve/main/checkpoints/cars.pkl
# Photoshape checkpoint
ckpt.network_pkl=https://huggingface.co/datasets/a8cheng/TUVF/resolve/main/checkpoints/chairs.pkl
```

To run the editing demo (finetuning), use:

```bash
python scripts/finetune.py demo_dir=/home/anjie/Downloads/CADTextures/finetune_demo dataset_path=/home/anjie/Downloads/CADTextures/CompCars
```

We have used code snippets from different repositories, including EpiGRAF, NeuMesh, Texturify, and ShapeAsPoints. We would like to acknowledge and thank the authors of these repositories for their excellent work.
@inproceedings{cheng2023tuvf,
    title     = {TUVF: Learning Generalizable Texture UV Radiance Fields},
    author    = {Cheng, An-Chieh and Li, Xueting and Liu, Sifei and Wang, Xiaolong},
    booktitle = {ICLR},
    year      = {2024}
}
