Philipp Langsteiner, Jan-Niklas Dihlmann, Hendrik P.A. Lensch
University of Tübingen
Demo video: https://github.com/user-attachments/assets/95c2114e-e33a-46ed-933a-3a2ec153f6ca
This is the official code for MatSpray, a framework for fusing 2D material world knowledge from diffusion models into 3D Gaussian Splatting geometry to obtain relightable assets with physically based materials. Our method leverages pretrained 2D diffusion-based material predictors to generate per-view material maps (base color, roughness, metallic) and integrates them into a 3D Gaussian representation via Gaussian ray tracing. A lightweight Neural Merger refines the estimates for multi-view consistency and physical accuracy, enabling high-quality relightable 3D reconstruction.
This codebase is built on top of Relightable 3D Gaussian by Gao et al. (ECCV 2024), with the OptiX ray-tracing renderer adapted from the OptiX 7 Course by Ingo Wald, and parts of the deferred rendering pipeline taken from SSS GS by Dihlmann et al. We thank the authors of these projects for making their code available.
- System Requirements
- Installation
- Dataset Preparation
- Rendering Ground Truth with Blender
- Environment Variables Reference
- Training Scripts Guide
- Running the Pipeline
- Evaluation
- Relighting and Composition
- GUI
- Verification Checklist
- Troubleshooting
- Citation
- GPU: NVIDIA GPU with compute capability >= 7.0 (RTX 2070 or newer; tested on RTX 3090 24 GB)
- RAM: 32 GB+ recommended
- Disk: ~50 GB free for datasets + outputs
| Dependency | Version | Notes |
|---|---|---|
| Linux | Ubuntu 20.04+ | Tested on Ubuntu 22.04 |
| NVIDIA Driver | >= 525 | Must support CUDA 11.8 |
| CUDA Toolkit | 11.8 | Download |
| Python | 3.10 | Other 3.x versions may work but are untested |
| CMake | >= 3.26 | Required for OptiX build |
| GCC / G++ | 11 or 12 | C++17 support required |
Install system packages (Ubuntu/Debian):
```bash
sudo apt update
sudo apt install -y git cmake ninja-build build-essential libglfw3-dev \
    python3-dev python3-venv bc
```

```bash
git clone <repo-url>
cd MA_Philipp_Langsteiner
```

All commands below assume you are in the repository root.
```bash
python3 -m venv venv
source venv/bin/activate
pip install --upgrade pip setuptools wheel
```

Add `source /absolute/path/to/MA_Philipp_Langsteiner/venv/bin/activate` to your `.bashrc` so it activates automatically.
Install PyTorch with CUDA 11.8 support:
```bash
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```

Verify:

```bash
python -c "import torch; print(f'PyTorch {torch.__version__}, CUDA: {torch.cuda.is_available()}')"
```

For other CUDA versions see PyTorch Get Started.

```bash
pip install -r requirements.txt
```

This installs all needed packages:
| Package | Purpose |
|---|---|
| `torch_scatter` | Scatter operations for 3DGS |
| `kornia` | Differentiable CV ops, depth-to-normal |
| `OpenEXR` / `pyexr` | EXR image I/O for HDR data |
| `lpips` / `scikit-image` | Perceptual quality metrics (LPIPS, PSNR, SSIM) |
| `dearpygui` | GUI visualization |
| `tensorboard` | Training monitoring |
| `numpy`, `scipy`, `opencv-python`, `pillow`, `matplotlib`, `plyfile`, `imageio`, `tqdm` | General utilities |
First, clone nvdiffrast (not bundled in this repo):
```bash
git clone https://github.com/NVlabs/nvdiffrast.git
```

Then build and install all CUDA extensions from the repository root:

```bash
pip install ./submodules/simple-knn
pip install ./bvh
pip install ./r3dg-rasterization
pip install ./nvdiffrast
pip install ./gs_sss_rasterization
```

If builds fail, ensure `CUDA_HOME` is set: `export CUDA_HOME=/usr/local/cuda`
The Optix/ directory contains a custom OptiX 7 ray-tracing renderer (SampleRenderer) used for intersection tracing during training and relighting. It requires three external dependencies: the OptiX SDK, the Slang shader compiler, and nanobind.
- Download OptiX 7.4.0 from NVIDIA OptiX (requires NVIDIA developer account).
- Run the self-extracting installer:
```bash
chmod +x NVIDIA-OptiX-SDK-7.4.0-linux64-x86_64.sh
./NVIDIA-OptiX-SDK-7.4.0-linux64-x86_64.sh --prefix=$HOME/optix
```

- Set the environment variable (add to `.bashrc`):

```bash
export OptiX_INSTALL_DIR=$HOME/optix/NVIDIA-OptiX-SDK-7.4.0-linux64-x86_64
```

- Verify:

```bash
ls $OptiX_INSTALL_DIR/include/optix.h
```

- Download Slang v2025.2.2 from GitHub Releases:
```bash
wget https://github.com/shader-slang/slang/releases/download/v2025.2.2/slang-2025.2.2-linux-x86_64.tar.gz
mkdir -p $HOME/slang
tar -xzf slang-2025.2.2-linux-x86_64.tar.gz -C $HOME/slang
```

- Set the environment variable (add to `.bashrc`):

```bash
export SLANG_DIR=$HOME/slang/slang-2025.2.2-linux-x86_64
```

- Verify:

```bash
$SLANG_DIR/bin/slangc --version
```

Install the Python build dependencies, then build the renderer:

```bash
pip install nanobind scikit-build-core
cd Optix
mkdir -p build && cd build
cmake .. -DOptiX_INSTALL_DIR=$OptiX_INSTALL_DIR -DCMAKE_BUILD_TYPE=Release
make -j$(nproc)
cd ../..
```

The built `SampleRenderer*.so` module lands in `Optix/build/`. The codebase automatically adds it to `sys.path` at runtime. Verify:

```bash
python -c "import sys; sys.path.insert(0, 'Optix/build'); import SampleRenderer; print('OK: SampleRenderer loaded')"
```

Blender renders ground truth material maps (base color, normals, roughness, metallic, depth) from `.blend` scene files.
```bash
# Download and extract (example for Blender 4.0)
wget https://download.blender.org/release/Blender4.0/blender-4.0.0-linux-x64.tar.xz
tar -xJf blender-4.0.0-linux-x64.tar.xz -C $HOME/
export BLENDER_BIN=$HOME/blender-4.0.0-linux-x64/blender
```

Or if Blender is installed system-wide, just set `export BLENDER_BIN=blender`.
```bash
sudo apt install -y colmap
```

Or build from source: COLMAP Installation.
These are only needed for the material/depth estimation stages (--run-rgb2x, --run-marigold, --run-diffusion-renderer):
| Repository | Env Variable | Purpose |
|---|---|---|
| RGB2X | `RGB2X_DIR` | Material property prediction from RGB |
| DiffusionRenderer | `DIFFUSION_RENDERER_DIR` | SVD-based material decomposition |
| Marigold | `MARIGOLD_DIR` | Monocular depth estimation |
| gaussian-splatting | `GS_METHOD_DIR` | Base 3DGS (only for the COLMAP external-GS path) |
Clone each and set its environment variable. Each has its own installation instructions.
Important: Both DiffusionRenderer and Gaussian Splatting require modifications to work with MatSpray. We provide ready-made patch files in the `patches/` directory. See the sections below and `patches/README.md` for details.
DiffusionRenderer is used to generate per-view material maps (base color, roughness, metallic, normals) from input images. These material maps serve as supervision for the training pipeline. You must run DiffusionRenderer before starting training (via the --run-diffusion-renderer flag or manually) so that the material maps are available in the dataset directory.
WARNING: Use the SVD version of DiffusionRenderer only. The COSMOS version does NOT work and will produce incorrect material predictions. Make sure you check out and set up the SVD-based variant.
Setup:
- Clone the DiffusionRenderer repository at the correct commit and apply our patches:
```bash
git clone https://github.com/nv-tlabs/diffusion-renderer.git "$DIFFUSION_RENDERER_DIR"
cd "$DIFFUSION_RENDERER_DIR"
git checkout 0a6d71d69d81cc5d9ab9a5599dee4216cb3fc237

# Apply MatSpray modifications to existing files
git apply /path/to/matspray/patches/diffusion-renderer-svd-modifications.patch

# Create the improved inference script used by our training pipeline
cp inference_svd_xrgb.py inference_svd_rgbx_improved.py
patch inference_svd_rgbx_improved.py < /path/to/matspray/patches/diffusion-renderer-inference-improved.patch
```

- Create a separate virtual environment for DiffusionRenderer (it has its own dependencies):
```bash
python3 -m venv venv_dr
source venv_dr/bin/activate
pip install -r requirements.txt
# Follow the repo's own installation instructions for model weights etc.
deactivate
```

- Set the environment variable:

```bash
export DIFFUSION_RENDERER_DIR=/path/to/diffusion-renderer
```

- The training scripts call DiffusionRenderer automatically when `--run-diffusion-renderer` is passed. It runs `inference_svd_rgbx_improved.py` with these defaults:
```bash
python inference_svd_rgbx_improved.py \
    --config configs/rgbx_inference.yaml \
    inference_input_dir="<dataset>/base/train/images_bg/" \
    inference_save_dir="<dataset>/base/train/" \
    inference_n_frames=20 \
    inference_n_steps=20 \
    model_passes="['basecolor','normal','metallic','roughness']" \
    inference_res="[512,512]" \
    chunk_mode="all"
```

The output material maps are written into the dataset's `train/` directory (e.g., `train/diffusionrenderer_baseColor/`, `train/diffusionrenderer_roughness/`, etc.) and are picked up automatically by `train.py` during the supervision stages.
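Before launching the supervision stages, it can help to verify these folders actually exist and are non-empty. A minimal sketch (the helper name and the default pass list are our own; adjust the path to your dataset):

```python
from pathlib import Path

def missing_material_maps(train_dir, passes=("baseColor", "roughness", "metallic")):
    """Return the diffusionrenderer_* folders that are absent or empty."""
    train_dir = Path(train_dir)
    missing = []
    for p in passes:
        folder = train_dir / f"diffusionrenderer_{p}"
        # Treat an empty folder the same as a missing one.
        if not folder.is_dir() or not any(folder.iterdir()):
            missing.append(folder.name)
    return missing

# Example: report what is still missing before starting train.py
print(missing_material_maps("/data/datasets/nerf_synthetic/lego/base/train"))
```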
If you use scripts that run external 3D Gaussian Splatting (e.g., run_gaussian_splatting_colmap.sh), you need to clone and patch the original repository:
```bash
git clone git@github.com:graphdeco-inria/gaussian-splatting.git "$GS_METHOD_DIR"
cd "$GS_METHOD_DIR"
git checkout 54c035f7834b564019656c3e3fcc3646292f727d

# Apply MatSpray modifications
git apply /path/to/matspray/patches/gaussian-splatting-modifications.patch
```

The patch makes the following changes:
- Masking fix (`train.py`): Alpha masks are applied to the ground-truth image (composited against the background) instead of zeroing out the rendered output. This prevents floaters in transparent regions.
- Rasterizer compatibility (`gaussian_renderer/__init__.py`): Gracefully handles different rasterizer API versions.
- PLY fallback (`scene/__init__.py`): Creates an input PLY from in-memory point clouds when no PLY file exists on disk (e.g., Blender datasets).
WARNING — Masking during external 3DGS training: If you use any external Gaussian Splatting implementation, make sure masks are applied to the ground-truth image only, not to the 3DGS rendered output. Masking the rendered output causes floaters. Our patch handles this correctly.
Download from Google Drive (provided by the NeRF authors).
Expected structure:
```
<ROOT_DIR>/                            # e.g., /data/datasets/nerf_synthetic/
├── lego/
│   ├── base/
│   │   ├── train/
│   │   │   ├── images/                # training RGB images
│   │   │   └── images_bg/             # images with background (after Blender rendering)
│   │   ├── test/
│   │   │   └── images/
│   │   ├── transforms_train.json
│   │   └── transforms_test.json
│   ├── blend_files/
│   │   └── lego.blend                 # Blender scene for GT rendering
│   └── env_maps/
│       ├── envmap3.exr
│       ├── envmap6.exr
│       ├── envmap12.exr
│       └── envmap24.exr
├── chair/
├── drums/
├── ...
```
Use script/navi_dataset_prep.bash to prepare from raw Navi data:
```bash
NAVI_RAW_DIR=/path/to/navi/v1.2/ \
NAVI_DATASET_DIR=/path/to/datasets/navi/ \
bash script/navi_dataset_prep.bash
```

This runs: image downsampling + masking -> COLMAP feature extraction -> matching -> mapping.
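The COLMAP portion of that chain corresponds roughly to the standard CLI sequence below (a sketch for orientation only; the actual script may pass different options, and the helper name is ours):

```python
def colmap_sfm_commands(base_dir):
    """Standard COLMAP SfM sequence: feature extraction -> matching -> mapping.
    COLMAP writes the sparse model into numbered subfolders (sparse/0, ...)."""
    db = f"{base_dir}/database.db"
    images = f"{base_dir}/images"
    return [
        ["colmap", "feature_extractor", "--database_path", db, "--image_path", images],
        ["colmap", "exhaustive_matcher", "--database_path", db],
        ["colmap", "mapper", "--database_path", db, "--image_path", images,
         "--output_path", f"{base_dir}/sparse"],
    ]

# Example: run each step in order
# import subprocess
# for cmd in colmap_sfm_commands("/path/to/datasets/navi/<object_name>/base"):
#     subprocess.run(cmd, check=True)
```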
Expected structure after preparation:
```
<ROOT_DIR>/                            # e.g., /data/datasets/navi/
├── <object_name>/
│   └── base/
│       ├── images/                    # input RGB images
│       ├── images_bg/                 # background-masked images
│       ├── sparse/0/                  # COLMAP sparse reconstruction
│       │   ├── cameras.bin
│       │   ├── images.bin
│       │   └── points3D.bin
│       ├── cameras.json
│       └── points3d.ply
├── blend_files/
│   └── <object_name>.blend
└── env_maps/
```
Download from here:
```
datasets/neilfpp/data_dtu/
├── DTU_scan24/
│   └── inputs/
│       ├── depths/
│       ├── images/
│       ├── model/
│       ├── normals/
│       ├── pmasks/
│       └── sfm_scene.json
├── ...
```
Download from here. Same structure as DTU under datasets/neilfpp/data_tnt/.
Download from Google Drive (provided by InvRender).
For multi-object composition, download the ground plane PLY from here and place it at ./point/ground.ply.
The Blender scripts in blender_scripts/ render ground truth material maps from .blend scene files. This step is needed when you want to train with GT supervision (the --render-gts flag in training scripts).
```bash
# Render training views + material AOVs
$BLENDER_BIN --background <dataset>/blend_files/<object>.blend \
    --python blender_scripts/blender_render_script_train.py \
    -- --object_name <object> --output_dir <dataset>/

# Render test views under the evaluation environment maps
$BLENDER_BIN --background <dataset>/blend_files/<object>.blend \
    --python blender_scripts/blender_render_script_test.py \
    -- --env_dir <dataset>/env_maps/ --results_path <dataset>/<object>
```

Output structure:

```
<object>/base/
├── train/
│   ├── images/                    # rendered RGB
│   ├── images_bg/                 # with background
│   ├── base_basecolor/            # base color AOV
│   ├── base_metallicroughness/    # metallic + roughness AOV
│   ├── base_normal/               # camera-space normals
│   ├── base_depth/                # depth maps
│   └── transforms_train.json
└── test/
    ├── images/
    └── ... (same AOVs as train)
```
For running Blender without a display (e.g., on a server):
```bash
export PYOPENGL_PLATFORM=egl
export CYCLES_CUDA_EXTRA_CFLAGS="-I${CUDA_HOME:-/usr/local/cuda}/include"
```

All configurable paths use environment variables with sensible defaults. Set them before running any script.
| Variable | Description | Example |
|---|---|---|
| `ROOT_DIR` | Dataset root directory | `/data/datasets/nerf_synthetic/` |
| `OUTPUT_DIR` | Output directory for results | `/data/results/nerf_synthetic/` |
| `CUDA_HOME` | CUDA Toolkit path | `/usr/local/cuda` |
| `OptiX_INSTALL_DIR` | OptiX SDK path | `$HOME/optix/NVIDIA-OptiX-SDK-7.4.0-linux64-x86_64` |
| `SLANG_DIR` | Slang compiler path | `$HOME/slang/slang-2025.2.2-linux-x86_64` |
| Variable | Description | Default |
|---|---|---|
| `BLENDER_BIN` | Blender binary | `blender` |
| `CUDA_VISIBLE_DEVICES` | GPU selection | all GPUs |
| `RGB2X_DIR` | RGB2X repo path | `/path/to/external/rgbx` |
| `MARIGOLD_DIR` | Marigold repo path | `/path/to/external/Marigold` |
| `DIFFUSION_RENDERER_DIR` | diffusion-renderer repo path | `/path/to/external/diffusion-renderer` |
| `GS_METHOD_DIR` | gaussian-splatting repo path | `/path/to/external/gaussian-splatting` |
| `RESULTS_DIR` | Used by `evaluation/` scripts | `/path/to/results/` |
```bash
# CUDA
export CUDA_HOME=/usr/local/cuda
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH

# OptiX + Slang
export OptiX_INSTALL_DIR=$HOME/optix/NVIDIA-OptiX-SDK-7.4.0-linux64-x86_64
export SLANG_DIR=$HOME/slang/slang-2025.2.2-linux-x86_64
export PATH=$SLANG_DIR/bin:$PATH

# Blender (if not system-installed)
export BLENDER_BIN=$HOME/blender/blender

# Datasets (adjust to your setup)
export ROOT_DIR=/data/datasets/nerf_synthetic/
export OUTPUT_DIR=/data/results/nerf_synthetic/
```

Ready-to-use scripts are provided in the `script/` folder; you do not need to write your own training loops. Set the environment variables (`ROOT_DIR`, `OUTPUT_DIR`, etc.), choose the script matching your dataset, select which stages to run via flags, and execute. The scripts handle the entire pipeline from ground truth rendering through training to relighting evaluation.
The script/ directory contains end-to-end pipeline scripts. Each one handles a specific dataset type and runs through: ground truth rendering, material estimation, 3DGS training, NeILF BRDF decomposition, supervision training, and relighting evaluation.
| Script | Dataset | Description |
|---|---|---|
| `create_gt_and_run_training_nerf.bash` | NeRF Synthetic | Lego, chair, drums, etc. Synthetic scenes with known geometry. |
| `create_gt_and_run_training_COLMAP.bash` | COLMAP / Navi | Real-world objects reconstructed with COLMAP. Supports pre-processed GS outputs. |
| `create_gt_and_run_training_aria.bash` | Aria | Aria dataset scenes. Expects external 3DGS PLY. |
| `create_gt_and_run_training_specular.bash` | Specular | Specular/reflective objects. Expects external 3DGS PLY. |
| `create_gt_and_run_training_stanford_orb.bash` | Stanford ORB | Stanford ORB objects (imported as OBJ in Blender). |
| `create_gt_and_run_training_LERF_CLEAN.sh` | LERF | LERF dataset scenes. |
| `navi_dataset_prep.bash` | Navi raw data | Downsamples images, applies masks, runs COLMAP SfM. |
| `run_gaussian_splatting_colmap.sh` | Any COLMAP | Runs external gaussian-splatting repo on a COLMAP dataset. |
| `run_gaussian_splatting_nerf.sh` | Any NeRF | Runs external gaussian-splatting repo on NeRF-format data. |
Each main training script (create_gt_and_run_training_*.bash) runs through these stages in order, controlled by flags:
```
1. --render-gts              Blender headless rendering of ground truth material maps
                             (base color, normals, roughness, metallic, depth)

2. --run-rgb2x               RGB2X material prediction from rendered images
   --run-marigold            Marigold monocular depth estimation
   --run-diffusion-renderer  Diffusion-based material decomposition
   (--srgb-to-linear)        Enable sRGB-to-linear conversion for the above

3. --run-base-3dgs           Stage 1: Train base 3D Gaussian Splatting (30k iterations)
                             Produces chkpnt30000.pth and point cloud

4. --run-base-neilf          Stage 2: Train NeILF model for BRDF decomposition (40k iterations)
                             Uses 3DGS checkpoint as initialization
                             Runs evaluation + relighting with 4 environment maps

5. --run-neilf-supervision   Stage 3: NeILF with material supervision losses
                             Uses RGB2X/diffusion-renderer material maps as supervision
                             Tests multiple loss weight configurations

6. --run-pure-supervision    Variant: pure supervision (no NeILF self-supervision)

7. --run-projected-average   Projected average baseline

8. --run-mlp                 Stage 4: Train MLP-based material model on top of deferred shading
                             (enabled by default)
```
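For reference, the conversion toggled by `--srgb-to-linear` in step 2 is the standard sRGB transfer function; a numpy sketch (the pipeline's own implementation may differ in details):

```python
import numpy as np

def srgb_to_linear(c):
    """Standard sRGB -> linear transfer function (IEC 61966-2-1), c in [0, 1]."""
    c = np.asarray(c, dtype=np.float64)
    # Linear segment for dark values, power curve above the threshold.
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

print(srgb_to_linear(0.5))  # ~0.214
```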
Important: Stage ordering matters. Stages 1-2 (rendering + material estimation) must complete before stages 3+ (training) can start, because training relies on the material maps as supervision input. If you run DiffusionRenderer separately (outside the script), make sure its output maps are in the dataset directory before launching training.
Masking warning for external Gaussian Splatting: If you use an external Gaussian Splatting implementation (not the one built into this framework) for the base 3DGS stage, make sure the mask is applied only to the ground truth image and not to the 3DGS rasterization output. Masking the rasterized output will cause the optimizer to push Gaussians into masked regions to minimize the loss, creating floaters (stray Gaussians in empty space). The correct approach is to mask only the GT side of the loss so the 3DGS output is compared against the masked ground truth.
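The effect is easy to see numerically. In this toy numpy sketch (not the actual training code), a floater leaks color into a background pixel; masking the GT side keeps that error in the loss, while masking the rendered side hides it:

```python
import numpy as np

# Two pixels: one foreground (mask=1), one background (mask=0).
gt     = np.array([0.8, 0.0])   # ground truth
render = np.array([0.8, 0.5])   # a floater leaks 0.5 into the background pixel
mask   = np.array([1.0, 0.0])
bg     = np.array([0.0, 0.0])   # background color

# Correct: composite GT against the background, compare the FULL render.
gt_comp = gt * mask + bg * (1.0 - mask)
loss_correct = np.abs(render - gt_comp).sum()              # floater is penalized

# Incorrect: masking the render too removes the floater from the loss entirely.
loss_wrong = np.abs(render * mask - gt_comp * mask).sum()

print(loss_correct, loss_wrong)  # 0.5 0.0
```

With the wrong variant, the optimizer sees zero error in masked regions and is free to push Gaussians there.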
Basic usage (NeRF Synthetic):
```bash
source venv/bin/activate
export ROOT_DIR=/data/datasets/nerf_synthetic/
export OUTPUT_DIR=/data/results/nerf_synthetic/

# Run everything from rendering to MLP training
bash script/create_gt_and_run_training_nerf.bash \
    --render-gts \
    --run-diffusion-renderer \
    --run-base-3dgs \
    --run-base-neilf \
    --run-neilf-supervision \
    --run-mlp
```

COLMAP dataset with pre-processed GS:
```bash
export ROOT_DIR=/data/datasets/navi/
export OUTPUT_DIR=/data/results/navi/

bash script/create_gt_and_run_training_COLMAP.bash \
    --use-preprocessed-gs \
    --gs-ply /path/to/gs_outputs/{scene}/point_cloud/iteration_30000/point_cloud.ply \
    --cameras-json /path/to/gs_outputs/{scene}/cameras.json \
    --run-base-3dgs \
    --run-base-neilf \
    --run-mlp
```

The `{scene}` template is automatically expanded to each scene name during iteration.
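Conceptually, the expansion is plain string substitution per scene; in Python terms (an illustration of what the bash script does, not the script itself):

```python
ply_template = "/path/to/gs_outputs/{scene}/point_cloud/iteration_30000/point_cloud.ply"

# One resolved path per entry in the script's scene list.
paths = [ply_template.format(scene=s) for s in ["lego", "chair"]]
print(paths[0])  # /path/to/gs_outputs/lego/point_cloud/iteration_30000/point_cloud.ply
```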
Navi dataset preparation:
```bash
NAVI_RAW_DIR=/path/to/navi/raw/ \
NAVI_DATASET_DIR=/path/to/datasets/navi/ \
bash script/navi_dataset_prep.bash
```

External Gaussian Splatting (COLMAP format):
```bash
bash script/run_gaussian_splatting_colmap.sh \
    --method-dir /path/to/gaussian-splatting \
    --data-dir /data/datasets/navi/object_name/base \
    --output-root /data/results/navi/object_name \
    --iters 30000
```

Each script has a `list=()` array near the top that controls which scenes to process. Edit this to select scenes:
```bash
# In create_gt_and_run_training_nerf.bash:
list=("lego")                               # single scene
list=("chair" "drums" "ficus" "hotdog")     # multiple scenes
list=("lego" "materials" "mic" "ship")      # all NeRF synthetic
```

The `models=()` array selects which material estimation method's outputs to use for supervision:
```bash
models=("diffusionrenderer")   # use diffusion-renderer outputs
models=("rgb2x")               # use RGB2X outputs
models=("base")                # use base (no external supervision)
```

Instead of using the full scripts, you can run each step manually:
Stage 1 -- Base 3DGS:
```bash
python train.py --eval \
    -s $ROOT_DIR/lego/base/ \
    -m $OUTPUT_DIR/lego/3dgs \
    --lambda_normal_render_depth 0.01 \
    --lambda_normal_smooth 0.01 \
    --lambda_mask_entropy 0.1 \
    --save_training_vis
```

Stage 2 -- NeILF (BRDF decomposition):
```bash
python train.py --eval \
    -s $ROOT_DIR/lego/base/ \
    -m $OUTPUT_DIR/lego/neilf \
    -c $OUTPUT_DIR/lego/3dgs/chkpnt30000.pth \
    -t neilf --sample_num 32 \
    --iterations 40000 \
    --position_lr_init 0.000016 \
    --position_lr_final 0.00000016 \
    --lambda_light 0.01 \
    --lambda_env_smooth 0.01
```

Stage 3 -- NeILF with Material Supervision:
```bash
python train.py --eval \
    -s $ROOT_DIR/lego/base/ \
    -m $OUTPUT_DIR/lego/neilf_supervised \
    -c $OUTPUT_DIR/lego/3dgs/chkpnt30000.pth \
    --base_color_folder /train/diffusionrenderer_baseColor/ \
    --roughness_folder /train/diffusionrenderer_roughness/ \
    --metallic_folder /train/diffusionrenderer_metallic/ \
    --lambda_base_color_buffer 1.0 \
    --lambda_roughness_buffer 1.0 \
    --lambda_metallic_buffer 1.0 \
    -t neilf --sample_num 32 \
    --iterations 40000
```

Novel View Synthesis:
```bash
# Stage 1 (3DGS)
python eval_nvs.py --eval \
    -m $OUTPUT_DIR/lego/3dgs \
    -c $OUTPUT_DIR/lego/3dgs/chkpnt30000.pth

# Stage 2 (NeILF)
python eval_nvs.py --eval \
    -m $OUTPUT_DIR/lego/neilf \
    -c $OUTPUT_DIR/lego/neilf/chkpnt40000.pth \
    -t neilf
```

Average Metrics Across Scenes:

```bash
RESULTS_DIR=/data/results/navi python evaluation/calculate_average_metrics.py
RESULTS_DIR=/data/results/navi python evaluation/calculate_average_timings.py
```

Object relighting:
```bash
python relighting.py \
    -co "$ROOT_DIR/lego/envmap12/" \
    --envmap_path "env_map/envmap12.exr" \
    --output "$OUTPUT_DIR/lego/neilf/envmap12" \
    --ply_path "$OUTPUT_DIR/lego/neilf/point_cloud/iteration_40000/point_cloud.ply" \
    --sample_num 64 \
    --video
```

Example usage can be found in the `script/` folder in the `create_gt_and_...` scripts, which call `relighting.py` with the correct dataset directories and arguments.
Visualize trained models interactively:
```bash
# 3D Gaussian Splatting
python gui.py -m $OUTPUT_DIR/lego/3dgs -t render

# Relightable 3D Gaussian (NeILF)
python gui.py -m $OUTPUT_DIR/lego/neilf -t neilf
```

You can also add `--gui` to any `train.py` command to monitor training live.
Run through this to confirm everything works:
```bash
source venv/bin/activate

# PyTorch + CUDA
python -c "import torch; assert torch.cuda.is_available(); print('OK: PyTorch', torch.__version__)"

# CUDA extensions
python -c "from simple_knn._C import distCUDA2; print('OK: simple-knn')"
python -c "import bvh; print('OK: bvh')"
python -c "from diff_gaussian_rasterization import GaussianRasterizationSettings; print('OK: r3dg-rasterization')"
python -c "import nvdiffrast.torch; print('OK: nvdiffrast')"

# OptiX renderer
python -c "import sys; sys.path.insert(0, 'Optix/build'); import SampleRenderer; print('OK: SampleRenderer')"

# Python dependencies
python -c "import kornia, lpips, OpenEXR, pyexr, skimage; print('OK: all Python deps')"

# Entrypoints
python train.py --help
python eval_nvs.py --help
python relighting.py --help
```

Check that the CUDA versions match:

```bash
nvcc --version                                       # CUDA Toolkit
nvidia-smi                                           # driver CUDA version
python -c "import torch; print(torch.version.cuda)"  # PyTorch CUDA version
```

All must be compatible (e.g., driver supports >= 11.8, toolkit is 11.8, PyTorch was built for 11.8).
Ensure OptiX_INSTALL_DIR is set and valid:
```bash
echo $OptiX_INSTALL_DIR
ls $OptiX_INSTALL_DIR/include/optix.h
```

Ensure `SLANG_DIR` is set:

```bash
echo $SLANG_DIR
ls $SLANG_DIR/bin/slangc
```

- Check `CUDA_HOME`: `echo $CUDA_HOME && ls $CUDA_HOME/bin/nvcc`
- GCC compatibility: CUDA 11.8 supports GCC up to 11
- Explicit arch list: `export TORCH_CUDA_ARCH_LIST="7.0;7.5;8.0;8.6"`
```bash
export PYOPENGL_PLATFORM=egl
export CYCLES_CUDA_EXTRA_CFLAGS="-I${CUDA_HOME:-/usr/local/cuda}/include"
$BLENDER_BIN --version
```

- Check the `.so` exists: `ls Optix/build/SampleRenderer*.so`
- Ensure `LD_LIBRARY_PATH` includes `$CUDA_HOME/lib64` and `Optix/build/`
- Rebuild: `cd Optix/build && make -j && cd ../..`

If `torch_scatter` fails to build, install a prebuilt wheel matching your PyTorch version:

```bash
pip install torch_scatter -f https://data.pyg.org/whl/torch-$(python -c "import torch; print(torch.__version__.split('+')[0])")+cu118.html
```

If you find this work useful, please cite:
```bibtex
@inproceedings{matspray,
    author    = {Langsteiner, Philipp and Dihlmann, Jan-Niklas and Lensch, Hendrik P.A.},
    title     = {MatSpray: Fusing 2D Material World Knowledge on 3D Geometry},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2026}
}
```

This codebase builds on several excellent projects:
- Relightable 3D Gaussian (Gao et al., ECCV 2024) -- the core framework for relightable Gaussian Splatting that this code is based on.
- OptiX 7 Course (Wald, Siggraph 2019/2020) -- the foundation for the OptiX ray-tracing renderer in the `Optix/` directory.
- SSS GS (Dihlmann et al., NeurIPS 2024) -- parts of our deferred rendering pipeline are adapted from their subsurface scattering Gaussian Splatting work.
- DiffusionRenderer (Liang, Wang et al., CVPR 2025 Oral) -- the SVD-based material predictor used to generate per-view PBR material maps for supervision.
```bibtex
@inproceedings{R3DG2024,
    author    = {Gao, Jian and Gu, Chun and Lin, Youtian and Li, Zhihao and Zhu, Hao and Cao, Xun and Zhang, Li and Yao, Yao},
    title     = {Relightable 3D Gaussians: Realistic Point Cloud Relighting with BRDF Decomposition and Ray Tracing},
    booktitle = {European Conference on Computer Vision (ECCV)},
    year      = {2024},
}

@inproceedings{sss_gs,
    author    = {Dihlmann, Jan-Niklas and Engelhardt, Andreas and Majumdar, Arjun and Braun, Raphael and Lensch, Hendrik P.A.},
    title     = {Subsurface Scattering for Gaussian Splatting},
    booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
    year      = {2024},
}

@inproceedings{DiffusionRenderer,
    author    = {Ruofan Liang and Zan Gojcic and Huan Ling and Jacob Munkberg and
                 Jon Hasselgren and Zhi-Hao Lin and Jun Gao and Alexander Keller and
                 Nandita Vijaykumar and Sanja Fidler and Zian Wang},
    title     = {DiffusionRenderer: Neural Inverse and Forward Rendering with Video Diffusion Models},
    booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2025},
}
```