34 changes: 34 additions & 0 deletions Dockerfile
@@ -0,0 +1,34 @@
FROM ghcr.io/nerfstudio-project/nerfstudio

# Install system dependencies
RUN apt-get update && \
apt-get install -y git gcc-10 g++-10 nvidia-cuda-toolkit ninja-build python3-pip wget && \
rm -rf /var/lib/apt/lists/*

# Set environment variables to use gcc-10
ENV CC=gcc-10
ENV CXX=g++-10
ENV CUDA_HOME="/usr/local/cuda"
ENV TORCH_CUDA_ARCH_LIST="7.5;8.0;8.6+PTX"

# Clone and install QED-Splatter. Note that ENV does not evaluate shell
# substitutions, so the Torch CMake prefix is resolved at build time and
# passed to pip inline instead.
RUN git clone https://github.com/leggedrobotics/qed-splatter.git /apps/qed-splatter
RUN CMAKE_PREFIX_PATH="$(python -c 'import torch; print(torch.utils.cmake_prefix_path)')" \
    pip install /apps/qed-splatter

# Register QED-Splatter with nerfstudio
RUN ns-install-cli

# Expose the system nvcc under CUDA_HOME so CUDA extension builds can locate it
RUN mkdir -p /usr/local/cuda/bin && ln -sf /usr/bin/nvcc /usr/local/cuda/bin/nvcc

# Install additional Python dependencies
RUN pip install git+https://github.com/rmbrualla/pycolmap@cc7ea4b7301720ac29287dbe450952511b32125e
RUN pip install git+https://github.com/rahul-goel/fused-ssim@1272e21a282342e89537159e4bad508b19b34157
RUN pip install nerfview pyntcloud

# Pre-download AlexNet pretrained weights to prevent runtime downloading
RUN mkdir -p /root/.cache/torch/hub/checkpoints && \
wget https://download.pytorch.org/models/alexnet-owt-7be5be79.pth \
-O /root/.cache/torch/hub/checkpoints/alexnet-owt-7be5be79.pth

# Set working directory
WORKDIR /workspace
118 changes: 118 additions & 0 deletions README.md
@@ -32,3 +32,121 @@ To train the new method, use the following command:
```
ns-train qed-splatter --data [PATH]
```

## Pruning Extension

The pruning extension provides tools to reduce the number of Gaussians in order to improve rendering speed. There are two main types of pruners available:

- **Soft pruners** gradually reduce the number of Gaussians during training.
- **Hard pruners** are post-processing tools applied after training is complete.

Each pruner computes a *pruning score* to evaluate the importance of individual Gaussians. The least important Gaussians are then removed.

Currently, two hard pruning scripts are available: `rgb_hard_pruner` and `depth_hard_pruner`.
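
In essence, hard pruning with ratio *r* sorts the Gaussians by their pruning score and drops the lowest-scoring fraction *r*. A minimal sketch of that selection step (the scores here are placeholders — the actual scripts derive them from the RGB or depth loss):

```python
def hard_prune(scores, pruning_ratio):
    """Return sorted indices of the Gaussians to keep after hard pruning.

    scores: per-Gaussian importance scores (higher = more important).
    pruning_ratio: fraction of Gaussians to remove (0.0 keeps everything).
    """
    n_keep = len(scores) - int(len(scores) * pruning_ratio)
    # Rank indices by score, descending, then keep the top n_keep.
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return sorted(order[:n_keep])
```

With `--pruning-ratio 0.1`, `n_keep` works out to 90% of the Gaussians, matching the argument descriptions below.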

### RGB_hard_pruner
This pruner computes a pruning score from the RGB loss and uses it to hard-prune the model.
```
python3 RGB_hard_pruner.py default --data-dir datasets/park --ckpt results/park/step-000029999.ckpt --pruning-ratio 0.1 --result-dir output

--eval-only (only evaluates, no saving, no pruning)
--pruning-ratio 0.0 (no pruning, saved in new format)
--output-format (ply (default), ckpt (nerfstudio), pt (gsplat))
```

## 📥 Required Arguments

| Argument | Description |
|-------------------|-----------------------------------------------------------------------------|
| `default` | Specifies the run configuration. |
| `--data-dir`      | Path to the directory containing `transforms.json` (camera poses and intrinsics) and images (RGB, depth). |
| `--ckpt` | Path to the pretrained model checkpoint (e.g., `results/park/step-XXXXX.ckpt`). |
| `--pruning-ratio` | Float between `0.0` and `1.0`. Proportion of Gaussians to prune. Example: `0.1` prunes 10% (keeps 90%). |
| `--result-dir` | Directory where the output (pruned model) will be saved. |


## Input Format

The pruner accepts checkpoints in multiple formats; the format is detected automatically from the file extension.
- `ply` : expects Nerfstudio-format transforms.
- `ckpt` : expects Nerfstudio-format transforms.
- `pt` : expects gsplat-format transforms.
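
Extension-based detection could be sketched like this (the function name and mapping are illustrative, not the pruner's actual code):

```python
from pathlib import Path

# Map checkpoint extensions to the transforms convention they imply.
FORMATS = {".ply": "nerfstudio", ".ckpt": "nerfstudio", ".pt": "gsplat"}

def detect_transform_format(ckpt_path):
    """Infer the expected transforms convention from the checkpoint extension."""
    suffix = Path(ckpt_path).suffix
    if suffix not in FORMATS:
        raise ValueError(f"Unsupported checkpoint format: {suffix!r}")
    return FORMATS[suffix]
```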



### GSPlat Dataset Format

This repository expects datasets to be structured in a COLMAP-like format, which includes camera parameters, image poses, and optionally 3D points. This format is commonly used for 3D reconstruction and novel view synthesis tasks.

#### 📁 Folder Structure

Your dataset should be organized like this:
```
data_dir/
├── images/ # All input images
│ ├── img1.jpg
│ ├── img2.png
│ └── ...
├── sparse/ # Sparse reconstruction data (from COLMAP)
│ ├── cameras.bin # Camera intrinsics
│ ├── images.bin # Image poses (extrinsic) and filenames
│ └── points3D.bin # Optional: 3D point cloud
```
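
A quick sanity check that a dataset directory matches this layout might look like the following (a stdlib-only sketch; `validate_colmap_dir` is not part of the repository):

```python
from pathlib import Path

def validate_colmap_dir(data_dir):
    """Return the required COLMAP-style entries missing from data_dir."""
    root = Path(data_dir)
    # points3D.bin is optional, so it is not checked here.
    required = ["images", "sparse/cameras.bin", "sparse/images.bin"]
    return [entry for entry in required if not (root / entry).exists()]
```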

### Nerfstudio Dataset Format

#### 🔧 `transforms.json` File

This file must include the following:

- Intrinsic camera parameters:
- `"fl_x"`, `"fl_y"`: focal lengths
- `"cx"`, `"cy"`: principal point
- `"w"`, `"h"`: image dimensions

- A list of frames, each containing:
- `file_path`: path to the RGB image (relative to `your_dataset/`)
- `depth_file_path`: path to the depth map (relative to `your_dataset/`)
- `transform_matrix`: 4x4 camera-to-world matrix

**Example:**
```json
{
"w": 1920,
"h": 1080,
"fl_x": 2198.997802734375,
"fl_y": 2198.997802734375,
"cx": 960.0,
"cy": 540.0,
"k1": 0,
"k2": 0,
"p1": 0,
"p2": 0,
"frames": [
{
"file_path": "images/frame_0000.png",
"depth_file_path": "depths/frame_0000.png",
"transform_matrix": [[1,0,0,0], [0,1,0,0], [0,0,1,0], [0,0,0,1]]
},
{
"file_path": "images/frame_0001.png",
"depth_file_path": "depths/frame_0001.png",
"transform_matrix": [[0,2,0,0], [0,1,3,0], [0,0,1,0], [0,5,0,1]]
}
]
}
```
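
Reading such a file into per-frame records can be sketched as follows (a stdlib-only illustration, not the pruner's actual data loader):

```python
import json

def load_transforms(path):
    """Load a Nerfstudio-style transforms.json into a list of frame dicts."""
    with open(path) as f:
        meta = json.load(f)
    # Shared intrinsics apply to every frame.
    intrinsics = {k: meta[k] for k in ("w", "h", "fl_x", "fl_y", "cx", "cy")}
    frames = []
    for frame in meta["frames"]:
        frames.append({
            "rgb": frame["file_path"],
            "depth": frame.get("depth_file_path"),  # optional
            "c2w": frame["transform_matrix"],       # 4x4 camera-to-world
            **intrinsics,
        })
    return frames
```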


### Depth_hard_pruner
This pruner computes a pruning score from the depth loss and uses it to hard-prune the model. It works analogously to the RGB hard pruner, though not all of its features are available.
```
python3 depth_hard_pruner.py default --data-dir datasets/park --ckpt results/park/step-000029999.ckpt --pruning-ratio 0.1 --result-dir output

--eval-only (only evaluates, no saving, no pruning)
--pruning-ratio 0.0 (no pruning, saved in new format)
--output-format (ply (default), ckpt (nerfstudio), pt (gsplat))
```

#### Known Issues
For the Park scene, the model generates black Gaussians to cover the sky, and the entire scene ends up encased in them.