This repository follows the integration guidelines for custom methods within Nerfstudio, as described in the Nerfstudio documentation.
Follow the official Nerfstudio installation instructions to install Nerfstudio.
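If Nerfstudio is not installed yet, a minimal sketch of the setup could look as follows (the environment name and Python version are assumptions; see the official Nerfstudio installation guide for the exact steps, including a CUDA-enabled PyTorch build and tiny-cuda-nn):

```bash
# create and activate a fresh environment (name and version are assumptions)
conda create --name nerfstudio -y python=3.8
conda activate nerfstudio
python -m pip install --upgrade pip
# install a CUDA-enabled PyTorch build first (see the Nerfstudio docs), then:
pip install nerfstudio
```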
Navigate to the nerfstudio directory and run
git clone https://github.com/r4dl/nerfinternals.git
The folder structure should now look like the following:
nerfstudio
├── ...
├── nerfinternals
│   ├── nerfinternals
│   ├── outputs
│   ├── scripts
│   ├── pyproject.toml
│   └── README.md
├── nerfstudio
│   ├── data
│   │   ├── blender
│   │   │   ├── chair
│   │   │   └── ...
│   │   ├── nerf_llff_data
│   │   │   ├── fern
│   │   │   └── ...
│   │   └── ...
│   └── ...
└── ...
Note that the nerfstudio/outputs directory is not created by default, but will be created if you train models.
Navigate to the nerfstudio/nerfinternals folder and run python -m pip install -e .
Note: You should re-activate your environment.
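Put together, the install and verification steps might look like this (a sketch, assuming the conda environment from above and the nerfstudio root as working directory):

```bash
cd nerfinternals
python -m pip install -e .
# re-activate the environment so the newly registered methods are picked up
conda deactivate && conda activate nerfstudio
ns-train --help
```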
You should see a list of subcommands containing...
╭─ subcommands ────────────────────────────────────────────────────────╮
│ activation-mipnerf    Using Activations to infer Depth, Mip-NeRF.    │
│ activation-nerf       Using Activations to infer Depth, NeRF.        │
│ activation-nerfacto   Using Activations to infer Depth, nerfacto.    │
│ ...                                                                  │
╰──────────────────────────────────────────────────────────────────────╯
You should see the new methods activation-{nerf, mipnerf, nerfacto}.
To train a model (just as done in the paper), run:
ns-train activation-{nerf, mipnerf, nerfacto} --data <path_to_data> <other cl-args> <dataparser>
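For example, training activation-nerf on the Blender chair scene could look like this (a sketch; the data path assumes the folder layout above and Nerfstudio's blender-data dataparser):

```bash
ns-train activation-nerf --data data/blender/chair blender-data
```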
As many individual command-line arguments need to be set, we provide scripts in the nerfinternals/scripts/
directory to train models for all scenes of a dataset.
Each script provides a help message; run ./launch_train_{blender, llff}_{nerf, nerfacto, nerfacto_NDC}.sh -h to see the available options.
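For instance, to inspect the options of the LLFF/nerfacto script (assuming it is executable and run from the scripts directory):

```bash
cd nerfinternals/scripts
./launch_train_llff_nerfacto.sh -h
```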
Note that we used the configuration in launch_train_llff_nerfacto.sh for our results in the main paper. 
For this configuration, we used the nerfstudio_data dataparser; hence, the LLFF dataset first needs to be converted to the required format with ns-process-data.
Run ns-process-data -h for further information about this command. We use the default arguments for images, and we use the images with a downscale factor of 4.
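A hedged example of such a conversion (the scene name and paths are placeholders; by default, ns-process-data images also writes downscaled image copies, which can then be selected at training time via the dataparser's downscale factor):

```bash
# convert one LLFF scene (e.g. fern) into the nerfstudio format
ns-process-data images --data data/nerf_llff_data/fern/images --output-dir data/fern
```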
To evaluate with our approach, use our eval.py script located in nerfinternals/nerfinternals/eval.py. 
Our models expect data in the directory nerfstudio/data/{nerf_llff_data, blender}. 
Example data can be downloaded with ns-download-data. We use the LLFF dataset provided by NeRF-Factory.
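For example, the Blender scenes can be fetched into the expected location like this (a sketch; run from the nerfstudio root, with flags as documented by ns-download-data -h):

```bash
ns-download-data blender --save-dir data
```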
Run python nerfinternals/nerfinternals/eval.py -h to see a list of available options:
usage: eval.py [-h] --load-config PATH [--layer INT [INT ...]]
               [--fct INT [INT ...]] [--upsample | --no-upsample]
               [--run-normal | --no-run-normal] [--output-dir STR]
Load a checkpoint, use the activations for estimating the density.
╭─ arguments ────────────────────────────────────────────────────────────────╮
│ -h, --help              show this help message and exit                    │
│ --load-config PATH      Path to config YAML file. (required)               │
│ --layer INT [INT ...]   layer in which to observe the activations - must   │
│                         not be larger than num_layers (default: 0 1 2)     │
│ --fct INT [INT ...]     function to use - must not be larger than 2        │
│                         (default: 0 1 2)                                   │
│ --upsample, --no-upsample                                                  │
│                         whether to upsample or not (default: False)        │
│ --run-normal, --no-run-normal                                              │
│                         whether to run coarse-to-fine pipeline or not      │
│                         (default: True)                                    │
│ --output-dir STR        directory to save outputs in (default: eval)       │
╰────────────────────────────────────────────────────────────────────────────╯
As an example command, running from the nerfstudio/nerfinternals directory, you can use
python3 nerfinternals/eval.py --load-config outputs/chair/activation-nerf/2023-04-28_135527/config.yml --layer 0 --fct 0 --no-run-normal
which produces the following images (left NeRF, right Ours).

Statistics are given in the stats.json file (run on an NVIDIA 2070 Super):
  "base": {
    "t": 43.97965955734253,
    "metrics": {
      "psnr": 35.70448684692383,
      "ssim": 0.9865843057632446,
      "lpips": 0.020251736044883728
    }
  },
  "layer_00_ups_0_fct_std": {
    "t": 33.18729019165039,
    "metrics": {
      "psnr": 34.74652099609375,
      "ssim": 0.9822013974189758,
      "lpips": 0.03017939068377018,
      "quantitative": {
        "t_act": 1.0785419940948486,
        "t_coarse": 0.0,
        "t_fine": 32.10867691040039
      }
    }
  }
As models are costly to train, especially for NeRF and Mip-NeRF, we provide pre-trained models in outputs.zip, hosted via Google Drive.
Note that these models can be used with {vanilla-nerf, mipnerf} by rewriting the corresponding config.yml file.
Create a directory nerfinternals/outputs and place the models there.
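One possible way to set this up (the archive layout is an assumption; adjust the extraction target if outputs.zip already contains a top-level outputs/ folder):

```bash
mkdir -p nerfinternals/outputs
# extract the downloaded pre-trained models into the new directory
unzip outputs.zip -d nerfinternals/outputs
```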
Our approach achieves the following performance:

(Result tables for the Blender Dataset and the LLFF Dataset.)
This project is built on Nerfstudio.

Our code was tested with nerfstudio v0.3.1 and CUDA 11.7.
If you use our work or build on top of it, use the following citation:
@inproceedings{Radl2024NerfInternals,
  title     = {{Analyzing the Internals of Neural Radiance Fields}},
  author    = {Radl, Lukas and Kurz, Andreas and Steiner, Michael and Steinberger, Markus},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)},
  year      = {2024},
}