We provide a unified evaluation script that runs baselines on multiple benchmarks. It takes a baseline model and an evaluation configuration, evaluates on the fly, and writes the results to a JSON file.
Download the processed datasets from Hugging Face Datasets and put them in your $DATAROOT directory using huggingface-cli:
```shell
export DATAROOT=$HOME/data/eval
huggingface-cli download lpiccinelli/unik3d-evaluation --repo-type dataset --local-dir $DATAROOT --local-dir-use-symlinks False
```

See configs/eval/vitl.json for an example evaluation configuration covering all benchmarks. Edit "data/val_datasets" to change the list of test datasets.
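As an illustration, trimming "data/val_datasets" can be done programmatically. This is a hedged sketch: the nested key layout (`cfg["data"]["val_datasets"]`) and the dataset names used here are assumptions; check the actual configs/eval/vitl.json for the real schema and benchmark names.

```python
import json

# Hypothetical config structure; the real file is configs/eval/vitl.json.
cfg = {"data": {"val_datasets": ["KITTI", "NYUv2", "ETH3D"]}}

# Keep only the benchmarks you want to evaluate on (placeholder names).
keep = {"KITTI", "NYUv2"}
cfg["data"]["val_datasets"] = [d for d in cfg["data"]["val_datasets"] if d in keep]

# Write the trimmed configuration to a new file for the eval script.
print(json.dumps(cfg, indent=2))
```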
Run the script scripts/eval.py:

```shell
# Evaluate UniK3D on the 13 benchmarks
python scripts/eval.py --dataroot $DATAROOT --config configs/eval/vits.json --save-path ./unik3d.json --camera-gt
```

With the following arguments:
```
Usage: eval.py [OPTIONS]

  Evaluation script.

Options:
  --config PATH     Path to the evaluation configuration file.
  --dataroot PATH   Path to the directory where the HDF5 datasets are stored.
  --save-path PATH  Path to the output JSON file.
  --camera-gt       Use ground-truth camera parameters during evaluation.
```
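Once the run finishes, the reported metrics can be inspected directly from the output JSON. A minimal sketch, assuming the file maps benchmark names to per-metric dictionaries; the actual schema of unik3d.json may differ, so inspect your output file first.

```python
import json

# Hypothetical result layout; replace with: json.load(open("./unik3d.json"))
results = {
    "KITTI": {"F1": 0.93, "rmse": 2.1},
    "NYUv2": {"F1": 0.95, "rmse": 0.35},
}

# Print a compact per-benchmark summary, one line per dataset.
for bench, metrics in sorted(results.items()):
    line = ", ".join(f"{k}={v:.3f}" for k, v in sorted(metrics.items()))
    print(f"{bench}: {line}")
```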