For more details of PointRCNN, please refer to the original paper or the author's GitHub project page.
All the code was tested in the following environment:
- Linux (tested on Ubuntu 18/16)
- Python 3.6+
- PyTorch 1.0
a. Clone the PointRCNN repository.
git clone --recursive https://github.com/sshaoshuai/PointRCNN.git

If you forget to add the --recursive flag, run the following command to clone the Pointnet2.PyTorch submodule:

git submodule update --init --recursive

b. Install the dependent Python libraries, such as easydict, tqdm, and tensorboardX.
c. Build and install the pointnet2_lib, iou3d, roipool3d libraries by executing the following command:
sh build_and_install.sh

d. Install the Argoverse API.
Place all training logs of the Argoverse dataset in a single folder, and set cfg.DATA_PATH in the config yaml file to the path of that directory:
CFG.DATA_PATH = '../../data/lidar' # Change this as per your path
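As a quick sanity check before training, the sketch below (a hypothetical helper, not part of the repository) verifies that the configured data path exists and lists the log directories under it, assuming each Argoverse training log is a subdirectory of DATA_PATH:

```python
import os

def list_argoverse_logs(data_path):
    """Return the sorted log directory names under the configured DATA_PATH.

    Hypothetical sanity-check helper; assumes each Argoverse training log
    is one subdirectory of data_path, as described above.
    """
    if not os.path.isdir(data_path):
        raise FileNotFoundError(f"cfg.DATA_PATH does not exist: {data_path}")
    return sorted(
        d for d in os.listdir(data_path)
        if os.path.isdir(os.path.join(data_path, d))
    )
```

Running this once on cfg.DATA_PATH catches a mistyped path before a long training job starts.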
You can run the following command to evaluate the pretrained model:
python eval_rcnn.py --cfg_file cfgs/argo_config_sampling_trainfull.yaml --rpn_ckpt rpn_ckpt.pth --rcnn_ckpt rcnn_ckpt.pth --batch_size 1 --eval_mode rcnn
- To evaluate a single checkpoint, run the following command with --ckpt to specify the checkpoint to be evaluated:
python eval_rcnn.py --cfg_file cfgs/default.yaml --ckpt ../output/rpn/ckpt/checkpoint_epoch_200.pth --batch_size 4 --eval_mode rcnn
- To evaluate all the checkpoints of a specific training config file, add the --eval_all argument and run the command as follows:
python eval_rcnn.py --cfg_file cfgs/default.yaml --eval_mode rcnn --eval_all
- To generate results on the test split, set TEST.SPLIT=TEST in the config and add the --test argument.
You can specify a larger --batch_size for faster inference, depending on your GPU memory. Note that the --eval_mode argument should be consistent with the --train_mode used during training. If you are using --eval_mode=rcnn_offline, you should use --rcnn_eval_roi_dir and --rcnn_eval_feature_dir to specify the saved features and proposals of the validation set. Please refer to the training section for more details.
- To train the first proposal generation stage of PointRCNN with a single GPU, run the following command:
python train_rcnn.py --cfg_file cfgs/default.yaml --batch_size 16 --train_mode rpn --epochs 200
- To use multiple GPUs for training, simply add the --mgpus argument as follows:
CUDA_VISIBLE_DEVICES=0,1 python train_rcnn.py --cfg_file cfgs/default.yaml --batch_size 16 --train_mode rpn --epochs 200 --mgpus
After training, the checkpoints and training logs will be saved to a directory named after your configuration file. For example, for default.yaml you can find the checkpoints and logs in:
PointRCNN/output/rpn/default/
which will be used for training the RCNN stage.
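The output path is derived from the config file's base name; a rough sketch of this naming convention (the function name is hypothetical, and the exact join logic inside the repository may differ):

```python
import os

def output_dir(cfg_file, stage="rpn", root="PointRCNN/output"):
    """Hypothetical reconstruction of the checkpoint/log directory convention:
    <root>/<stage>/<config name without extension>/
    """
    cfg_name = os.path.splitext(os.path.basename(cfg_file))[0]
    return os.path.join(root, stage, cfg_name)
```

For example, output_dir("cfgs/default.yaml") yields the PointRCNN/output/rpn/default/ path shown above.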
Suppose you have a well-trained RPN model saved at output/rpn/default/ckpt/checkpoint_epoch_200.pth; you can then train the RCNN stage with:
python train_rcnn.py --cfg_file cfgs/default.yaml --batch_size 4 --train_mode rcnn --epochs 70 --ckpt_save_interval 2 --rpn_ckpt ../output/rpn/default/ckpt/checkpoint_epoch_200.pth
Evaluation uses scripts from Varun Agarwal's BBox library; only two of its modules are used here, as the others were not required. This reports AP and the recall rate at a given IoU threshold (currently 0.7), configurable via the --iou argument:
python eval.py --prediction_path /prediction_dir --label_path /labels_dir --iou 0.6
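To illustrate what the --iou threshold means for AP and recall, here is a minimal sketch of axis-aligned 2D box IoU. The actual evaluation operates on oriented/3D boxes; this simplified version is for intuition only, and a prediction counts as a true positive when its IoU with a ground-truth box is at least the threshold:

```python
def iou_2d(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2).

    Simplified 2D illustration of the overlap criterion; the real
    evaluation uses oriented/3D box IoU.
    """
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

For example, two unit-offset 2x2 boxes overlap with IoU 1/7 ≈ 0.14, well below a 0.7 threshold, so such a detection would be counted as a false positive.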
