Official PyTorch implementation of the journal article:
```bibtex
@article{mo2022rganet,
  title     = {Realtime Global Attention Network for Semantic Segmentation},
  author    = {Mo, Xi and Chen, Xiangyu},
  journal   = {IEEE Robotics and Automation Letters with ICRA 2022 Presentation},
  year      = {2022},
  month     = jan,
  volume    = {7},
  number    = {2},
  pages     = {1574-1580},
  publisher = {IEEE},
  doi       = {10.1109/LRA.2022.3140443}
}
```
Requirements:

```
python >= 3.5
pytorch >= 1.0.0
(optional) thop, apex, tqdm
```
- Create the folder `checkpoint` in the root directory, download our pretrained checkpoint (53.1MB) to that folder, then run the demo:

  ```
  python demo.py
  ```

  Suction area predictions are saved in the folder `sample`; alternatively, specify the `path/to/checkpoint` and `path/to/samples` using the `-c` and `-d` args respectively.
- Prepare the dataset and train from scratch

  Please consult `utils/configuraion.py` if you want to customize the training setup. Download suction-based-grasping-dataset.zip (1.6GB) and create the folder `dataset` in the root directory. There are two ways to train RGANet-NB:

  Extract the main folder `suction-based-grasping-dataset` to `dataset` and run:

  ```
  (default) > python RGANet.py -train
  ```

  Or extract the main folder somewhere else and specify the paths:

  ```
  (customized) > python RGANet.py -train -i path/to/color-input -l path/to/label
  ```
- Restore training from a checkpoint

  By default, RGANet reads the latest checkpoint from the folder `checkpoint`; you can also specify a checkpoint using the `-c` arg:

  ```
  python RGANet.py -train -r -c path/to/checkpoint
  ```
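  The "latest checkpoint" lookup can be pictured as the following sketch, which picks the most recently modified checkpoint file in a folder. This is illustrative only, assuming `.pth` files; the function name and extension are our assumptions, not RGANet's actual code:

  ```python
  import os

  def latest_checkpoint(folder, ext=".pth"):
      """Return the most recently modified checkpoint file in `folder`,
      or None if no file matches the extension. Illustrative sketch;
      the repo's own lookup may differ."""
      candidates = [os.path.join(folder, f)
                    for f in os.listdir(folder) if f.endswith(ext)]
      return max(candidates, key=os.path.getmtime) if candidates else None
  ```

  Passing `-c path/to/checkpoint` bypasses any such lookup and loads the given file directly.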
- Test

  A checkpoint is required before any test, and we only present the RGANet-NB architecture. Please consult `utils/configuraion.py` if you want to customize the testing, then run:

  ```
  python RGANet.py -test
  ```

  Refer to the `-d` arg to specify the path where predictions are saved.
- Validation

  We provide both online and offline validation. Online validation runs tests on all checkpoints and estimates which checkpoint may have the best performance. Offline validation requires predictions saved to disk beforehand, i.e., run a test with predictions written to disk before any offline validation. Please make sure you've set the desired options in `utils/configuraion.py`, then run:

  ```
  python RGANet.py -v
  ```

  or

  ```
  python RGANet.py -test -v
  ```

  Refer to the `-d` arg to specify the path to predictions.
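  Conceptually, online validation's best-checkpoint estimation boils down to scoring every checkpoint with a validation metric and keeping the argmax. A minimal sketch, assuming a higher metric (e.g. mean IoU) is better; `estimate_best` and `evaluate` are hypothetical names, not the repo's API:

  ```python
  def estimate_best(checkpoints, evaluate):
      """Score every checkpoint with `evaluate` (a callable returning a
      scalar metric, higher is better) and return (best_ckpt, best_score).
      Illustrative sketch of argmax selection over checkpoints."""
      best, best_score = None, float("-inf")
      for ckpt in checkpoints:
          score = evaluate(ckpt)
          if score > best_score:
              best, best_score = ckpt, score
      return best, best_score
  ```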
- In addition to the functions above, we also provide useful tools:

  | name | description |
  | --- | --- |
  | calculator.py | evaluation statistics; calculates model parameters |
  | eval_adaptor.py | splits items of a validation result into separate files |
  | models.py | standalone training and validation of segmentation models |
  | pred_transform.py | converts other predictions to processable images |
  | proportion.py | computes adaptive weights for CE loss and focal loss |
  | seg_models.py | standalone runtime test for segmentation models |
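  As an illustration of what adaptive loss weighting involves: class weights for CE/focal loss are commonly made inversely proportional to class pixel frequency, so rare classes contribute more to the loss. The sketch below uses plain inverse-frequency weighting normalized to sum to the number of classes; this is an assumed convention for illustration, not necessarily the exact formula in `proportion.py`:

  ```python
  def adaptive_weights(pixel_counts):
      """Map class index -> pixel count to class index -> loss weight.
      Rarer classes get larger weights; weights are scaled so they sum
      to the number of classes (a common convention, assumed here)."""
      total = sum(pixel_counts.values())
      raw = {c: total / n for c, n in pixel_counts.items() if n > 0}
      scale = len(raw) / sum(raw.values())
      return {c: w * scale for c, w in raw.items()}
  ```

  For a binary suction/background task where the suction class covers 10% of the pixels, this gives the suction class a 9x larger weight than the background.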
License: Apache 2.0