add orl stage1
keep my local branch in line with the latest branch
Author
I have tried my best to port ORL to mmselfsup 1.x. Unfortunately, ORL pre-training is multi-stage and needs thousands of epochs on the COCO train2017 dataset, so the provided computation resource, i.e. the Beijing cloud, is insufficient. As a consequence, I hardly have the time and resources to reproduce the downstream task results before the deadline (1.16).
Author
update downstream task results (VOC07) in orl/README.md
Thanks for your contribution and we appreciate it a lot. The following instructions will make your pull request healthier and easier to get feedback on. If you do not understand some items, don't worry, just make the pull request and seek help from maintainers.
Motivation
Add ORL algorithm, including Python files and README.md
Modification
ORL is composed of three stages: stage 1 performs image-level pre-training and retrieves KNN image ids, stage 2 generates RoIs (Roi_generate) and retrieves RoI pairs (Roi_pair_retrieve), and stage 3 performs object-level pre-training.
[1] For dataset-related issues,
a. Register 'SSDataset', 'CorrespondDataset' and 'ORLDataset' in mmselfsup/datasets/__init__.py (see the sketch below).
b. ORL uses the COCO train2017 dataset in stage 1 and stage 3. Add configs/selfsup/_base_/datasets/coco_orl_stage1.py and configs/selfsup/_base_/datasets/coco_orl_stage3.py.
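Below is a minimal sketch of how the registration could look; the module file names (ss_dataset.py, correspond_dataset.py, orl_dataset.py) are assumptions for illustration, not necessarily the file names used in this PR.

```python
# mmselfsup/datasets/__init__.py (excerpt; existing entries omitted)
from .correspond_dataset import CorrespondDataset  # RoI correspondence pairs for stage 3
from .orl_dataset import ORLDataset  # object-level samples for stage 3 pre-training
from .ss_dataset import SSDataset  # plain COCO images fed to selective search in stage 2

__all__ = [
    # ...existing dataset names...
    'SSDataset', 'CorrespondDataset', 'ORLDataset'
]
```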
[2] For algorithm-related issues,
a. Register "ORL", "Correspondence" and "SelectiveSearch" in mmselfsup/models/algorithms/__init__.py.
b. Add configs/selfsup/_base_/models/orl.py (sketched below).
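Since ORL's image-level stage is BYOL-style, the base model config could look roughly like the sketch below; the channel sizes, momentum and loss are placeholders borrowed from typical BYOL configs in mmselfsup 1.x, not necessarily the exact values used in this PR.

```python
# configs/selfsup/_base_/models/orl.py (illustrative sketch)
model = dict(
    type='ORL',                # algorithm registered by this PR
    base_momentum=0.99,        # EMA momentum for the target network (placeholder)
    backbone=dict(
        type='ResNet',
        depth=50,
        norm_cfg=dict(type='SyncBN')),
    neck=dict(
        type='NonLinearNeck',  # BYOL-style projector
        in_channels=2048,
        hid_channels=4096,
        out_channels=256,
        num_layers=2),
    head=dict(
        type='LatentPredictHead',
        predictor=dict(
            type='NonLinearNeck',
            in_channels=256,
            hid_channels=4096,
            out_channels=256,
            num_layers=2),
        loss=dict(type='CosineSimilarityLoss')))
```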
[3] For hook issues,
Register "ORLHook" in mmselfsup/engine/hooks/__init__.py. ORLHook is adopted in stage 1 to retrieve the KNN image ids of COCO train2017 (sketched below).
[4] For bash scripts,
a. ORL has three stages, so tools/slurm_train.sh and tools/dist_train.sh are not enough.
For stage 2 (Roi_generate and Roi_pair_retrieve), add four shell scripts, i.e.
tools/dist_selective_search_single_gpu.sh, tools/slurm_selective_search_single_gpu.sh,
tools/dist_generate_correspondence_single_gpu.sh and tools/slurm_generate_correspondence_single_gpu.sh.
b. For stage 2 (Roi_generate and Roi_pair_retrieve), add tools/selective_search.py and tools/generate_correspondence.py, which the above scripts launch (see the sketch below).
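A condensed sketch of what the tools/selective_search.py entry point could look like; the CLI arguments, the config structure and the key used for the generated boxes are assumptions, not the exact interface of this PR.

```python
# tools/selective_search.py (illustrative sketch)
import argparse

from mmengine.config import Config
from mmengine.fileio import dump

from mmselfsup.registry import DATASETS


def main():
    parser = argparse.ArgumentParser(
        description='Generate selective-search proposals for COCO train2017')
    parser.add_argument('config', help='dataset config file path')
    parser.add_argument('output', help='json file storing the proposals')
    args = parser.parse_args()

    cfg = Config.fromfile(args.config)
    dataset = DATASETS.build(cfg.train_dataloader.dataset)

    proposals = {}
    for idx in range(len(dataset)):
        # SSDataset is assumed to return the boxes found by selective search
        proposals[idx] = dataset[idx]['bbox'].tolist()

    dump(proposals, args.output)


if __name__ == '__main__':
    main()
```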
[5] For pre-training config files,
Add config files under configs/selfsup/orl/stage1/*.py, configs/selfsup/orl/stage2/*.py and configs/selfsup/orl/stage3/*.py (an illustrative stage-1 layout is sketched below).
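To show how the pieces tie together, a stage-1 config could be laid out as below; the file name, the schedule file and the hook arguments are placeholders rather than the exact contents of this PR.

```python
# configs/selfsup/orl/stage1/orl_resnet50_8xb64-coslr-800e_coco.py (illustrative)
_base_ = [
    '../../_base_/models/orl.py',
    '../../_base_/datasets/coco_orl_stage1.py',
    '../../_base_/schedules/lars_coslr-200e_in1k.py',
    '../../_base_/default_runtime.py',
]

# stage 1 attaches the KNN-retrieval hook so that the retrieved image ids
# can be consumed by the stage-2 selective-search / correspondence scripts
custom_hooks = [dict(type='ORLHook', topk=10)]
```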
[6] Add README.md under configs/selfsup/orl/
BC-breaking (Optional)
Does the modification introduce changes that break the backward compatibility of the downstream repositories?
If so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.
Use cases (Optional)
If this PR introduces a new feature, it is better to list some use cases here and update the documentation.
Checklist
Before PR:
After PR: