Commit 2277ee0

Update README.

1 parent ac1473d

File tree

1 file changed: +64 -6 lines changed

README.md

Lines changed: 64 additions & 6 deletions
# Incremental Learners for Continual Learning

This repository stores all of my public work done during my PhD thesis (2019-).

Here you will find both implementations of known methods (iCaRL, etc.) and the code of my own papers.
You can find the list of the latter on my [Google Scholar](https://scholar.google.com/citations?user=snwgZBIAAAAJ&hl=en).

## Structures

Every model must inherit `inclearn.models.base.IncrementalLearner`.
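
For illustration, here is a minimal sketch of what such a subclass could look like. The hook names below (`_before_task`, `_train_task`, `_after_task`, `_eval_task`) are assumptions made for this example, not a verified interface; check `inclearn/models/base.py` for the actual methods to override.

```python
# Hypothetical minimal learner; the hook names are assumptions for
# illustration only -- see inclearn/models/base.py for the real interface.
from inclearn.models.base import IncrementalLearner


class MyLearner(IncrementalLearner):
    """Learns one task (i.e. one batch of new classes) at a time."""

    def _before_task(self, train_loader, val_loader):
        # E.g. grow the classifier with outputs for the new classes.
        pass

    def _train_task(self, train_loader, val_loader):
        # Fit the network on the current task's data.
        pass

    def _after_task(self, inc_dataset):
        # E.g. update the exemplar memory used for rehearsal.
        pass

    def _eval_task(self, test_loader):
        # Return predictions and ground-truth labels for evaluation.
        pass
```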

## Small Task Incremental Learning

Under review; preprint on arXiv [here](https://arxiv.org/abs/2004.13513).

If you use this paper/code in your research, please consider citing us:

```
@inproceedings{douillard2020podnet,
    title={Small-Task Incremental Learning},
    author={Arthur Douillard and Matthieu Cord and Charles Ollion and Thomas Robert and Eduardo Valle},
    booktitle={arXiv preprint library},
    year={2020}
}
```

To run experiments on CIFAR100 with three different class orders, in the challenging
setting of 50 steps:

```bash
python3 -minclearn --options options/podnet/podnet_cnn_cifar100.yaml options/data/cifar100_3orders.yaml \
    --initial-increment 50 --increment 1 --fixed-memory \
    --device <GPU_ID> --label podnet_cnn_cifar100_50steps \
    --data-path <PATH/TO/DATA>
```
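
In this command, `--initial-increment 50 --increment 1` means the first task covers 50 classes and each following step adds a single class, hence 50 incremental steps over CIFAR100's 100 classes. The three class orders come from `options/data/cifar100_3orders.yaml`, and `--fixed-memory` presumably keeps the rehearsal memory budget fixed across steps (see the option files for the exact values).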

Likewise, for ImageNet100:

```bash
python3 -minclearn --options options/podnet/podnet_cnn_imagenet100.yaml options/data/imagenet100_1order.yaml \
    --initial-increment 50 --increment 1 --fixed-memory \
    --device <GPU_ID> --label podnet_cnn_imagenet100_50steps \
    --data-path <PATH/TO/DATA>
```

And for ImageNet1000:

```bash
python3 -minclearn --options options/podnet/podnet_cnn_imagenet100.yaml options/data/imagenet1000_1order.yaml \
    --initial-increment 500 --increment 50 --fixed-memory --memory-size 20000 \
    --device <GPU_ID> --label podnet_cnn_imagenet1000_10steps \
    --data-path <PATH/TO/DATA>
```

Furthermore, several option files are available to reproduce the ablations showcased
in the paper; please see the directory `./options/podnet/ablations/`.

## Insight From the Future for Continual Learning

Under review; preprint on arXiv [here](https://arxiv.org/abs/2006.13748).

If you use this paper/code in your research, please consider citing us:

```
@inproceedings{douillard2020ghost,
    title={Insight From the Future for Continual Learning},
    author={Arthur Douillard and Eduardo Valle and Charles Ollion and Thomas Robert and Matthieu Cord},
    booktitle={arXiv preprint library},
    year={2020}
}
```
The code is still very dirty; I'll clean it later. Forgive me.
