-
Please write a new one yourself. I can add a link to it in the repo or somewhere.
-
I have no plan to add any new feature to the Helper add-on.
-
The purpose of this discussion is for me to link to issues/PRs that are related to FSRS and where I have something to say, so that Jarrett doesn't forget about it or miss my comment.
VERY IMPORTANT
Please make a benchmark repo for benchmarking the prediction of answer times, aka review times. Just reuse the parameters for each user that you already have in the srs-benchmark repo, run FSRS on each user, and make it predict answer times using the same formulas as in CMRR/the simulator.
Then calculate the mean absolute error (MAE) and RMSE between predicted and real answer times. This way we can test changes that could make CMRR/the simulator more accurate.
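The error metrics requested above are straightforward; a minimal sketch (the function name and array inputs are illustrative, not from any repo):

```python
import numpy as np

def answer_time_errors(predicted, actual):
    """Compare predicted answer times against real ones.

    `predicted` and `actual` are per-review answer times in seconds,
    aligned element-by-element.
    """
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    errors = predicted - actual
    mae = np.mean(np.abs(errors))            # mean absolute error
    rmse = np.sqrt(np.mean(errors ** 2))     # root mean squared error
    return mae, rmse

mae, rmse = answer_time_errors([4.2, 6.1, 3.0], [5.0, 6.0, 2.5])
# mae ≈ 0.4667, rmse ≈ 0.5477
```

In practice this would be computed per user across the benchmark dataset and then aggregated, the same way the srs-benchmark repo aggregates its retention metrics.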
- Leech detector: Expt/leech-detector fsrs4anki-helper#539 (comment). Not planned.
- Neural D: implement FSRS5D in other.py srs-benchmark#199 (comment). I'll see if I can figure out how to make a combined revlog of several users on my own. Not planned.
- I want to collect data from people who report true retention < desired retention. Jarrett already has this: https://forms.gle/KaojsBbhMCytaA7h8. I suggest a few changes: add a "Your desired retention" field and a "How long ago did you switch to FSRS?" field, remove the "you can follow the latest progress at the GitHub repository or official Anki Forums" text (and honestly, rewrite that entire intro), and add this to instructions

- Increase minimum DR in Anki from 70% to 75% in anticipation of FSRS-6 with a flatter curve: Changed minimum value of DR ankitects/anki#3898 (comment). Not planned.
- Maybe add a way to do automatic optimization via the Helper add-on? I hope that's possible. Not planned.
- For the new simulation of learning steps, don't forget to apply this smoothing, just like to all other values used for simulations: [Feature request] Further improving the estimation of reviews times for calculating minimum recommended retention fsrs-optimizer#116. Done.
- Optimizable decay for FSRS-6: Feat/FSRS-6 fsrs-optimizer#169 (comment). Done.
- I'm worried about w11: it seems like it barely changes. According to this graph, all values are close to the default value, which is close to 2. You could say, "Well, maybe that's just a really good default value and this distribution is just very narrow," but I'm not so sure.
For example, I'm testing a new D function in which I changed the default value of w11 to 10, and I get this (based on 132 users so far):
5th percentile of w[11] = 9.790
95th percentile of w[11] = 10.020
This means that this parameter barely changes. Maybe it's just that my implementation is flawed, but I suggest you run some tests: if the distribution of values of w11 ALWAYS ends up being very narrow and centered around the default value, even when you change the default value by at least a factor of 2, then we have a problem.
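The check described above could be automated with something like the following sketch. The function name and the 5% relative-width threshold are arbitrary illustrations, not part of any optimizer code:

```python
import numpy as np

def distribution_is_narrow(values, default, rel_width=0.05):
    """Flag a parameter whose fitted values barely move from its default.

    `values` is one optimized parameter (e.g. w[11]) collected across many
    users; `default` is the initial value the optimizer started from.
    Returns True if the 5th-95th percentile spread is within `rel_width`
    of the default, i.e. the optimizer is effectively not moving it.
    """
    values = np.asarray(values, dtype=float)
    p5, p95 = np.percentile(values, [5, 95])
    return (p95 - p5) <= rel_width * abs(default)

# Values clustered like the ones quoted above (around a default of 10)
# would be flagged; a wide spread would not.
```

Running this once with the default at 2 and again with the default doubled, as suggested above, would make the "narrow distribution" symptom easy to confirm or rule out.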
The idea is to use the LSTM as a teacher for FSRS, since we know that the LSTM is more accurate than FSRS. Then you can vary a hyperparameter to control how much FSRS is penalized for not predicting the value of R that the LSTM predicted.
Hopefully we can get good default parameters that way. I'm busy running all versions of FSRS, and Alex is also busy, so I hope you will do this.
I suspect this will perform better than the current approach.
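This is essentially knowledge distillation. One way to sketch the loss, assuming FSRS and the LSTM both output a predicted retention R per review (all names and the exact blending scheme here are illustrative assumptions, not from any repo):

```python
import numpy as np

def distill_loss(r_fsrs, r_lstm, outcomes, alpha=0.5):
    """Blend the usual log-loss on real review outcomes with a penalty
    for deviating from the teacher's (LSTM's) predicted retention.

    `alpha` is the hyperparameter mentioned above: 0 ignores the teacher
    entirely; larger values pull FSRS toward the LSTM's predictions.
    """
    r_fsrs = np.clip(np.asarray(r_fsrs, dtype=float), 1e-7, 1 - 1e-7)
    r_lstm = np.asarray(r_lstm, dtype=float)
    y = np.asarray(outcomes, dtype=float)  # 1 = recalled, 0 = forgotten

    def xent(targets, preds):
        # cross-entropy of predictions against (possibly soft) targets
        return -np.mean(targets * np.log(preds)
                        + (1 - targets) * np.log(1 - preds))

    student_loss = xent(y, r_fsrs)       # fit the real outcomes
    teacher_loss = xent(r_lstm, r_fsrs)  # match the teacher's R
    return student_loss + alpha * teacher_loss
```

Sweeping `alpha` would then trade off raw accuracy on each user's revlog against agreement with the LSTM, which is what could yield more robust default parameters.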