Dear Shizheng Wen,
I hope this message finds you well. First of all, thank you very much for your excellent work on GAOT—I truly appreciate the methodological elegance and strong empirical results presented in your paper.
I recently attempted to reproduce the time-independent benchmarks using the official open-source codebase. Overall, the pipeline was clear and well-documented. However, I observed a substantial discrepancy in the Poisson-Gauss case: my reproduced median relative L1 error is approximately 2.03%, whereas the paper reports 0.83%. Given the magnitude of this gap, I would like to ask whether I might have overlooked an important configuration or training detail.
For reference, on the Elasticity task, I obtained a median relative L1 error of 1.53%, which is fairly close to the reported 1.34% and likely within reasonable variance.
The command I used (for Poisson-Gauss) was:
python main.py --config config/examples/time_indep/poisson_gauss.json
and analogously with elasticity.json for the Elasticity task; I made no modifications to the default settings.
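For full transparency, here is a minimal sketch of how I computed the reported metric, namely the per-sample definition median_i(|pred_i - target_i|_1 / |target_i|_1), expressed in percent. The array shapes and this exact normalization are my assumptions, so please let me know if the paper uses a different convention:

import numpy as np

def median_relative_l1(pred, target):
    # pred, target: (n_samples, n_points, n_channels) arrays of model
    # outputs and ground truth on the test split (shapes are my assumption).
    num = np.abs(pred - target).sum(axis=(1, 2))  # per-sample L1 error
    den = np.abs(target).sum(axis=(1, 2))         # per-sample L1 norm of target
    return float(np.median(num / den)) * 100.0    # median over samples, in percent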
Could you please clarify whether the final results in the paper relied on additional details not included in the public config files? For example: specific random seeds, extended training epochs, learning rate scheduling, data normalization strategies, or other subtle training practices?
Any guidance would be greatly appreciated. Thank you again for your valuable contribution to the field!
Best regards,
LilaKen