-
Mostly likely, it's because the time budget is 60s and within that budget, either cv or holdout is used to compared different models. When you retrain the model on the full training data, the model's accuracy is higher because more data are used to fit the model. To confirm it, could you check the logger's info and see whether 'retrain' appears in the log? If not, then flaml did not get a chance to retrain the model with full training data with that time budget. BTW, we'll modify the default behavior of retraining in the next version. |
-
Hello FLAML team,
I used the FLAML-selected estimator (LGBMClassifier) to predict on the testing data and got 0.75 performance.
Then I used the same params from the selected estimator to fit LGBMClassifier from the lightgbm library, predicted on the same testing data with the same metric, and got 0.78 performance.
Do you know what may have caused the difference?