Tutorial addition: Feasibility-driven Trust Region Bayesian Optimization #3048
base: main
Conversation
Commit with the tutorial on Feasibility-Driven trust Region Bayesian Optimization.
Title update
SCBO url update
Title update
Hi @paoloascia! Thank you for your pull request and welcome to our community.

Action Required: In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process: In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA. Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged accordingly.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!
Thanks @paoloascia for the contribution! Overall I think this looks good and about ready to be merged. I'll get back later today with a few suggestions for improvements.
Hi @hvarfner ! Thank you for the comment. I'm looking forward to your feedback :). |
Looks good implementation-wise, but there are some changes that would be good to see.
- Please make the notebook more informative: the user should get a sense of what the method does from the markdown cells and plots. Right now, they have to go through the code. Since the community is pretty well-informed on what TuRBO does by now (and probably even SCBO to some extent), try to focus on what distinguishes FuRBO from those two. This may entail some extra work, such as adding an informative plot, some math, and maybe even a comparison to SCBO.
- Please use a linter to clean up the code. This includes removing unused imports, placing comments in the right places, and using docstrings instead of comments where appropriate.
- Get rid of arguments that are not actually used: `max_cholesky_size` and `"method": "L-BFGS-B"` are covered by the current defaults, so they shouldn't be needed (see the sketch after this list).
- Warnings and try/except blocks: it's odd to me that GP fitting fails; I really don't think it should with so few observations. Please have a look at this. I'm happy to help here.
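To make the third point concrete, here is a minimal sketch, assuming the notebook fits a standard `SingleTaskGP` and optimizes an acquisition function with `optimize_acqf`; the training data and the `LogExpectedImprovement` acquisition here are placeholders, not what the tutorial actually uses. BoTorch already optimizes with scipy's L-BFGS-B, and GPyTorch's default `max_cholesky_size` is far larger than the number of observations in the tutorial, so neither needs to be set explicitly:

```python
import torch
from botorch.acquisition import LogExpectedImprovement
from botorch.fit import fit_gpytorch_mll
from botorch.models import SingleTaskGP
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

# Placeholder training data on [0, 1]^2.
train_X = torch.rand(20, 2, dtype=torch.double)
train_Y = -(train_X - 0.5).pow(2).sum(dim=-1, keepdim=True)

model = SingleTaskGP(train_X, train_Y)
mll = ExactMarginalLogLikelihood(model.likelihood, model)
fit_gpytorch_mll(mll)  # no gpytorch.settings.max_cholesky_size context needed

acqf = LogExpectedImprovement(model=model, best_f=train_Y.max())
candidate, _ = optimize_acqf(
    acq_function=acqf,
    bounds=torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double),
    q=1,
    num_restarts=4,
    raw_samples=128,
    # no options={"method": "L-BFGS-B"}: that is already the default optimizer
)
```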
| "from gpytorch.likelihoods import GaussianLikelihood\n", | ||
| "from gpytorch.mlls import ExactMarginalLogLikelihood\n", | ||
| "\n", | ||
| "from scipy.stats import invgauss\n", |
Unused imports
| "from gpytorch.mlls import ExactMarginalLogLikelihood\n", | ||
| "\n", | ||
| "from scipy.stats import invgauss\n", | ||
| "from scipy.stats import ecdf\n", |
Unused imports
| "metadata": {}, | ||
| "outputs": [], | ||
| "source": [ | ||
| "class norm_():\n", |
It would be good to see the utilities we have in botorch used here. The class `Normalize(d=dim, bounds=bounds)` should do the trick to normalize the inputs to [0, 1] for the models. Similarly, creating a different class `ack` that wraps Ackley can be avoided by using the same transforms.
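A minimal sketch of what this could look like, assuming the tutorial optimizes BoTorch's `Ackley` test function; the sizes and variable names are placeholders. With the `Normalize` input transform (and optionally `Standardize` for the outcome), the model consumes raw inputs directly, so no hand-written `norm_` or `ack` wrapper classes are needed:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.models.transforms.input import Normalize
from botorch.models.transforms.outcome import Standardize
from botorch.test_functions import Ackley

dim = 10  # placeholder dimensionality
fun = Ackley(dim=dim, negate=True)
bounds = fun.bounds.to(dtype=torch.double)  # shape (2, dim)

# Training data in the original (unnormalized) domain of Ackley.
train_X = bounds[0] + (bounds[1] - bounds[0]) * torch.rand(20, dim, dtype=torch.double)
train_Y = fun(train_X).unsqueeze(-1)

# Normalize maps inputs to [0, 1]^dim inside the model; Standardize handles the outcome.
model = SingleTaskGP(
    train_X,
    train_Y,
    input_transform=Normalize(d=dim, bounds=bounds),
    outcome_transform=Standardize(m=1),
)
```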
| "from botorch.models.model_list_gp_regression import ModelListGP\n", | ||
| "from torch.quasirandom import SobolEngine\n", | ||
| "\n", | ||
| "class Furbo_state():\n", |
Please use pythonic class naming (i.e. FurboState). Ideally, use type hints too, so that users understand what is going on.
| "class Furbo_state():\n", | ||
| " '''Class to track optimization status with restart'''\n", | ||
| " # Initialization of the status\n", | ||
| " def __init__(self, \n", |
Thanks for commenting on what each variable does. However, the conventional way to do this would be via a docstring. This is also cleaner, and should be a quick fix.
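A short sketch combining the naming, type-hint, and docstring suggestions; the fields shown here are illustrative and do not reproduce the notebook's actual state variables (apart from `it_counter`, which appears in the stopping criterion further down):

```python
from dataclasses import dataclass


@dataclass
class FurboState:
    """Tracks the state of the FuRBO optimization loop, including trust-region restarts.

    Attributes:
        dim: Dimensionality of the search space.
        batch_size: Number of candidates evaluated per iteration.
        tr_length: Current trust-region edge length.
        best_value: Best feasible objective value observed so far.
        it_counter: Number of completed iterations.
        restart_triggered: Whether the trust region should be reinitialized.
    """

    dim: int
    batch_size: int = 1
    tr_length: float = 0.8
    best_value: float = float("-inf")
    it_counter: int = 0
    restart_triggered: bool = False
```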
| "import numpy as np\n", | ||
| "from matplotlib import rc\n", | ||
| "\n", | ||
| "def ensure_tensor(x, tkwargs, dim=0):\n", |
Isn't torch.cat(x).to(**tkwargs) enough here?
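For reference, a tiny sketch of that one-liner, assuming `x` is a list of tensors and `tkwargs` holds the usual dtype/device pair (the names are taken from the diff; the shapes are made up):

```python
import torch

tkwargs = {"dtype": torch.double, "device": "cpu"}

# Concatenate along dim 0 and move to the target dtype/device in one call,
# instead of going through a custom ensure_tensor helper.
x = [torch.rand(3, 2), torch.rand(5, 2)]
x_tensor = torch.cat(x).to(**tkwargs)  # shape (8, 2), double precision
```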
| " samples = multivariate_circular(x_candidate, radius, n_samples, lb=lb, ub=ub, **tkwargs)\n", | ||
| " \n", | ||
| " # Evaluate samples on the models of the objective -> yy Tensor\n", | ||
| " state.Y_model.eval()\n", |
posterior() sets the model in eval mode, so these can be removed!
| " samples_yy = posterior.mean.squeeze()\n", | ||
| " \n", | ||
| " # Evaluate samples on the models of the constraints -> yy Tensor\n", | ||
| " state.C_model.eval()\n", |
and here
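Both comments refer to the same pattern: `posterior()` already switches the model to eval mode internally, so the explicit `eval()` calls on the objective and constraint models can simply be dropped. A minimal sketch with a stand-in model (the real code would use `state.Y_model` / `state.C_model`):

```python
import torch
from botorch.models import SingleTaskGP

train_X = torch.rand(20, 2, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)

samples = torch.rand(100, 2, dtype=torch.double)
with torch.no_grad():
    # No model.eval() needed: posterior() handles the mode switch itself.
    samples_yy = model.posterior(samples).mean.squeeze(-1)
```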
| "metadata": {}, | ||
| "outputs": [], | ||
| "source": [ | ||
| "def multivariate_circular(centre, # Centre of the multivariate distribution\n", |
This to me seems like the secret sauce for the method, right? If so, it could benefit from some additional explanation in the markdown cells!
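For readers skimming the thread, here is a heavily hedged sketch of what a helper with this signature could do: draw candidate points within a given radius of a centre and clip them to the box bounds. The function name, the uniform-in-ball sampling, and the clamping are guesses based only on the signature shown above; the notebook's actual implementation may differ.

```python
import torch


def multivariate_circular_sketch(centre, radius, n_samples, lb, ub, **tkwargs):
    """Illustrative only: sample n_samples points in a ball of the given radius around centre.

    Draws random directions, scales them to lie within the radius (with the
    usual r**(1/d) correction for uniform density in the ball), and clamps
    the result to the box [lb, ub].
    """
    centre = torch.as_tensor(centre, **tkwargs)
    dim = centre.shape[-1]
    directions = torch.randn(n_samples, dim, **tkwargs)
    directions = directions / directions.norm(dim=-1, keepdim=True)
    radii = radius * torch.rand(n_samples, 1, **tkwargs) ** (1.0 / dim)
    samples = centre + radii * directions
    return samples.clamp(torch.as_tensor(lb, **tkwargs), torch.as_tensor(ub, **tkwargs))
```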
| "outputs": [], | ||
| "source": [ | ||
| "def stopping_criterion(state, n_iteration):\n", | ||
| " '''Function to evaluate if the maximum number of allowed iterations is reached.'''\n", |
return state.it_counter <= n_iteration
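Applied to the function above, the suggestion collapses the body to a single expression (assuming `state.it_counter` counts the iterations performed so far):

```python
def stopping_criterion(state, n_iteration: int) -> bool:
    """Return True while the maximum number of allowed iterations has not been reached."""
    return state.it_counter <= n_iteration
```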
Motivation
Hi! This tutorial explains how to implement the Feasibility-driven Trust Region Bayesian Optimization (FuRBO) method presented last September at AutoML 2025. We believe it is a relevant contribution, as it builds upon Scalable Constrained BO (SCBO) to enhance the treatment of constraints and accelerate convergence to a feasible region.
Have you read the Contributing Guidelines on pull requests?
Yes
Test Plan
We did not change any existing code. The tutorial was tested prior to upload.
Related PRs
No documentation needs to be added.