
Commit ab8ca64

Update readme with DomainTransformer information

Add more information about the `DomainTransformer` object.

1 parent 857f0fe commit ab8ca64

File tree

1 file changed: +13 −11 lines changed

README.md

Lines changed: 13 additions & 11 deletions
````diff
@@ -45,6 +45,7 @@ exemplifying the balance between exploration and exploitation and how to
 control it.
 - Go over this [script](https://github.com/fmfn/BayesianOptimization/blob/master/examples/sklearn_example.py)
 for examples of how to tune parameters of Machine Learning models using cross validation and Bayesian optimization.
+- Explore the [domain reduction notebook](https://github.com/fmfn/BayesianOptimization/blob/master/examples/domain_reduction.ipynb) to learn more about how search can be sped up by dynamically changing parameters' bounds.
 - Finally, take a look at this [script](https://github.com/fmfn/BayesianOptimization/blob/master/examples/async_optimization.py)
 for ideas on how to implement Bayesian optimization in a distributed fashion using this package.

````

````diff
@@ -53,11 +54,11 @@ for ideas on how to implement Bayesian optimization in a distributed fashion using this package.

 Bayesian optimization works by constructing a posterior distribution of functions (a Gaussian process) that best describes the function you want to optimize. As the number of observations grows, the posterior distribution improves, and the algorithm becomes more certain of which regions in parameter space are worth exploring and which are not, as seen in the picture below.

-![BayesianOptimization in action](https://github.com/fmfn/BayesianOptimization/blob/master/examples/bo_example.png)
+![BayesianOptimization in action](./examples/bo_example.png)

 As you iterate over and over, the algorithm balances its needs of exploration and exploitation, taking into account what it knows about the target function. At each step a Gaussian process is fitted to the known samples (points previously explored), and the posterior distribution, combined with an exploration strategy such as UCB (Upper Confidence Bound) or EI (Expected Improvement), is used to determine the next point that should be explored (see the gif below).

-![BayesianOptimization in action](https://github.com/fmfn/BayesianOptimization/blob/master/examples/bayesian_optimization.gif)
+![BayesianOptimization in action](./examples/bayesian_optimization.gif)

 This process is designed to minimize the number of steps required to find a combination of parameters close to the optimal combination. To do so, this method uses a proxy optimization problem (finding the maximum of the acquisition function) that, albeit still a hard problem, is computationally cheaper and can be tackled with common tools. Bayesian optimization is therefore best suited to situations where sampling the function to be optimized is a very expensive endeavor. See the references for a proper discussion of this method.

````
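The fit-a-surrogate, maximize-an-acquisition, sample, repeat loop described above can be sketched in plain NumPy. This is an illustrative toy, not the package's implementation: the RBF kernel, the UCB weight of 2, and the 1-D target with its maximum at `x = 0.5` are all invented for the example.

```python
import numpy as np

def rbf(a, b, length=0.3):
    # Squared-exponential kernel matrix between two 1-D point sets.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

def gp_posterior(x_train, y_train, x_grid, jitter=1e-6):
    # Standard GP regression: posterior mean and std on a grid of candidates.
    K_inv = np.linalg.inv(rbf(x_train, x_train) + jitter * np.eye(len(x_train)))
    K_s = rbf(x_grid, x_train)
    mean = K_s @ K_inv @ y_train
    var = 1.0 - np.sum((K_s @ K_inv) * K_s, axis=1)
    return mean, np.sqrt(np.clip(var, 0.0, None))

def target(x):
    # Toy objective, maximum at x = 0.5.
    return -((x - 0.5) ** 2)

x_grid = np.linspace(0.0, 1.0, 201)
x_train = np.array([0.1, 0.9])          # two initial probes
y_train = target(x_train)

for _ in range(10):
    mean, std = gp_posterior(x_train, y_train, x_grid)
    ucb = mean + 2.0 * std              # UCB: exploit the mean, explore the variance
    x_next = x_grid[np.argmax(ucb)]     # proxy problem: maximize the acquisition
    x_train = np.append(x_train, x_next)
    y_train = np.append(y_train, target(x_next))

best = x_train[np.argmax(y_train)]
```

After a handful of iterations the acquisition surface concentrates the samples near the true maximum; swapping in a different kernel or an EI-style acquisition changes only the two marked lines.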

````diff
@@ -180,6 +181,15 @@ optimizer.maximize(
 | 10        | -1.762    | 1.442     | 0.1735    |
 =================================================

+### 2.2 Sequential Domain Reduction
+
+Sometimes the initial boundaries specified for a problem are too wide, and adding points to improve the response surface in regions of the solution domain is extraneous. Other times the cost function is very expensive to compute, and minimizing the number of calls is extremely beneficial.
+
+When converging on an optimal point quickly matters more than exhaustively exploring the domain, contracting the domain around the current optimal value as the search progresses can speed up the search considerably. Using the `SequentialDomainReductionTransformer`, the bounds of the problem can be panned and zoomed dynamically in an attempt to improve convergence.
+
+![sequential domain reduction](./examples/sdr.png)
+
+An example of using the `SequentialDomainReductionTransformer` is shown in the [domain reduction notebook](https://github.com/fmfn/BayesianOptimization/blob/master/examples/domain_reduction.ipynb). More information about this method can be found in the paper ["On the robustness of a simple domain reduction scheme for simulation-based optimization"](http://www.truegrid.com/srsm_revised.pdf).

 ## 3. Guiding the optimization

````
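The pan-and-zoom behaviour this section describes can be illustrated without the package at all: repeatedly re-centre a shrinking bounding box on the best point seen so far. In the sketch below, plain random search stands in for the Gaussian-process suggestion step, and the `gamma` shrink factor and the toy objective are assumptions made up for the example.

```python
import random

def shrink_bounds(bounds, best, gamma=0.9):
    # Pan: centre each interval on the current best point.
    # Zoom: the new width is gamma * the old width, clipped to the old interval.
    new_bounds = {}
    for name, (lo, hi) in bounds.items():
        half = 0.5 * gamma * (hi - lo)
        new_bounds[name] = (max(lo, best[name] - half), min(hi, best[name] + half))
    return new_bounds

def objective(x, y):
    # Toy target with its maximum (value 0) at x = 2, y = -1.
    return -((x - 2.0) ** 2) - ((y + 1.0) ** 2)

random.seed(0)
bounds = {"x": (-10.0, 10.0), "y": (-10.0, 10.0)}
best_point, best_value = None, float("-inf")

for _ in range(60):
    # One random probe inside the current (shrinking) bounds stands in for
    # the surrogate-guided suggestion of the real optimizer.
    point = {name: random.uniform(lo, hi) for name, (lo, hi) in bounds.items()}
    value = objective(**point)
    if value > best_value:
        best_point, best_value = point, value
    bounds = shrink_bounds(bounds, best_point)
```

After 60 rounds the box has contracted to a tiny window around the incumbent; the real transformer applies the same pan-and-zoom idea to the optimizer's bounds between suggestions.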

````diff
@@ -261,14 +271,6 @@ new_optimizer = BayesianOptimization(
 load_logs(new_optimizer, logs=["./logs.json"]);
 ```

-### 4.3 Sequential Domain Reduction
-
-Using the `SequentialDomainReductionTransformer` the bounds of the problem can be panned and zoomed in an attempt to improve convergence. Sometimes the initial boundaries specified for a problem are too wide, and adding points to improve the response surface in regions of the solution domain is extraneous. Other times the cost function is very expensive to compute, and minimizing the number of calls is extremely beneficial.
-
-![sequential domain reduction](https://github.com/fmfn/BayesianOptimization/blob/master/examples/sdr.png)
-
-An example of using the `SequentialDomainReductionTransformer` is shown in the [domain reduction notebook](https://github.com/fmfn/BayesianOptimization/blob/master/examples/domain_reduction.ipynb)
-
 ## Next Steps

 This introduction covered the most basic functionality of the package. Check out the [basic-tour](https://github.com/fmfn/BayesianOptimization/blob/master/examples/basic-tour.ipynb) and [advanced-tour](https://github.com/fmfn/BayesianOptimization/blob/master/examples/advanced-tour.ipynb) notebooks in the examples folder, where you will find detailed explanations and other more advanced functionality. Also, browse the examples folder for implementation tips and ideas.
````
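The `load_logs` call shown in this hunk replays previously saved observations into a fresh optimizer. Mechanically that rests on a JSON-lines round trip, sketched below; the record shape is illustrative, not the package's exact log schema.

```python
import json
import os
import tempfile

# Hypothetical observation records, one JSON object per line, in the style a
# logging observer might append them during an optimization run.
observations = [
    {"params": {"x": 1.0}, "target": -1.0},
    {"params": {"x": 0.5}, "target": -0.25},
]

path = os.path.join(tempfile.mkdtemp(), "logs.json")
with open(path, "w") as f:
    for obs in observations:
        f.write(json.dumps(obs) + "\n")

# Replaying the log is just reading each line back; registering those points
# with a new optimizer is what warm-starting from logs amounts to.
with open(path) as f:
    replayed = [json.loads(line) for line in f]

best = max(replayed, key=lambda obs: obs["target"])
```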
````diff
@@ -322,4 +324,4 @@ If you used this package in your research and is interested in citing it here's
 * http://papers.nips.cc/paper/4522-practical-bayesian-optimization-of-machine-learning-algorithms.pdf
 * http://arxiv.org/pdf/1012.2599v1.pdf
 * http://www.gaussianprocess.org/gpml/
-* https://www.youtube.com/watch?v=vz3D36VXefI&index=10&list=PLE6Wd9FR--EdyJ5lbFl8UuGjecvVw66F6
\ No newline at end of file
+* https://www.youtube.com/watch?v=vz3D36VXefI&index=10&list=PLE6Wd9FR--EdyJ5lbFl8UuGjecvVw66F6
````
