Wrap gradient_free_optimizers (local) #624

Open · wants to merge 23 commits into base: main

Commits (23)
60a7185
initial wrapper over hill_climbing optimizer
gauravmanmode Jul 27, 2025
c101bd8
Merge remote-tracking branch 'upstream/main' into gradient_free_optim…
gauravmanmode Jul 27, 2025
fdc8097
lazy_loading import and add docstrings for class and helper func
gauravmanmode Jul 27, 2025
a37771a
fix path in docs
gauravmanmode Jul 28, 2025
d190062
add base class for common options , add pso optimizer
gauravmanmode Aug 1, 2025
67d9904
grid points can be pytree
gauravmanmode Aug 2, 2025
cfe87f8
Merge branch 'main' into gradient_free_optimizers
gauravmanmode Aug 3, 2025
e5e5175
add test, n_init option
gauravmanmode Aug 4, 2025
23dc21d
add hillclimbing derivatives and docs
gauravmanmode Aug 4, 2025
27d0b1b
add tests
gauravmanmode Aug 5, 2025
73946de
remove pso
gauravmanmode Aug 5, 2025
9948a8b
refactor test_many_algorithms new
gauravmanmode Aug 10, 2025
90694b8
tune gfo algorithms temp to pass tests
gauravmanmode Aug 10, 2025
a1cd6cd
add sphereinternaloptimizationexample with converter
gauravmanmode Aug 10, 2025
c2958fd
pytree tests for gfo
gauravmanmode Aug 10, 2025
17bce2e
run new test_many comment old test_many
gauravmanmode Aug 10, 2025
d341b47
mypy override gfo
gauravmanmode Aug 10, 2025
312ded8
tune gfo to pass tests
gauravmanmode Aug 10, 2025
45f86e6
set accuracy of local from 4 to 3 temp
gauravmanmode Aug 10, 2025
a95a826
rename to gfo_optimizers, rename simplex parameters, add to ignore mypy,
gauravmanmode Aug 12, 2025
deae6c0
pass common_options as first argument to reduce clutter, rename to te…
gauravmanmode Aug 12, 2025
fb3ba70
negate fun value
gauravmanmode Aug 12, 2025
d55d8e0
test_many_algorithms add finite case for binding bounds, precision lo…
gauravmanmode Aug 12, 2025
1 change: 1 addition & 0 deletions .tools/envs/testenv-linux.yml
@@ -36,6 +36,7 @@ dependencies:
- fides==0.7.4 # dev, tests
- kaleido>=1.0 # dev, tests
- bayes_optim # dev, tests
- gradient_free_optimizers # dev, tests
- pandas-stubs # dev, tests
- types-cffi # dev, tests
- types-openpyxl # dev, tests
3 changes: 2 additions & 1 deletion .tools/envs/testenv-nevergrad.yml
@@ -33,12 +33,13 @@ dependencies:
- fides==0.7.4 # dev, tests
- kaleido>=1.0 # dev, tests
- bayes_optim # dev, tests
- gradient_free_optimizers # dev, tests
- pandas-stubs # dev, tests
- types-cffi # dev, tests
- types-openpyxl # dev, tests
- types-jinja2 # dev, tests
- sqlalchemy-stubs # dev, tests
- sphinxcontrib-mermaid # dev, tests, docs
- -e ../../
- bayesian_optimization==1.4.0
- nevergrad
- -e ../../
1 change: 1 addition & 0 deletions .tools/envs/testenv-numpy.yml
@@ -34,6 +34,7 @@ dependencies:
- fides==0.7.4 # dev, tests
- kaleido>=1.0 # dev, tests
- bayes_optim # dev, tests
- gradient_free_optimizers # dev, tests
- types-cffi # dev, tests
- types-openpyxl # dev, tests
- types-jinja2 # dev, tests
1 change: 1 addition & 0 deletions .tools/envs/testenv-others.yml
@@ -34,6 +34,7 @@ dependencies:
- fides==0.7.4 # dev, tests
- kaleido>=1.0 # dev, tests
- bayes_optim # dev, tests
- gradient_free_optimizers # dev, tests
- pandas-stubs # dev, tests
- types-cffi # dev, tests
- types-openpyxl # dev, tests
1 change: 1 addition & 0 deletions .tools/envs/testenv-pandas.yml
@@ -34,6 +34,7 @@ dependencies:
- fides==0.7.4 # dev, tests
- kaleido>=1.0 # dev, tests
- bayes_optim # dev, tests
- gradient_free_optimizers # dev, tests
- types-cffi # dev, tests
- types-openpyxl # dev, tests
- types-jinja2 # dev, tests
3 changes: 2 additions & 1 deletion .tools/envs/testenv-plotly.yml
@@ -33,11 +33,12 @@ dependencies:
- Py-BOBYQA # dev, tests
- fides==0.7.4 # dev, tests
- bayes_optim # dev, tests
- gradient_free_optimizers # dev, tests
- pandas-stubs # dev, tests
- types-cffi # dev, tests
- types-openpyxl # dev, tests
- types-jinja2 # dev, tests
- sqlalchemy-stubs # dev, tests
- sphinxcontrib-mermaid # dev, tests, docs
- -e ../../
- kaleido<0.3
- -e ../../
246 changes: 246 additions & 0 deletions docs/source/algorithms.md
@@ -4699,6 +4699,252 @@ package. To use it, you need to have
aligned structures and enhancing search performance in rotated coordinate systems. (Default:
`False`)
- **seed**: Seed for the random number generator for reproducibility.

```

## Gradient Free Optimizers

Optimizers from the
[gradient_free_optimizers](https://github.com/SimonBlanke/Gradient-Free-Optimizers?tab=readme-ov-file)
package are available in optimagic. To use them, you need to have
[gradient_free_optimizers](https://pypi.org/project/gradient_free_optimizers) installed.

```{eval-rst}
.. dropdown:: Common options across all optimizers

.. autoclass:: optimagic.optimizers.gradient_free_optimizers.GFOCommonOptions

```

```{eval-rst}
.. dropdown:: gfo_hillclimbing

**How to use this algorithm.**

.. code-block:: python

import numpy as np
import optimagic as om

om.minimize(
    fun=lambda x: x @ x,
    params=[1.0, 2.0, 3.0],
    algorithm=om.algos.gfo_hillclimbing(stopping_maxiter=1_000, ...),
    bounds=om.Bounds(lower=np.array([1, 1, 1]), upper=np.array([5, 5, 5])),
)

or using the string interface:

.. code-block:: python

om.minimize(
    fun=lambda x: x @ x,
    params=[1.0, 2.0, 3.0],
    algorithm="gfo_hillclimbing",
    algo_options={"stopping_maxiter": 1_000, ...},
    bounds=om.Bounds(lower=np.array([1, 1, 1]), upper=np.array([5, 5, 5])),
)

**Description and available options:**

.. autoclass:: optimagic.optimizers.gradient_free_optimizers.GFOHillClimbing

```
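For intuition, the core hill-climbing loop can be sketched in a few lines of plain Python. This is an illustration of the general technique only, not the gradient_free_optimizers implementation; the function name, signature, and defaults here are made up for the example.

```python
import random


def hill_climb(f, x0, step=0.1, max_iter=1_000, seed=0):
    """Greedy local search: sample a random neighbor of the current
    point and move there only if it improves the objective."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(max_iter):
        # Perturb each coordinate by a small random amount.
        candidate = [xi + rng.uniform(-step, step) for xi in x]
        f_candidate = f(candidate)
        if f_candidate < fx:  # accept only strict improvements
            x, fx = candidate, f_candidate
    return x, fx


# Minimize the sphere function starting from (1, 2, 3).
x_best, f_best = hill_climb(lambda v: sum(vi * vi for vi in v), [1.0, 2.0, 3.0])
```

Because only improving moves are accepted, the search can stall in a local optimum; the stochastic, repulsing, and random-restart variants below relax this greedy acceptance in different ways.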

```{eval-rst}
.. dropdown:: gfo_stochastichillclimbing

**How to use this algorithm.**

.. code-block:: python

import numpy as np
import optimagic as om

om.minimize(
    fun=lambda x: x @ x,
    params=[1.0, 2.0, 3.0],
    algorithm=om.algos.gfo_stochastichillclimbing(stopping_maxiter=1_000, ...),
    bounds=om.Bounds(lower=np.array([1, 1, 1]), upper=np.array([5, 5, 5])),
)

or using the string interface:

.. code-block:: python

om.minimize(
    fun=lambda x: x @ x,
    params=[1.0, 2.0, 3.0],
    algorithm="gfo_stochastichillclimbing",
    algo_options={"stopping_maxiter": 1_000, ...},
    bounds=om.Bounds(lower=np.array([1, 1, 1]), upper=np.array([5, 5, 5])),
)

**Description and available options:**

.. autoclass:: optimagic.optimizers.gradient_free_optimizers.GFOStochasticHillClimbing

```

```{eval-rst}
.. dropdown:: gfo_repulsinghillclimbing

**How to use this algorithm.**

.. code-block:: python

import numpy as np
import optimagic as om

om.minimize(
    fun=lambda x: x @ x,
    params=[1.0, 2.0, 3.0],
    algorithm=om.algos.gfo_repulsinghillclimbing(stopping_maxiter=1_000, ...),
    bounds=om.Bounds(lower=np.array([1, 1, 1]), upper=np.array([5, 5, 5])),
)

or using the string interface:

.. code-block:: python

om.minimize(
    fun=lambda x: x @ x,
    params=[1.0, 2.0, 3.0],
    algorithm="gfo_repulsinghillclimbing",
    algo_options={"stopping_maxiter": 1_000, ...},
    bounds=om.Bounds(lower=np.array([1, 1, 1]), upper=np.array([5, 5, 5])),
)

**Description and available options:**

.. autoclass:: optimagic.optimizers.gradient_free_optimizers.GFORepulsingHillClimbing

```

```{eval-rst}
.. dropdown:: gfo_randomrestarthillclimbing

**How to use this algorithm.**

.. code-block:: python

import numpy as np
import optimagic as om

om.minimize(
    fun=lambda x: x @ x,
    params=[1.0, 2.0, 3.0],
    algorithm=om.algos.gfo_randomrestarthillclimbing(stopping_maxiter=1_000, ...),
    bounds=om.Bounds(lower=np.array([1, 1, 1]), upper=np.array([5, 5, 5])),
)

or using the string interface:

.. code-block:: python

om.minimize(
    fun=lambda x: x @ x,
    params=[1.0, 2.0, 3.0],
    algorithm="gfo_randomrestarthillclimbing",
    algo_options={"stopping_maxiter": 1_000, ...},
    bounds=om.Bounds(lower=np.array([1, 1, 1]), upper=np.array([5, 5, 5])),
)

**Description and available options:**

.. autoclass:: optimagic.optimizers.gradient_free_optimizers.GFORandomRestartHillClimbing

```

```{eval-rst}
.. dropdown:: gfo_simulatedannealing

**How to use this algorithm.**

.. code-block:: python

import numpy as np
import optimagic as om

om.minimize(
    fun=lambda x: x @ x,
    params=[1.0, 2.0, 3.0],
    algorithm=om.algos.gfo_simulatedannealing(stopping_maxiter=1_000, ...),
    bounds=om.Bounds(lower=np.array([1, 1, 1]), upper=np.array([5, 5, 5])),
)

or using the string interface:

.. code-block:: python

om.minimize(
    fun=lambda x: x @ x,
    params=[1.0, 2.0, 3.0],
    algorithm="gfo_simulatedannealing",
    algo_options={"stopping_maxiter": 1_000, ...},
    bounds=om.Bounds(lower=np.array([1, 1, 1]), upper=np.array([5, 5, 5])),
)

**Description and available options:**

.. autoclass:: optimagic.optimizers.gradient_free_optimizers.GFOSimulatedAnnealing

```
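Simulated annealing differs from hill climbing in that it sometimes accepts a worsening move, with a probability that shrinks as a temperature parameter cools. A minimal pure-Python sketch of this idea (illustration only, not the gradient_free_optimizers implementation; all names and defaults here are made up for the example):

```python
import math
import random


def simulated_annealing(f, x0, step=0.1, temp0=1.0, cooling=0.99,
                        max_iter=1_000, seed=0):
    """Accept improvements always; accept worse candidates with
    probability exp((fx - f_candidate) / temp), which decays as the
    temperature cools geometrically."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best, f_best = list(x), fx
    temp = temp0
    for _ in range(max_iter):
        candidate = [xi + rng.uniform(-step, step) for xi in x]
        f_candidate = f(candidate)
        if f_candidate < fx or rng.random() < math.exp((fx - f_candidate) / temp):
            x, fx = candidate, f_candidate
            if fx < f_best:  # track the best point ever visited
                best, f_best = list(x), fx
        temp *= cooling  # cool down: fewer uphill moves over time
    return best, f_best


best, f_best = simulated_annealing(
    lambda v: sum(vi * vi for vi in v), [1.0, 2.0, 3.0]
)
```

Early on, high temperature lets the search escape local optima; late on, it behaves like plain hill climbing.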

```{eval-rst}
.. dropdown:: gfo_downhillsimplex

**How to use this algorithm.**

.. code-block:: python

import numpy as np
import optimagic as om

om.minimize(
    fun=lambda x: x @ x,
    params=[1.0, 2.0, 3.0],
    algorithm=om.algos.gfo_downhillsimplex(stopping_maxiter=1_000, ...),
    bounds=om.Bounds(lower=np.array([1, 1, 1]), upper=np.array([5, 5, 5])),
)

or using the string interface:

.. code-block:: python

om.minimize(
    fun=lambda x: x @ x,
    params=[1.0, 2.0, 3.0],
    algorithm="gfo_downhillsimplex",
    algo_options={"stopping_maxiter": 1_000, ...},
    bounds=om.Bounds(lower=np.array([1, 1, 1]), upper=np.array([5, 5, 5])),
)

**Description and available options:**

.. autoclass:: optimagic.optimizers.gradient_free_optimizers.GFODownhillSimplex

```

```{eval-rst}
.. dropdown:: gfo_pso

**How to use this algorithm.**

.. code-block:: python

import numpy as np
import optimagic as om

om.minimize(
    fun=lambda x: x @ x,
    params=[1.0, 2.0, 3.0],
    algorithm=om.algos.gfo_pso(stopping_maxiter=1_000, ...),
    bounds=om.Bounds(lower=np.array([1, 1, 1]), upper=np.array([5, 5, 5])),
)

or using the string interface:

.. code-block:: python

om.minimize(
    fun=lambda x: x @ x,
    params=[1.0, 2.0, 3.0],
    algorithm="gfo_pso",
    algo_options={"stopping_maxiter": 1_000, ...},
    bounds=om.Bounds(lower=np.array([1, 1, 1]), upper=np.array([5, 5, 5])),
)

**Description and available options:**

.. autoclass:: optimagic.optimizers.gradient_free_optimizers.GFOParticleSwarmOptimization

```
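Particle swarm optimization maintains a population of candidate points that are pulled toward each particle's own best position and the swarm's global best. A self-contained pure-Python sketch of the standard velocity update (illustration only, not the gradient_free_optimizers implementation; all names, parameters, and defaults here are made up for the example):

```python
import random


def pso(f, lower, upper, n_particles=20, inertia=0.5, c1=1.5, c2=1.5,
        max_iter=200, seed=0):
    """Each velocity blends its previous value (inertia) with pulls
    toward the particle's personal best (c1) and the global best (c2)."""
    rng = random.Random(seed)
    dim = len(lower)
    # Initialize positions uniformly in the box, velocities at zero.
    pos = [[rng.uniform(lower[d], upper[d]) for d in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(max_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (inertia * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clip the new position to the box constraints.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lower[d]), upper[d])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val


# Sphere function on the box [1, 5]^3; the optimum sits at the corner (1, 1, 1).
gbest, gbest_val = pso(lambda v: sum(c * c for c in v),
                       [1.0, 1.0, 1.0], [5.0, 5.0, 5.0])
```

Unlike the single-point hill-climbing variants above, the swarm shares information between particles, which helps on multimodal landscapes.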

## References
1 change: 1 addition & 0 deletions environment.yml
@@ -48,6 +48,7 @@ dependencies:
- kaleido>=1.0 # dev, tests
- pre-commit>=4 # dev
- bayes_optim # dev, tests
- gradient_free_optimizers # dev, tests
- -e . # dev
# type stubs
- pandas-stubs # dev, tests
2 changes: 2 additions & 0 deletions pyproject.toml
@@ -381,5 +381,7 @@ module = [
"iminuit",
"nevergrad",
"yaml",
"gradient_free_optimizers",
"gradient_free_optimizers.optimizers.base_optimizer",
]
ignore_missing_imports = true