
Commit e2b7cea

Merge pull request #985 from SciML/julia-formatter-update
Update code formatting with JuliaFormatter
2 parents: 3098f2b + c321eee

File tree: 23 files changed (+442, -373 lines)

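Since this commit is a repository-wide JuliaFormatter pass, it can in principle be reproduced locally. A minimal sketch, assuming JuliaFormatter is installed and that the repository formats with the SciML style (the style choice is an assumption, not stated in the commit):

```julia
# Hypothetical reproduction of this formatting pass.
using JuliaFormatter

# Format every Julia file under the repository root in place;
# returns true if all files were already properly formatted.
format(".", SciMLStyle())
```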

docs/pages.jl

Lines changed: 1 addition & 1 deletion

````diff
@@ -24,7 +24,7 @@ pages = ["index.md",
         "API/modelingtoolkit.md",
         "API/FAQ.md"
     ],
-    "Optimizer Packages" => [
+    "Optimizer Packages" => [
        "BlackBoxOptim.jl" => "optimization_packages/blackboxoptim.md",
        "CMAEvolutionStrategy.jl" => "optimization_packages/cmaevolutionstrategy.md",
        "Evolutionary.jl" => "optimization_packages/evolutionary.md",
````

(As in most hunks of this formatting commit, the removed and added lines differ only in whitespace.)

docs/src/getting_started.md

Lines changed: 3 additions & 3 deletions

````diff
@@ -37,7 +37,7 @@ Tada! That's how you do it. Now let's dive in a little more into what each part

 ## Understanding the Solution Object

-The solution object is a `SciMLBase.AbstractNoTimeSolution`, and thus it follows the
+The solution object is a `SciMLBase.AbstractNoTimeSolution`, and thus it follows the
 [SciMLBase Solution Interface for non-timeseries objects](https://docs.sciml.ai/SciMLBase/stable/interfaces/Solutions/) and is documented at the [solution type page](@ref solution).
 However, for simplicity let's show a bit of it in action.

@@ -61,13 +61,13 @@ rosenbrock(sol.u, p)
 sol.objective
 ```

-The `sol.retcode` gives us more information about the solution process.
+The `sol.retcode` gives us more information about the solution process.

 ```@example intro
 sol.retcode
 ```

-Here it says `ReturnCode.Success` which means that the solutuion successfully solved. We can learn more about the different return codes at
+Here it says `ReturnCode.Success` which means that the solutuion successfully solved. We can learn more about the different return codes at
 [the ReturnCode part of the SciMLBase documentation](https://docs.sciml.ai/SciMLBase/stable/interfaces/Solutions/#retcodes).

 If we are interested about some of the statistics of the solving process, for example to help choose a better solver, we can investigate the `sol.stats`
````
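The hunks above touch only trailing whitespace, but the surrounding tutorial walks through the solution object's fields. A minimal sketch of those fields in action, assuming the tutorial's Rosenbrock setup and an Optim.jl solver (the solver choice is illustrative):

```julia
using Optimization, OptimizationOptimJL

rosenbrock(u, p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2
f = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
prob = OptimizationProblem(f, zeros(2), [1.0, 100.0])
sol = solve(prob, BFGS())

sol.u          # minimizer, approximately [1.0, 1.0]
sol.objective  # objective value at the minimizer, approximately 0.0
sol.retcode    # ReturnCode.Success when the solve went well
sol.stats      # iteration/evaluation counts, useful when comparing solvers
```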

docs/src/optimization_packages/pycma.md

Lines changed: 5 additions & 5 deletions

````diff
@@ -31,9 +31,9 @@ sol = solve(prob, PyCMAOpt())

 ## Passing solver-specific options

-Any keyword that `Optimization.jl` does not interpret is forwarded directly to PyCMA.
+Any keyword that `Optimization.jl` does not interpret is forwarded directly to PyCMA.

-In the event an `Optimization.jl` keyword overlaps with a `PyCMA` keyword, the `Optimization.jl` keyword takes precedence.
+In the event an `Optimization.jl` keyword overlaps with a `PyCMA` keyword, the `Optimization.jl` keyword takes precedence.

 An exhaustive list of keyword arguments can be found by running the following python script:

@@ -44,6 +44,7 @@ print(options)
 ```

 An example passing the `PyCMA` keywords "verbose" and "seed":
+
 ```julia
 sol = solve(prob, PyCMA(), verbose = -9, seed = 42)
 ```
@@ -54,10 +55,9 @@ The original Python result object is attached to the solution in the `original`

 ```julia
 sol = solve(prob, PyCMAOpt())
-println(sol.original)
+println(sol.original)
 ```

 ## Contributing

-Bug reports and feature requests are welcome in the [Optimization.jl](https://github.com/SciML/Optimization.jl) issue tracker. Pull requests that improve either the Julia wrapper or the documentation are highly appreciated.
-
+Bug reports and feature requests are welcome in the [Optimization.jl](https://github.com/SciML/Optimization.jl) issue tracker. Pull requests that improve either the Julia wrapper or the documentation are highly appreciated.
````
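As the page states, unrecognized keywords are forwarded to PyCMA while overlapping Optimization.jl keywords win. A minimal sketch of that precedence rule, reusing the page's `prob` (note the page itself mixes the spellings `PyCMA()` and `PyCMAOpt()`; `PyCMAOpt()` is used here):

```julia
# `maxiters` is interpreted by Optimization.jl, so it takes precedence over
# any overlapping PyCMA option; `verbose` and `seed` are unknown to
# Optimization.jl and are forwarded to PyCMA unchanged.
sol = solve(prob, PyCMAOpt(); maxiters = 500, verbose = -9, seed = 42)
sol.original  # the underlying PyCMA result object
```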

docs/src/optimization_packages/scipy.md

Lines changed: 24 additions & 24 deletions

````diff
@@ -3,6 +3,7 @@
 [`SciPy`](https://scipy.org/) is a mature Python library that offers a rich family of optimization, root–finding and linear‐programming algorithms. `OptimizationSciPy.jl` gives access to these routines through the unified `Optimization.jl` interface just like any native Julia optimizer.

 !!! note
+
     `OptimizationSciPy.jl` relies on [`PythonCall`](https://github.com/cjdoris/PythonCall.jl). A minimal Python distribution containing SciPy will be installed automatically on first use, so no manual Python set-up is required.

 ## Installation: OptimizationSciPy.jl
@@ -20,37 +21,37 @@ Below is a catalogue of the solver families exposed by `OptimizationSciPy.jl` to

 #### Derivative-Free

-* `ScipyNelderMead()` – Simplex Nelder–Mead algorithm
-* `ScipyPowell()` – Powell search along conjugate directions
-* `ScipyCOBYLA()` – Linear approximation of constraints (supports nonlinear constraints)
+  - `ScipyNelderMead()` – Simplex Nelder–Mead algorithm
+  - `ScipyPowell()` – Powell search along conjugate directions
+  - `ScipyCOBYLA()` – Linear approximation of constraints (supports nonlinear constraints)

 #### Gradient-Based

-* `ScipyCG()` – Non-linear conjugate gradient
-* `ScipyBFGS()` – Quasi-Newton BFGS
-* `ScipyLBFGSB()` – Limited-memory BFGS with simple bounds
-* `ScipyNewtonCG()` – Newton-conjugate gradient (requires Hessian-vector products)
-* `ScipyTNC()` – Truncated Newton with bounds
-* `ScipySLSQP()` – Sequential least-squares programming (supports constraints)
-* `ScipyTrustConstr()` – Trust-region method for non-linear constraints
+  - `ScipyCG()` – Non-linear conjugate gradient
+  - `ScipyBFGS()` – Quasi-Newton BFGS
+  - `ScipyLBFGSB()` – Limited-memory BFGS with simple bounds
+  - `ScipyNewtonCG()` – Newton-conjugate gradient (requires Hessian-vector products)
+  - `ScipyTNC()` – Truncated Newton with bounds
+  - `ScipySLSQP()` – Sequential least-squares programming (supports constraints)
+  - `ScipyTrustConstr()` – Trust-region method for non-linear constraints

 #### Hessian–Based / Trust-Region

-* `ScipyDogleg()`, `ScipyTrustNCG()`, `ScipyTrustKrylov()`, `ScipyTrustExact()` – Trust-region algorithms that optionally use or build Hessian information
+  - `ScipyDogleg()`, `ScipyTrustNCG()`, `ScipyTrustKrylov()`, `ScipyTrustExact()` – Trust-region algorithms that optionally use or build Hessian information

 ### Global Optimizer

-* `ScipyDifferentialEvolution()` – Differential evolution (requires bounds)
-* `ScipyBasinhopping()` – Basin-hopping with local search
-* `ScipyDualAnnealing()` – Dual annealing simulated annealing
-* `ScipyShgo()` – Simplicial homology global optimisation (supports constraints)
-* `ScipyDirect()` – Deterministic `DIRECT` algorithm (requires bounds)
-* `ScipyBrute()` – Brute-force grid search (requires bounds)
+  - `ScipyDifferentialEvolution()` – Differential evolution (requires bounds)
+  - `ScipyBasinhopping()` – Basin-hopping with local search
+  - `ScipyDualAnnealing()` – Dual annealing simulated annealing
+  - `ScipyShgo()` – Simplicial homology global optimisation (supports constraints)
+  - `ScipyDirect()` – Deterministic `DIRECT` algorithm (requires bounds)
+  - `ScipyBrute()` – Brute-force grid search (requires bounds)

 ### Linear & Mixed-Integer Programming

-* `ScipyLinprog("highs")` – LP solvers from the HiGHS project and legacy interior-point/simplex methods
-* `ScipyMilp()` – Mixed-integer linear programming via HiGHS branch-and-bound
+  - `ScipyLinprog("highs")` – LP solvers from the HiGHS project and legacy interior-point/simplex methods
+  - `ScipyMilp()` – Mixed-integer linear programming via HiGHS branch-and-bound

 ### Root Finding & Non-Linear Least Squares *(experimental)*

@@ -65,9 +66,9 @@ using Optimization, OptimizationSciPy

 rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
 x0 = zeros(2)
-p = [1.0, 100.0]
+p = [1.0, 100.0]

-f = OptimizationFunction(rosenbrock, Optimization.AutoZygote())
+f = OptimizationFunction(rosenbrock, Optimization.AutoZygote())
 prob = OptimizationProblem(f, x0, p)

 sol = solve(prob, ScipyBFGS())
@@ -85,7 +86,7 @@ obj(x, p) = (x[1] + x[2] - 1)^2
 # Single non-linear constraint: x₁² + x₂² ≈ 1 (with small tolerance)
 cons(res, x, p) = (res .= [x[1]^2 + x[2]^2 - 1.0])

-x0 = [0.5, 0.5]
+x0 = [0.5, 0.5]
 prob = OptimizationProblem(
     OptimizationFunction(obj; cons = cons),
     x0, nothing, lcons = [-1e-6], ucons = [1e-6]) # Small tolerance instead of exact equality
@@ -129,5 +130,4 @@ If SciPy raises an error it is re-thrown as a Julia `ErrorException` carrying th

 ## Contributing

-Bug reports and feature requests are welcome in the [Optimization.jl](https://github.com/SciML/Optimization.jl) issue tracker. Pull requests that improve either the Julia wrapper or the documentation are highly appreciated.
-
+Bug reports and feature requests are welcome in the [Optimization.jl](https://github.com/SciML/Optimization.jl) issue tracker. Pull requests that improve either the Julia wrapper or the documentation are highly appreciated.
````
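The constrained example in the hunk above stops at the problem definition. A sketch completing it into a runnable solve, under two assumptions not in the diff: an AD backend is attached so gradients are available, and `ScipySLSQP()` (listed above as constraint-capable) is the solver:

```julia
using Optimization, OptimizationSciPy

obj(x, p) = (x[1] + x[2] - 1)^2
# Single nonlinear constraint: x₁² + x₂² ≈ 1, enforced with a small tolerance
cons(res, x, p) = (res .= [x[1]^2 + x[2]^2 - 1.0])

x0 = [0.5, 0.5]
prob = OptimizationProblem(
    OptimizationFunction(obj, Optimization.AutoForwardDiff(); cons = cons),
    x0, nothing, lcons = [-1e-6], ucons = [1e-6])

sol = solve(prob, ScipySLSQP())  # solver choice is illustrative
```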

docs/src/tutorials/reusage_interface.md

Lines changed: 13 additions & 13 deletions

````diff
@@ -1,6 +1,5 @@
 # Optimization Problem Reusage and Caching Interface

-
 ## Reusing Optimization Caches with `reinit!`

 The `reinit!` function allows you to efficiently reuse an existing optimization cache with new parameters or initial values. This is particularly useful when solving similar optimization problems repeatedly with different parameter values, as it avoids the overhead of creating a new cache from scratch.
@@ -30,12 +29,12 @@ sol2 = Optimization.solve!(cache)

 The `reinit!` function supports updating various fields of the optimization cache:

-- `u0`: New initial values for the optimization variables
-- `p`: New parameter values
-- `lb`: New lower bounds (if applicable)
-- `ub`: New upper bounds (if applicable)
-- `lcons`: New lower bounds for constraints (if applicable)
-- `ucons`: New upper bounds for constraints (if applicable)
+  - `u0`: New initial values for the optimization variables
+  - `p`: New parameter values
+  - `lb`: New lower bounds (if applicable)
+  - `ub`: New upper bounds (if applicable)
+  - `lcons`: New lower bounds for constraints (if applicable)
+  - `ucons`: New upper bounds for constraints (if applicable)

 ### Example: Parameter Sweep

@@ -75,12 +74,13 @@ end
 ### Performance Benefits

 Using `reinit!` is more efficient than creating a new problem and cache for each parameter value, especially when:
-- The optimization algorithm maintains internal state that can be reused
-- The problem structure remains the same (only parameter values change)
+
+  - The optimization algorithm maintains internal state that can be reused
+  - The problem structure remains the same (only parameter values change)

 ### Notes

-- The `reinit!` function modifies the cache in-place and returns it for convenience
-- Not all fields need to be specified; only provide the ones you want to update
-- The function is particularly useful in iterative algorithms, parameter estimation, and when solving families of related optimization problems
-- For creating a new problem with different parameters (rather than modifying a cache), use `remake` on the `OptimizationProblem` instead
+  - The `reinit!` function modifies the cache in-place and returns it for convenience
+  - Not all fields need to be specified; only provide the ones you want to update
+  - The function is particularly useful in iterative algorithms, parameter estimation, and when solving families of related optimization problems
+  - For creating a new problem with different parameters (rather than modifying a cache), use `remake` on the `OptimizationProblem` instead
````
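The page's `reinit!` workflow, condensed into one runnable sketch (problem and solver are illustrative; `init`/`solve!` and the `p` keyword come straight from the page):

```julia
using Optimization, OptimizationOptimJL

rosenbrock(u, p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2
f = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
prob = OptimizationProblem(f, zeros(2), [1.0, 100.0])

# Build the cache once, then sweep parameter values by reinitializing it
# in place instead of constructing a new problem and cache each time.
cache = Optimization.init(prob, BFGS())
for a in 1.0:0.5:3.0
    cache = Optimization.reinit!(cache; p = [a, 100.0])
    sol = Optimization.solve!(cache)
    println("p[1] = $a  =>  minimizer = ", sol.u)
end
```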

lib/OptimizationBBO/src/OptimizationBBO.jl

Lines changed: 2 additions & 2 deletions

````diff
@@ -30,12 +30,12 @@ function decompose_trace(opt::BlackBoxOptim.OptRunController, progress)
     if iszero(max_time)
         # we stop at either convergence or max_steps
         n_steps = BlackBoxOptim.num_steps(opt)
-        Base.@logmsg(Base.LogLevel(-1), msg, progress=n_steps / maxiters,
+        Base.@logmsg(Base.LogLevel(-1), msg, progress=n_steps/maxiters,
             _id=:OptimizationBBO)
     else
         # we stop at either convergence or max_time
         elapsed = BlackBoxOptim.elapsed_time(opt)
-        Base.@logmsg(Base.LogLevel(-1), msg, progress=elapsed / max_time,
+        Base.@logmsg(Base.LogLevel(-1), msg, progress=elapsed/max_time,
             _id=:OptimizationBBO)
     end
 end
````
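The only change here is JuliaFormatter tightening the spacing around `/` inside the macro call. For context, a standalone sketch of the logging pattern itself (the helper name is hypothetical): a sub-Info record whose `progress` key carries the completed fraction, which progress-aware loggers such as TerminalLoggers.jl render as a bar while ordinary loggers filter it out:

```julia
using Logging

# Hypothetical standalone version of the pattern above.
function report_progress(n_steps, maxiters; msg = "optimizing")
    Base.@logmsg(Base.LogLevel(-1), msg, progress=n_steps/maxiters,
        _id=:OptimizationBBO)
end

report_progress(50, 100)  # emits a log record with progress = 0.5
```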

lib/OptimizationGCMAES/src/OptimizationGCMAES.jl

Lines changed: 4 additions & 2 deletions

````diff
@@ -106,13 +106,15 @@ function SciMLBase.__solve(cache::OptimizationCache{

     t0 = time()
     if cache.sense === Optimization.MaxSense
-        opt_xmin, opt_fmin, opt_ret = GCMAES.maximize(
+        opt_xmin, opt_fmin,
+        opt_ret = GCMAES.maximize(
             isnothing(cache.f.grad) ? _loss :
             (_loss, g), cache.u0,
             cache.solver_args.σ0, cache.lb,
             cache.ub; opt_args...)
     else
-        opt_xmin, opt_fmin, opt_ret = GCMAES.minimize(
+        opt_xmin, opt_fmin,
+        opt_ret = GCMAES.minimize(
             isnothing(cache.f.grad) ? _loss :
             (_loss, g), cache.u0,
             cache.solver_args.σ0, cache.lb,
````

lib/OptimizationMOI/src/nlp.jl

Lines changed: 3 additions & 0 deletions

````diff
@@ -362,6 +362,7 @@ function MOI.hessian_lagrangian_structure(evaluator::MOIOptimizationNLPEvaluator
     # Performance optimization. If both are dense, no need to repeat
     else
         for col in 1:N, row in 1:col
+
             push!(inds, (row, col))
         end
     end
@@ -399,6 +400,7 @@ function MOI.eval_hessian_lagrangian(evaluator::MOIOptimizationNLPEvaluator{T},
         end
     else
         for i in 1:size(H, 1), j in 1:i
+
             k += 1
             h[k] = σ * H[i, j]
         end
@@ -428,6 +430,7 @@ function MOI.eval_hessian_lagrangian(evaluator::MOIOptimizationNLPEvaluator{T},
     # `nnz_objective` if the objective is sprase, and `0` otherwise.
     k = sparse_objective ? nnz_objective : 0
     for i in 1:size(Hi, 1), j in 1:i
+
         k += 1
         h[k] += μi * Hi[i, j]
     end
````
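The three loops being touched enumerate one triangle of a symmetric Hessian: the structure pass records index pairs with `row <= col`, and the evaluation passes fill a flat vector in matching order (symmetry makes the row-wise and column-wise triangles agree). A small self-contained sketch of that packing, with hypothetical names:

```julia
# Hypothetical sketch of the triangular packing used in the hunks above.
function pack_triangle(H::AbstractMatrix, σ::Real)
    N = size(H, 1)
    inds = Tuple{Int, Int}[]
    for col in 1:N, row in 1:col
        push!(inds, (row, col))      # structure: upper triangle, by column
    end
    h = zeros(length(inds))
    k = 0
    for i in 1:N, j in 1:i
        k += 1
        h[k] = σ * H[i, j]           # values: lower triangle, by row
    end
    return inds, h
end

H = [2.0 1.0 0.5; 1.0 3.0 0.2; 0.5 0.2 4.0]  # symmetric example
inds, h = pack_triangle(H, 1.0)               # 6 entries for a 3×3 matrix
```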

lib/OptimizationMetaheuristics/test/runtests.jl

Lines changed: 2 additions & 1 deletion

````diff
@@ -208,7 +208,8 @@ Random.seed!(42)
 for alg in algs
     alg_name = string(typeof(alg))
     @testset "$alg_name on $prob_name" begin
-        multi_obj_fun = MultiObjectiveOptimizationFunction((x, p) -> prob_func(x))
+        multi_obj_fun = MultiObjectiveOptimizationFunction((
+            x, p) -> prob_func(x))
         prob = OptimizationProblem(multi_obj_fun, lb; lb = lb, ub = ub)
         if (alg_name == "Metaheuristics.Algorithm{CCMO{NSGA2}}")
             sol = solve(prob, alg)
````

lib/OptimizationNLopt/test/runtests.jl

Lines changed: 5 additions & 5 deletions

````diff
@@ -106,20 +106,20 @@ using Test, Random
     x0_test = zeros(2)
     optprob = OptimizationFunction(rosenbrock, Optimization.AutoZygote())
     prob = OptimizationProblem(optprob, x0_test, _p)
-
+
     # Test with NLopt.Opt interface
     opt = NLopt.Opt(:LD_MMA, 2)
     # This should not throw an error - the PR fixed the UndefVarError
    sol = solve(prob, opt, dual_ftol_rel = 1e-6, maxiters = 100)
     @test sol.retcode ∈ [ReturnCode.Success, ReturnCode.MaxIters]
-
+
     # Test with direct algorithm interface
     sol = solve(prob, NLopt.LD_MMA(), dual_ftol_rel = 1e-5, maxiters = 100)
     @test sol.retcode ∈ [ReturnCode.Success, ReturnCode.MaxIters]
-
+
     # Verify it works with other solver options
-    sol = solve(prob, NLopt.LD_MMA(), dual_ftol_rel = 1e-4, ftol_rel = 1e-6,
-        xtol_rel = 1e-6, maxiters = 100)
+    sol = solve(prob, NLopt.LD_MMA(), dual_ftol_rel = 1e-4, ftol_rel = 1e-6,
+        xtol_rel = 1e-6, maxiters = 100)
     @test sol.retcode ∈ [ReturnCode.Success, ReturnCode.MaxIters]
 end
````
