```

For this function, we know that the sparsity pattern of the Jacobian is a
`Tridiagonal` matrix. However, if we didn't know the sparsity pattern for
the Jacobian, we could use the `sparsity!` function to automatically
detect it. We declare that `f` takes in a length-30 vector and outputs a
length-30 vector, and `sparsity!` returns a `Sparsity` object which we can
turn into a `SparseMatrixCSC`:

```julia
sparsity_pattern = sparsity!(f,output,input)
jac = Float64.(sparse(sparsity_pattern))
```

Now we call `matrix_colors` to get the color vector for that matrix:

```julia
colors = matrix_colors(jac)
```
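
The color vector can be understood without the package: two columns of the
Jacobian may share a color whenever they have no non-zero row in common. As a
rough illustration (a toy greedy scheme, not the package's actual coloring
algorithm), for a tridiagonal pattern this looks like:

```julia
# Toy greedy coloring for a tridiagonal pattern: columns j and k conflict
# (must receive different colors) exactly when they share a non-zero row,
# i.e. when |j - k| <= 2.
n = 30
colors = zeros(Int, n)
for j in 1:n
    taken = Set(colors[k] for k in max(j - 2, 1):j-1)  # colors of conflicting columns
    c = 1
    while c in taken
        c += 1
    end
    colors[j] = c  # smallest color not used by a conflicting column
end
```

The colors repeat as `1, 2, 3, 1, 2, 3, …`, so only 3 colors are needed no
matter how large `n` is.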

Since `maximum(colors)` is 3, this means that finite differencing can now
compute the Jacobian in just 4 `f`-evaluations:

```julia
DiffEqDiffTools.finite_difference_jacobian!(jac, f, rand(30), color=colors)
@show fcalls # 4
```
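
To see why `maximum(colors) + 1` evaluations suffice: all columns sharing a
color touch disjoint rows, so they can be perturbed simultaneously and the
directional differences decompressed back into individual Jacobian entries. A
minimal self-contained sketch (using a hand-written tridiagonal `f`, not the
package's implementation) is:

```julia
# Compressed finite differencing for a 6×6 tridiagonal Jacobian.
n = 6
f(x) = [(i == 1 ? 0.0 : x[i-1]) - 2x[i] + (i == n ? 0.0 : x[i+1]) for i in 1:n]
colors = [mod1(j, 3) for j in 1:n]   # columns j and j+3 never share a row
x = rand(n)
fx = f(x)                            # 1 base evaluation
ϵ = sqrt(eps())
J = zeros(n, n)
for c in 1:3                         # +1 evaluation per color = 4 total
    d = [colors[j] == c ? ϵ : 0.0 for j in 1:n]  # perturb the whole color group at once
    df = (f(x .+ d) .- fx) ./ ϵ
    for j in findall(==(c), colors), i in max(j - 1, 1):min(j + 1, n)
        J[i, j] = df[i]              # decompress: rows of column j are uncontested
    end
end
```

Since `f` here is linear, the recovered `J` matches the true tridiagonal
Jacobian (sub/superdiagonal 1, diagonal -2) up to floating point rounding.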

In addition, a faster forward-mode autodiff call can be utilized as well:

```julia
forwarddiff_color_jacobian!(jac, f, x, color = colors)
```

If one only needs to compute products, one can use the operators. For example,

## Documentation

### Automated Sparsity Detection

Automated sparsity detection is provided by the `sparsity!` function, whose
syntax is:

```julia
sparsity!(f, Y, X, args...; sparsity=Sparsity(length(X), length(Y)), verbose=true)
```

The arguments are:

- `f`: the function
- `Y`: the output array
- `X`: the input array
- `args`: trailing arguments to `f`. They are considered subject to change, unless wrapped as `Fixed(arg)`
- `sparsity`: (optional) the sparsity pattern
- `verbose`: (optional) whether to describe the paths taken by the sparsity detection

The function `f` is assumed to take arguments of the form `f(dx, x, args...)`.
`sparsity!` returns a `Sparsity` object which describes where the non-zeros
of the Jacobian occur. `sparse(::Sparsity)` transforms the pattern into
a sparse matrix.

This function utilizes non-standard interpretation, which we denote
combinatoric concolic analysis, to directly realize the sparsity pattern
from the program's AST. It requires that the function `f` is a Julia
function. It does not work numerically, meaning that it is not prone to
floating point error or cancellation. It allows for branching and will
automatically check all of the branches. However, a while loop of
indeterminate length which depends on the input argument is not allowed.

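The internals of the `Sparsity` type are not shown here, but conceptually a
sparsity pattern is just a set of `(row, column)` indices of structural
non-zeros. As a rough stand-in for what `sparse(::Sparsity)` yields (using
only the standard `SparseArrays` library, not the package's types), a
tridiagonal pattern can be materialized like this:

```julia
using SparseArrays

# Build the (row, column) index pairs of a 5×5 tridiagonal pattern,
# mimicking the information a sparsity pattern records.
n = 5
rows = Int[]; cols = Int[]
for j in 1:n, i in max(j - 1, 1):min(j + 1, n)
    push!(rows, i); push!(cols, j)
end
pattern = sparse(rows, cols, trues(length(rows)), n, n)  # Bool pattern matrix
jac = Float64.(pattern)  # storage for actual Jacobian values, structure fixed
```

The resulting `jac` is a `SparseMatrixCSC` whose stored entries are exactly
the positions where the Jacobian may be non-zero.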
### Matrix Coloring

Matrix coloring allows you to reduce the number of times finite differencing