Releases · FluxML/Flux.jl
v0.13.12
Flux v0.13.12
Closed issues:
- Delta neural networks inference (#2129)
- [Bug] Embedding forward pass breaks for onehotbatch with multiple batch dimensions (#2160)
- MethodError: no method matching when training LSTMs even when loss function is working correctly (#2168)
- Type instability with Flux.update! when loss function involves extra arguments (#2175)
Merged pull requests:
- Un-deprecate `track_stats` for InstanceNorm (#2149) (@ToucheSir)
- Move `dropout` to NNlib (#2150) (@mcabbott)
- Use NNlib's `within_gradient` (#2152) (@mcabbott)
- Export `rand32` and friends (#2157) (@mcabbott) (see the sketch below)
- Remove piratical array conversion rule (#2167) (@ToucheSir)
- update: actions node 12 => node 16 (#2173) (@skyleaworlder)
- cuda 4.0 compat (#2177) (@CarloLucibello)
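A minimal sketch of the helpers exported by #2157, assuming `zeros32` is among the "friends" alongside `rand32`; the sizes are arbitrary:

```julia
using Flux

# These helpers build Float32 arrays directly, matching the element
# type Flux layers use by default.
W = rand32(3, 4)   # 3×4 Matrix{Float32}, uniform in [0, 1)
b = zeros32(3)     # 3-element Vector{Float32} of zeros
eltype(W)          # Float32
```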
v0.13.11
Flux v0.13.11
Closed issues:
- Deprecate `track_stats=true` for `GroupNorm` and `InstanceNorm` (#2006)
- `cpu(x)` errors for `x isa CuArray{<:CartesianIndex}` (#2116)
- Constructing a Chain from a dictionary (#2142)
- Method error when using `Flux.setup` with `Embedding` layer (#2144)
- Method Error when using Flux.withgradient (#2148)
Merged pull requests:
- fix cpu(x) for immutable arrays (#2117) (@CarloLucibello)
- Fix two bugs re `setup` (#2145) (@mcabbott) (see the sketch below)
- CompatHelper: bump compat for MLUtils to 0.4, (keep existing compat) (#2147) (@github-actions[bot])
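A short sketch of the `Flux.setup` + `Embedding` combination addressed by #2144/#2145, assuming the `in => out` layer constructors of recent 0.13 releases; the sizes and learning rate are arbitrary:

```julia
using Flux

# Build optimiser state for a model containing an Embedding layer,
# the combination that previously raised a MethodError (#2144).
model = Chain(Embedding(10 => 4), Dense(4 => 2))
opt_state = Flux.setup(Adam(1e-3), model)

x = rand(1:10, 8)                              # a batch of 8 token indices
grads = Flux.gradient(m -> sum(m(x)), model)   # explicit-mode gradient
Flux.update!(opt_state, model, grads[1])
```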
v0.13.10
Flux v0.13.10
Closed issues:
- remove Bors (#1843)
- Only generate and upload coverage for one matrix entry (#1939)
- [Discussion]: Revamped Getting Started guide (#2012)
- Using users provided weight matrix to build LSTM layers (#2130)
Merged pull requests:
- Re-write training docs (#2114) (@mcabbott)
- Move doc sections to "guide" + "reference" (#2115) (@mcabbott)
- Allow ForwardDiff in BatchNorm's track_stats (#2127) (@mcabbott)
- Fix last block in quickstart.md (#2131) (@simonschnake)
- Delete bors.toml (#2133) (@CarloLucibello)
- Docs for `onecold` (#2134) (@nathanielvirgo) (see the sketch below)
- [ISSUE 1939] Update workflow, to only generate coverage for a specific entry (#2136) (@skyleaworlder)
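For reference, a quick sketch of the `onecold` behaviour documented in #2134 (Flux re-exports it from OneHotArrays):

```julia
using Flux

onecold([0.1, 0.9, 0.0])                 # 2: the index of the largest entry
onecold([0.1, 0.9, 0.0], [:a, :b, :c])   # :b: the corresponding label
```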
v0.13.9
Flux v0.13.9
Closed issues:
- Iteration over `params(m)` in explicit mode gives no gradient (#2091)
- `Flux.Optimise.update!` updating grads instead of params? (#2121)
- Flux.reset! triggers a BoundsError (#2124)
Merged pull requests:
- Remove `train!` from quickstart example (#2110) (@mcabbott)
- Re-organise "built-in layers" section (#2112) (@mcabbott)
- Narrower version of `@non_differentiable params` (#2118) (@mcabbott)
- allow non-tuple data in the new train! (#2119) (@CarloLucibello) (see the sketch below)
- fix train! test (#2123) (@CarloLucibello)
- Move 5 tutorials from fluxml.github.io (#2125) (@mcabbott)
- Remove Flux.Data module (#2126) (@mcabbott)
- CompatHelper: bump compat for Functors to 0.4, (keep existing compat) (#2128) (@github-actions[bot])
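A sketch of the explicit-mode `train!` that #2119 extends; the model, data, and loss here are placeholders:

```julia
using Flux

model = Dense(2 => 1)
opt_state = Flux.setup(Descent(0.1), model)
data = [(randn(Float32, 2, 8), randn(Float32, 1, 8)) for _ in 1:10]

# New-style train!: the loss receives the model plus one batch of data.
Flux.train!(model, data, opt_state) do m, x, y
    Flux.Losses.mse(m(x), y)
end
```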
v0.13.8
v0.13.7
Flux v0.13.7
Closed issues:
- DimensionMismatch("array could not be broadcast to match destination") (#1457)
- Warn on `NaN` loss (#1981)
- Make `create_bias` a public API? (#2049)
- Make `rng_from_array` non-differentiable (#2062)
- `@autosize` does not work with semi-colon separated kwargs (#2086)
- early_stopping does not work as expected (#2089)
Merged pull requests:
- Documentation headings & sections (#2056) (@mcabbott)
- Add a dark mode version of logo (#2063) (@Saransh-cpp)
- Fix a few crossrefs + update Zygote's page (#2064) (@Saransh-cpp)
- Make `rng_from_array` non differentiable (#2065) (@Saransh-cpp)
- Add an example to the readme? (#2067) (@mcabbott)
- Add a quick start example, and change some headings (#2069) (@mcabbott)
- Stop training on Inf/NaN loss (#2070) (@mcabbott)
- Export `Embedding` (#2072) (@mcognetta)
- Relax `RNN`/`LSTM`/`GRUCell` internal matrix type restrictions (#2073) (@mcognetta)
- Finish docs for #2073 (#2075) (@mcognetta)
- Add `@autosize` (#2078) (@mcabbott) (see the sketch below)
- Back to create_bias (#2081) (@Saransh-cpp)
- Simplify `Embedding` (#2084) (@mcabbott)
- Fix `|> gpu` bug in `@autosize` (#2085) (@mcabbott)
- Fix #2086 re `@autosize` (#2087) (@mcabbott)
- Use the standard Documenter.jl local redirect (#2093) (@ChrisRackauckas)
- CompatHelper: bump compat for MLUtils to 0.3, (keep existing compat) (#2095) (@github-actions[bot])
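The headline addition here is `@autosize` (#2078); a minimal sketch, with the input size and layer widths chosen arbitrarily:

```julia
using Flux

# `@autosize` infers each `_` from the input size given up front,
# here a batch of 32 grayscale 28×28 images.
model = @autosize (28, 28, 1, 32) Chain(
    Flux.flatten,
    Dense(_ => 64, relu),
    Dense(64 => 10),
)
```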
v0.13.6
Flux v0.13.6
Closed issues:
- OneHotArrays.jl? (#1544)
- [Discussion]: doctests, docstrings, documentation manual, and unclear internal API (for newcomers) (#1990)
- [Bug]: Swapped `alpha` and `beta` in `tversky` loss? (#1993)
- [Discussion]: documentation for `@reexport`ed and `import`ed (or `using`) packages (#2038)
- Pull request #2007 causes Flux.params() calls to not get cached (#2040)
- v0.13.5 breaks Flux.train! on a custom type (#2045)
- Bounds error for Flux.reset! in loss function (#2057)
Merged pull requests:
- Miscellaneous docstring additions and fixes (#1998) (@Saransh-cpp)
- Use muladd for LSTM cell matmuls (#2023) (@ToucheSir)
- using OneHotArrays (#2025) (@mcabbott) (see the sketch below)
- mark `stop`, `skip`, `@epochs` as deprecated (#2027) (@mcabbott)
- Fix the last remaining 404 errors (#2035) (@Saransh-cpp)
- Add ability to filter `loadmodel!` recursion (#2041) (@darsnack)
- Mark `track_stats=true` as deprecated (#2042) (@akahard2dj)
- Better docs for reexported packages (#2046) (@Saransh-cpp)
- Typo in BatchNorm number of channels assertion (#2047) (@Marcovela)
- Add extra test for params (#2051) (@christiangnrd)
- Restore some private functions (#2052) (@ToucheSir)
- Make params non-differentiable (Closes #2040 & #2048) (#2054) (@christiangnrd)
- Leftover changes from #2046 (#2055) (@Saransh-cpp)
- `unthunk` in some rules (#2058) (@mcabbott)
- Fix the failing CI build (#2059) (@christiangnrd)
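A brief sketch of the one-hot encoding whose implementation moved out to OneHotArrays.jl in #2025; the user-facing calls are unchanged:

```julia
using Flux  # one-hot types now come from OneHotArrays.jl

labels = [:cat, :dog, :cat]
y = Flux.onehotbatch(labels, [:cat, :dog])  # 2×3 OneHotMatrix
size(y)                                     # (2, 3)
```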
v0.13.5
Flux v0.13.5
Closed issues:
- PINN loss doesn't converge to 0? (#1966)
- Simple chaining compatibility check (#2017)
- v0.12.10 => v0.13.4 breaks `Dropout` on CUDA (#2018)
- Wrong rrule dispatch for Array constructor (#2033)
Merged pull requests:
- Get rid of documentation warnings and 404 pages (#1987) (@Saransh-cpp)
- use Functors 0.3 in Flux (#2007) (@mcabbott)
- Typo (#2020) (@trigaten)
- Add `NNlib.grid_sample` (#2022) (@scheidan) (see the sketch below)
- Remove CTC loss (moved to NNlib) (#2024) (@mcabbott)
- Fix typo in docs (#2030) (@svilupp)
- fix array constructor rrule (#2034) (@chengchingwen)
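A sketch of `NNlib.grid_sample` from #2022, assuming the W×H×C×N layout and `padding_mode` keyword of NNlib's sampler; the sizes are arbitrary:

```julia
using NNlib

# Bilinearly sample `input` at the normalised (in [-1, 1]) coordinates
# stored in `grid`; here every coordinate points at the centre.
input = rand(Float32, 4, 4, 1, 1)    # W×H×C×N
grid  = zeros(Float32, 2, 3, 3, 1)   # 2×W_out×H_out×N
out   = NNlib.grid_sample(input, grid; padding_mode = :zeros)
size(out)                            # (3, 3, 1, 1)
```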
v0.13.4
Flux v0.13.4
Closed issues:
- Repository: on the addition of loss/distance functions and other niceties to Flux (#826)
- `trainable` for BatchNorm stops parameters from being saved and loaded (#1027)
- Non-descriptive arg in `Conv`: why `filter` instead of `size`? (#1212)
- Ada or ADA (#1949)
- Make `gpu(::DataLoader)` work or error loudly if it doesn't (#1974)
- Conversion error when loading a model with v0.13+ with BSON (#1984)
- GPU broadcasting error when using softmax on GPU (#1994)
- Error when using CUDA (#1997)
- type cannot be referred to with structured model function (#2000)
- [Broken Documentation] Dense(1 => 1) (#2001)
Merged pull requests:
- Fix slight typos in `LayerNorm` docs (#1975) (@theabhirath)
- Piratical errors for two mistakes (#1976) (@mcabbott)
- Show `using Flux` before BSON `@load` (#1977) (@JeffFessler)
- Update docstrings of `basic.jl` and `conv.jl` (#1978) (@Saransh-cpp)
- Added Common GPU Workflows in Docs (#1980) (@lfenzo)
- `PairwiseFusion` layer, take 2 (#1983) (@theabhirath)
- deprecations.jl: depwarn -> Base.depwarn (#1985) (@skleinbo)
- Update docstrings in `upsample.jl`, `recurrent.jl`, and `normalise.jl` (#1995) (@Saransh-cpp)
- replace ADAM with Adam and its variants thereof (#1996) (@Karthik-d-k) (see the sketch below)
- Make `Dropout` docs a little more user friendly (#2014) (@theabhirath)
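The optimiser rename from #1996 in one line; a sketch, assuming the old spellings remained as deprecated aliases in the 0.13 series:

```julia
using Flux

# CamelCase optimiser names: ADAM → Adam, ADAGrad → AdaGrad, NADAM → NAdam, …
opt = Adam(1e-3)   # formerly ADAM(1e-3)
```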
v0.13.3
Flux v0.13.3
Merged pull requests: