Releases · marcpinet/neuralnetlib
neuralnetlib 3.3.7
- docs: remove useless comments
- fix(Transformer): too many things to tell
- feat: even more precise floating point for metrics and loss
- refactor: special tokens now passed via init for Transformer
- feat: enhance beam search and token prediction mechanisms
- docs: update readme
- fix(Transformer): vanishing gradient fix
- fix(Transformer): still on it (wip)
- fix(Transformer): another fix
- fix(Transformer): special token indices
- fix(Transformer): normalization IS the issue
- docs: update readme
- fix(Transformer): cross attention weights
- fix: LearningRateScheduler
- fix: LearningRateScheduler
- fix: normalization in data preparation
- fix: different vocab size for different tokenizations
- fix(PositionalEncoding): scaling
- fix(AddNorm): better normalization
- fix(TransformerEncoderLayer): huge improvements
- perf(SequenceCrossEntropy): add vectorization
- fix(Tokenizer+Transformer): tokenization alignment for special tokens
- fix(transformer): investigate and address gradient instability and explosion
- fix(sce): label smoothing
- refactor: gradient clipping
- fix(Transformer): gradient explosion
- fix(Transformer): tokens padding and max sequence
- test: tried with a better dataset
- fix(sce): y_pred treated as logits instead of probs
- fix(TransformerEncoderLayer): remove arbitrary scaling
- fix(Transformer): sce won't ignore sos and eos tokens
- fix: sce extending lossfunction
- fix(sce): softmax not necessary
- feat: add BLEU, ROUGE-L and ROUGE-N scores
- fix: validation data in fit method and shuffle in train_test_split
- docs: modifies example to use validation split and bleu score
- fix(PositionalEncoding): better positional scaling
- ci: bump version to 3.3.7
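
The cluster of `fix(sce)` and `perf(SequenceCrossEntropy)` entries above (logits treated as probabilities, label smoothing, vectorization, not ignoring special tokens) all touch the same computation. As a rough sketch only — illustrative names and shapes, not neuralnetlib's actual implementation — a vectorized, label-smoothed sequence cross-entropy over raw logits with padding masked out might look like:

```python
import numpy as np

def sequence_cross_entropy(logits, targets, pad_token=0, label_smoothing=0.1):
    """Label-smoothed cross-entropy over logits, averaged over non-pad tokens.

    logits:  (batch, seq_len, vocab) raw scores -- softmax is applied here,
             so callers must NOT pass probabilities
    targets: (batch, seq_len) integer token ids
    """
    batch, seq_len, vocab = logits.shape

    # Numerically stable log-softmax (inputs are logits, not probs)
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))

    # Smoothed targets: (1 - eps) on the true class, eps spread over the rest
    smooth = np.full((batch, seq_len, vocab), label_smoothing / (vocab - 1))
    b, s = np.meshgrid(np.arange(batch), np.arange(seq_len), indexing="ij")
    smooth[b, s, targets] = 1.0 - label_smoothing

    # Per-token loss; padding positions are masked out of the average
    token_loss = -(smooth * log_probs).sum(axis=-1)
    mask = (targets != pad_token).astype(log_probs.dtype)
    return (token_loss * mask).sum() / np.maximum(mask.sum(), 1.0)
```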
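Similarly, the two `fix(PositionalEncoding)` entries concern how token embeddings are scaled relative to the fixed sinusoidal table. The conventional recipe from the original Transformer paper, which these commits appear to move toward, is sketched below under that assumption (and assuming an even `d_model`); it is not the library's code:

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Standard sinusoidal table (assumes even d_model)."""
    positions = np.arange(seq_len)[:, None]                       # (seq_len, 1)
    freqs = np.exp(-np.log(10000.0) * np.arange(0, d_model, 2) / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(positions * freqs)
    pe[:, 1::2] = np.cos(positions * freqs)
    return pe

def add_positional_encoding(embeddings):
    # Scale embeddings by sqrt(d_model) so they are not drowned out
    # by the fixed-magnitude sinusoids
    seq_len, d_model = embeddings.shape[-2:]
    return embeddings * np.sqrt(d_model) + positional_encoding(seq_len, d_model)
```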
neuralnetlib 3.3.6
- feat: add Transformer model and layer architecture (wip)
- fix(Transformer): gradient propagation between layers
- fix(Transformer): tokenization, sequence handling and shapes
- fix(callbacks): now compatible with every model architecture
- fix_later: find why the Transformer output won't work
- ci: bump version to 3.3.6
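
The `fix(callbacks)` entry suggests callbacks that hook into training through a generic interface (an epoch index and a logs dict) rather than touching model internals. Purely as an illustration of that pattern — the class and method names here are invented, not neuralnetlib's API:

```python
class Callback:
    """Base interface: callbacks only see an epoch index and a logs dict."""
    def on_epoch_begin(self, epoch, logs=None): ...
    def on_epoch_end(self, epoch, logs=None): ...

class EarlyStopping(Callback):
    """Stops training when a monitored metric stops improving."""
    def __init__(self, monitor="val_loss", patience=3):
        self.monitor, self.patience = monitor, patience
        self.best, self.wait = float("inf"), 0
        self.stop_training = False

    def on_epoch_end(self, epoch, logs=None):
        current = (logs or {}).get(self.monitor)
        if current is None:      # metric absent: works with any architecture
            return
        if current < self.best:
            self.best, self.wait = current, 0
        else:
            self.wait += 1
            self.stop_training = self.wait >= self.patience
```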
neuralnetlib 3.3.5
- feat(autoencoder): add VAE image generation
- refactor: imports organization
- refactor: examples folder tree organization
- docs: fix typo
- feat(preprocessing): add ImageDataGenerator
- ci: bump version to 3.3.5
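
An image data generator typically yields randomly augmented copies of each batch. A minimal sketch of that idea, assuming NumPy arrays in NHWC layout — the function name and parameters are hypothetical, not the ImageDataGenerator API added here:

```python
import numpy as np

def augment_batch(images, rng, flip_prob=0.5, max_shift=2):
    """Return randomly flipped and shifted copies of NHWC images."""
    out = images.copy()
    for i in range(out.shape[0]):
        if rng.random() < flip_prob:
            out[i] = out[i, :, ::-1]                  # horizontal flip
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        out[i] = np.roll(out[i], (dy, dx), axis=(0, 1))  # shift (wraps at border)
    return out

# Usage: augment_batch(x_train[:32], np.random.default_rng(0))
```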
neuralnetlib 3.3.4
- docs: update readme
- docs: remove useless comments
- fix(convolution): stride parameter
- feat(layer): add UpSampling2D"
- docs: update readme
- perf: changed NCHW to NHWC for CPU efficiency
- docs: update readme
- perf: switch from NCL to NLC for CPU efficiency
- ci: bump version to 3.3.4
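
UpSampling2D is, in most libraries, nearest-neighbor repetition of rows and columns; combined with this release's switch to NHWC, the forward pass reduces to two `np.repeat` calls. A sketch under those assumptions, not neuralnetlib's code:

```python
import numpy as np

def upsample2d_nearest(x, size=(2, 2)):
    """Nearest-neighbor upsampling for NHWC tensors: repeat rows, then columns."""
    x = np.repeat(x, size[0], axis=1)     # height axis in NHWC
    return np.repeat(x, size[1], axis=2)  # width axis in NHWC

# (4, 8, 8, 3) -> (4, 16, 16, 3)
print(upsample2d_nearest(np.zeros((4, 8, 8, 3))).shape)
```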
neuralnetlib 3.3.3
- fix(example): weight init
- docs(examples): fresh run
- docs: update readme
- fix(layers): encoder and decoder layers
- fix(conv2d): align output shape calculation between im2col and convolve
- ci: bump version to 3.3.3
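
The im2col/convolve alignment fix comes down to both code paths agreeing on the standard convolution output-shape formula — a general fact about convolution arithmetic, not a claim about neuralnetlib's internals:

```python
def conv_output_shape(in_size, kernel_size, stride=1, padding=0):
    """out = floor((in + 2*padding - kernel) / stride) + 1"""
    return (in_size + 2 * padding - kernel_size) // stride + 1

assert conv_output_shape(28, 3, stride=1, padding=1) == 28  # "same" conv
assert conv_output_shape(28, 3, stride=2, padding=0) == 13  # strided conv
```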
neuralnetlib 3.3.2
- fix(model): save method
- docs: update readme
- docs: update readme
- feat(autoencoder): add variational autoencoder (VAE)
- ci: bump version to 3.3.2
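
The heart of a VAE is the reparameterization trick — sampling z = mu + sigma * eps so the sampling step stays differentiable — plus a KL term pulling the latent distribution toward a standard normal. A hedged sketch of those two pieces (names are illustrative, not the library's):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """z = mu + sigma * eps with eps ~ N(0, I): keeps sampling differentiable."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    """KL(q(z|x) || N(0, I)) for a diagonal Gaussian, summed over latent dims."""
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var), axis=-1)

# z = reparameterize(mu, log_var, np.random.default_rng(0))
```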
neuralnetlib 3.3.1
- docs: update todo
- feat(preprocessing): add cosine similarity
- docs: update todo
- feat(callbacks): add LearningRateScheduler
- ci: bump version to 3.3.1
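
Cosine similarity, as added to preprocessing here, is the dot product of two vectors divided by the product of their norms. A minimal reference implementation (not necessarily neuralnetlib's signature):

```python
import numpy as np

def cosine_similarity(a, b, eps=1e-8):
    """cos(theta) = a.b / (||a|| * ||b||), guarded against zero vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return a @ b / max(np.linalg.norm(a) * np.linalg.norm(b), eps)

print(cosine_similarity([1, 0], [0, 1]))  # 0.0 (orthogonal)
print(cosine_similarity([1, 2], [2, 4]))  # 1.0 (same direction)
```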
neuralnetlib 3.3.0
- docs: update readme
- docs: update readme
- docs: update readme
- docs: remove useless comments
- docs: update readme
- docs: update readme
- refactor: code cleanup and formatting
- fix(config): layers and model config
- fix(metrics): pr auc and roc auc
- refactor(PCA): add explained variance ratio
- feat(Model): add autoencoder model
- feat(preprocessing): add t-SNE
- feat(layers): update compatibility dict
- ci: bump version to 3.3.0
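
The PCA explained-variance ratio added in this release is a standard quantity: each component's eigenvalue as a fraction of the total variance. A quick sketch via SVD, illustrative rather than the library's code:

```python
import numpy as np

def explained_variance_ratio(X):
    """Fraction of total variance captured by each principal component."""
    Xc = X - X.mean(axis=0)                        # center the data
    _, s, _ = np.linalg.svd(Xc, full_matrices=False)
    var = s**2 / (X.shape[0] - 1)                  # eigenvalues of the covariance
    return var / var.sum()
```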
neuralnetlib 3.2.2
- docs: update readme
- refactor: change model -> models for future implementations
- ci: bump version to 3.2.1
- ci: bump version to 3.2.2
neuralnetlib 3.2.1
- docs: update readme
- refactor: change model -> models for future implementations
- ci: bump version to 3.2.1