
📘 Advanced Deep Learning

This repository documents my 100 Days of Deep Learning journey, inspired by Mirza Yasir Abdullah Baig.
It contains notes, code implementations, and projects covering ANNs, CNNs, RNNs, LSTMs, Transformers, and Large Language Models (LLMs).


πŸ—‚οΈ Table of Contents

01: Introduction & Perceptron

  • What is Deep Learning?
  • Neural Networks vs Machine Learning
  • Perceptron: Intuition & Training (see the sketch after this list)
  • Loss Functions: Hinge Loss, Binary Cross-Entropy, and the Sigmoid Function
  • Multi-Layer Perceptrons (MLPs) - Notation
  • Multi-Layer Perceptrons (MLPs) - Intuition
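
To make the perceptron topics above concrete, here is a minimal training sketch (my own illustrative example, not code from this repo, assuming only NumPy): the classic perceptron update rule learning an AND gate, with the learning rate and epoch count chosen arbitrarily.

```python
import numpy as np

# Toy AND-gate dataset: two binary inputs, one binary label.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)   # weights
b = 0.0           # bias
lr = 0.1          # learning rate (arbitrary illustrative value)

for epoch in range(20):
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0   # step activation
        # Perceptron rule: nudge the boundary only on misclassified points.
        w += lr * (target - pred) * xi
        b += lr * (target - pred)

print(w, b)  # a separating line for the AND problem
```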

02: Forward & Backpropagation

  • Forward Propagation in Neural Networks (see the sketch after this list)
  • Loss Functions
  • Backpropagation (The What, The How, The Why)
  • Gradient Descent (Batch, Stochastic, Mini-batch)
  • Vanishing & Exploding Gradients
  • Performance Improvements: Early Stopping, Dropout, Regularization
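
As a companion to the topics above, a minimal sketch (my own, assuming only NumPy) of one forward pass, the binary cross-entropy gradient, and a batch gradient-descent update for a single sigmoid neuron:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                 # synthetic inputs
y = (X[:, 0] + X[:, 1] > 0).astype(float)     # synthetic labels

w, b, lr = np.zeros(2), 0.0, 0.5

for step in range(200):
    # Forward propagation: linear score, then sigmoid.
    p = 1 / (1 + np.exp(-(X @ w + b)))
    # Backpropagation: for sigmoid + binary cross-entropy,
    # the gradient w.r.t. the pre-activation is simply (p - y).
    grad_z = (p - y) / len(y)
    # Batch gradient descent step (chain rule down to w and b).
    w -= lr * X.T @ grad_z
    b -= lr * grad_z.sum()
```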

03: Neural Network Essentials

  • Activation Functions: Sigmoid, Tanh, ReLU, Variants (Leaky, ELU, SELU)
  • Weight Initialization Techniques (Xavier/Glorot, He)
  • Batch Normalization
  • Optimizers: SGD, Momentum, NAG, RMSProp, Adam (see the Keras sketch after this list)
  • Hyperparameter Tuning (Keras Tuner)
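
Several of the essentials above (He initialization for ReLU layers, Batch Normalization, the Adam optimizer) come together in a few lines of Keras. A minimal sketch, assuming TensorFlow/Keras and an arbitrary 20-feature binary-classification setup:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20,)),        # 20 input features (arbitrary)
    tf.keras.layers.Dense(64, activation="relu",
                          kernel_initializer="he_normal"),  # He init suits ReLU
    tf.keras.layers.BatchNormalization(),      # stabilizes layer activations
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])
```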

04: Convolutional Neural Networks (CNNs)

  • CNN Intuition & Visual Cortex
  • Convolution Operation, Padding & Strides (see the CNN sketch after this list)
  • Pooling Layers (MaxPooling, AvgPooling)
  • CNN Architectures (LeNet-5, AlexNet, VGG)
  • Backpropagation in CNNs
  • Projects: Cat vs Dog Classifier, MNIST Digit Classifier
  • Data Augmentation & Transfer Learning
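
The convolution and pooling topics above map directly onto a LeNet-style stack. A minimal sketch (mine, assuming TensorFlow/Keras and MNIST-shaped 28×28×1 inputs), in the spirit of the MNIST digit-classifier project:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),                 # MNIST-sized images
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(2),                          # halves spatial size
    tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),          # 10 digit classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```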

05: Recurrent Neural Networks (RNNs & LSTMs)

  • RNNs: Architecture & Forward Propagation
  • Backpropagation Through Time (BPTT)
  • Problems with RNNs (Long-term Dependencies)
  • LSTMs (Long Short-Term Memory) – The What, The How, The Why
  • GRUs (Gated Recurrent Units)
  • Stacked & Bidirectional RNNs/LSTMs/GRUs
  • Projects: Next Word Predictor, Sentiment Analysis (see the model sketch after this list)
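
A minimal sentiment-analysis model sketch (my own, assuming TensorFlow/Keras; the vocabulary and sequence sizes are arbitrary) combining an embedding layer with a bidirectional LSTM, as covered above:

```python
import tensorflow as tf

vocab_size, seq_len = 10_000, 50   # arbitrary illustrative sizes
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(seq_len,)),       # integer token ids
    tf.keras.layers.Embedding(vocab_size, 128),    # learned word vectors
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),  # reads both ways
    tf.keras.layers.Dense(1, activation="sigmoid"),  # positive/negative score
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```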

06: Transformers & Attention

  • Encoder-Decoder Architecture
  • Attention Mechanisms (Bahdanau, Luong)
  • Self-Attention & Multi-Head Attention (see the sketch after this list)
  • Positional Encoding
  • Layer Normalization vs Batch Normalization
  • Transformer Architecture (Encoder & Decoder)
  • Masked Self-Attention, Cross Attention
  • Projects: Machine Translation with Seq2Seq + Attention
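
The heart of the material above is scaled dot-product attention, softmax(QKᵀ/√d_k)·V. A minimal single-head NumPy sketch (mine; the projection matrices are random stand-ins for learned weights):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V -- one attention head, no mask."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # token-to-token similarity
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted mix of values

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                  # 4 tokens, d_model = 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))  # stand-in projections
out = scaled_dot_product_attention(x @ Wq, x @ Wk, x @ Wv)
print(out.shape)                             # (4, 8): one vector per token
```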

07: Advanced Topics & LLMs

  • History of Large Language Models (LSTMs → Transformers → GPT)
  • Transformer Inference & Decoding Strategies
  • Fine-tuning Pretrained Models (BERT, GPT, Vision Transformers) – see the sketch after this list
  • Transfer Learning in NLP & CV
  • Final Project: End-to-End Deep Learning Project
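
For the fine-tuning topics above, loading a pretrained checkpoint with a fresh task head takes only a few lines with the Hugging Face transformers library. A minimal sketch, assuming transformers with a PyTorch backend and a binary classification task:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Pretrained BERT body plus a randomly initialized 2-class head.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

inputs = tokenizer("Deep learning is fun!", return_tensors="pt")
logits = model(**inputs).logits   # train this head on task labels to fine-tune
print(logits.shape)               # torch.Size([1, 2])
```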

About

Hands-on implementations of advanced deep learning architectures and techniques. Focused on real-world applications, optimization, and research-level concepts.
