
DCGAN for MNIST Data Generation

Overview

This repository implements a Deep Convolutional Generative Adversarial Network (DCGAN) to generate synthetic handwritten digit images similar to those in the MNIST dataset.

Project Description

The goal is to train a DCGAN that learns the distribution of the MNIST dataset and produces realistic handwritten digits. The implementation consists of two main neural networks:

  • Generator: Converts a random noise vector into a 28×28 grayscale image.
  • Discriminator: Distinguishes between real images from the MNIST dataset and images generated by the Generator.

The adversarial training process allows both models to improve over time, ultimately resulting in a generator capable of producing plausible digits.
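A minimal sketch of one adversarial update is shown below (illustrative code, not taken from the repository). It assumes binary cross-entropy computed on raw logits and one Adam optimizer per network, matching the training parameters listed under Implementation Details; the learning rates are placeholders, and the generator and discriminator are passed in as arguments since their architectures appear later in this README.

    import tensorflow as tf

    # Illustrative setup; the repository tunes its own learning rates and decay schedules.
    cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)
    gen_optimizer = tf.keras.optimizers.Adam(1e-4)
    disc_optimizer = tf.keras.optimizers.Adam(1e-4)

    @tf.function
    def train_step(generator, discriminator, real_images, noise_dim=100):
        noise = tf.random.normal([tf.shape(real_images)[0], noise_dim])
        with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
            fake_images = generator(noise, training=True)
            real_logits = discriminator(real_images, training=True)
            fake_logits = discriminator(fake_images, training=True)

            # Discriminator: push real images toward label 1 and generated images toward 0.
            disc_loss = (cross_entropy(tf.ones_like(real_logits), real_logits) +
                         cross_entropy(tf.zeros_like(fake_logits), fake_logits))
            # Generator: try to make the discriminator output 1 for generated images.
            gen_loss = cross_entropy(tf.ones_like(fake_logits), fake_logits)

        gen_grads = gen_tape.gradient(gen_loss, generator.trainable_variables)
        disc_grads = disc_tape.gradient(disc_loss, discriminator.trainable_variables)
        gen_optimizer.apply_gradients(zip(gen_grads, generator.trainable_variables))
        disc_optimizer.apply_gradients(zip(disc_grads, discriminator.trainable_variables))
        return gen_loss, disc_loss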

Implementation Details

  • Language & Framework: Python 3.12 with TensorFlow 2.x (using Keras API)

  • Training Parameters:

    • Epochs: 200
    • Loss Function: Binary Cross-Entropy is used for both generator and discriminator losses
    • Optimizers: Adam optimizer with customized learning rates and decay schedules
  • Data: MNIST, a dataset of 70,000 28×28 grayscale images of handwritten digits, split into 60,000 training and 10,000 test samples. It is widely used as a benchmark for image classification and deep learning experiments due to its simplicity and standardized format. (A minimal loading/preprocessing sketch appears after the architecture listings below.)

  • Generator

    import tensorflow as tf
    from tensorflow.keras import layers

    def build_generator():
        model = tf.keras.Sequential([
            layers.Dense(7*7*256, use_bias=False, input_shape=(100,)),  # Project the 100-dim noise vector
            layers.Reshape((7, 7, 256)),  # Reshape to a small 7x7 feature map
            layers.BatchNormalization(),
            layers.LeakyReLU(alpha=0.2),

            layers.Conv2DTranspose(128, (4, 4), strides=(2, 2), padding='same', use_bias=False),  # 7x7 -> 14x14
            layers.BatchNormalization(),
            layers.LeakyReLU(alpha=0.2),

            layers.Conv2DTranspose(64, (4, 4), strides=(2, 2), padding='same', use_bias=False),  # 14x14 -> 28x28
            layers.BatchNormalization(),
            layers.LeakyReLU(alpha=0.2),

            layers.Conv2DTranspose(1, (4, 4), strides=1, padding='same', activation='tanh')  # 28x28x1 image in [-1, 1]
        ])
        return model

Generator Architecture
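As a quick sanity check (not part of the repository's code), the generator can be built and run on a batch of random noise vectors. With the strides above, the spatial resolution grows 7 → 14 → 28, and the tanh output lies in [-1, 1]:

    import tensorflow as tf

    generator = build_generator()
    noise = tf.random.normal([16, 100])            # 16 latent vectors of length 100
    fake_images = generator(noise, training=False)
    print(fake_images.shape)                       # (16, 28, 28, 1)
    # The tanh output is in [-1, 1], so real MNIST images should be scaled
    # to the same range before training.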

  • Discriminator

    from tensorflow import keras
    from tensorflow.keras import layers

    def build_discriminator():
        base_model = keras.Sequential([
            layers.Conv2D(6, kernel_size=5, strides=1, padding="valid", input_shape=(28, 28, 1)),  # 28x28 -> 24x24
            layers.LeakyReLU(0.2),
            layers.AveragePooling2D(pool_size=(2, 2)),  # 24x24 -> 12x12

            layers.Conv2D(16, kernel_size=5, strides=1, padding="valid"),  # 12x12 -> 8x8
            layers.LeakyReLU(0.2),
            layers.AveragePooling2D(pool_size=(2, 2)),  # 8x8 -> 4x4

            layers.Flatten(),
            layers.Dense(120, activation="relu"),
            layers.Dense(84, activation="relu"),
            layers.Dense(1),  # Single output logit scoring real vs. fake (pair with from_logits=True)
        ])
        return base_model

Discriminator Architecture
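The Data bullet above summarizes MNIST; below is a minimal loading and preprocessing sketch, assuming the images are scaled to [-1, 1] to match the generator's tanh output (the batch size is illustrative, and the repository's exact pipeline may differ):

    import tensorflow as tf

    BATCH_SIZE = 256  # illustrative value, not necessarily the repository's

    # Load the 60,000 MNIST training images and scale pixels from [0, 255] to [-1, 1].
    (train_images, _), _ = tf.keras.datasets.mnist.load_data()
    train_images = train_images.reshape(-1, 28, 28, 1).astype("float32")
    train_images = (train_images - 127.5) / 127.5

    # Shuffle and batch with tf.data for training.
    dataset = (tf.data.Dataset.from_tensor_slices(train_images)
               .shuffle(60_000)
               .batch(BATCH_SIZE))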

Results

Final Epoch Grid Training Gif

Loss Function Over 200 Epochs

The loss plot below depicts the training progress, showing the generator and discriminator losses throughout 200 epochs.

Loss Plot
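The plot itself is stored as an image in the repository. For reference, a curve like this could be produced by recording the two losses once per epoch; the sketch below is hypothetical and reuses the train_step, build_generator, build_discriminator, and dataset names from the sketches above:

    import matplotlib.pyplot as plt

    generator = build_generator()
    discriminator = build_discriminator()

    gen_history, disc_history = [], []
    for epoch in range(200):                      # 200 epochs, as reported above
        for batch in dataset:                     # batched MNIST from the earlier sketch
            g_loss, d_loss = train_step(generator, discriminator, batch)
        gen_history.append(float(g_loss))         # record the last batch's losses each epoch
        disc_history.append(float(d_loss))

    plt.plot(gen_history, label="Generator loss")
    plt.plot(disc_history, label="Discriminator loss")
    plt.xlabel("Epoch")
    plt.ylabel("Binary cross-entropy loss")
    plt.legend()
    plt.savefig("loss_plot.png")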

Reference
