krishnarg04/TorchLite
Tiny Torch

A lightweight C++ implementation of a deep learning framework inspired by PyTorch. This library provides tensor operations, automatic differentiation, neural network components, and optimization algorithms for building and training neural networks.

Features

  • Tensor Operations: Multi-dimensional tensor support with broadcasting, element-wise operations, matrix multiplication, and shape manipulations
  • Automatic Differentiation: Compute gradients automatically with a dynamic computation graph
  • Neural Network Modules: Building blocks for creating neural networks including Linear layers
  • Optimizers: SGD and Adam optimizers for gradient-based optimization
  • Activation Functions: ReLU, Sigmoid, Tanh, and Softmax
  • Loss Functions: MSE, Cross-Entropy, and Binary Cross-Entropy
  • Data Loading: CSV parsing utilities for loading structured data
  • Convolution Operations: Support for 1D and 2D convolutions

Example Usage

#include <iostream>
#include "tensor.h"
#include "data_prepare.cpp"
#include "TensorOps.cpp"

// Define a simple neural network
class IrisModel : public tensor::nn::Module {
public:
    Module* linear1;
    Module* linear2;
    Module* linear3;

    IrisModel() {
        linear1 = this->addModule("linear1", new tensor::nn::Linear(4, 10));
        linear2 = this->addModule("linear2", new tensor::nn::Linear(10, 5));
        linear3 = this->addModule("linear3", new tensor::nn::Linear(5, 3));
    }

    tensor::Tensor* forward(tensor::Tensor* x) {
        return tensor::softmax(linear3->forward(
            tensor::relu(linear2->forward(
                tensor::relu(linear1->forward(x))))), 1);
    }
};

int main() {
    // Load data
    csv_parser::DataFrame* df = new csv_parser::DataFrame("iris.csv");
    tensor::dataset::DatasetCsvFile* ds = new tensor::dataset::DatasetCsvFile(df, 16);
    ds->getOneHot();
    
    // Create model
    tensor::nn::Module* model = new IrisModel();
    model->setGrad(true);
    
    // Create optimizer
    tensor::nn::optim::Adam* optimizer = new tensor::nn::optim::Adam(0.01, 0.9, 0.999, 1e-8, model->getParams());
    
    // Training loop
    for(int epoch = 0; epoch < 100; epoch++) {
        std::vector<tensor::Tensor*> batch = ds->getbatch();
        while(!batch.empty()) {
            optimizer->zeroGrad();
            tensor::Tensor* output = model->forward(batch[0]);              // batch[0]: features
            tensor::Tensor* loss = tensor::crossentropy(output, batch[1]);  // batch[1]: one-hot labels
            
            loss->backward();
            optimizer->step();
            
            batch = ds->getbatch();
        }
    }
    }
    
    // Save model
    tensor::save(model->getStateDict(), "model.txt");
    
    return 0;
}

Building and Running

To build the project (the example above #includes the .cpp sources directly, so only main.cpp needs to be passed to the compiler):

g++ main.cpp -o main -std=c++17

Key Components

  • tensor.h/tensor.cpp: Core tensor implementation with automatic differentiation
  • nn.cpp: Neural network modules implementation
  • operator.cpp: Tensor operations like addition, multiplication, etc.
  • tensormathematical.cpp: Mathematical operations on tensors
  • TensorOps.cpp: Common tensor operations like concat, squeeze, etc.
  • conv.cpp: Convolution operations
  • csv_parser.h/csv_parser.cpp: CSV file parsing utilities
  • data_prepare.cpp: Dataset utilities for loading and batch processing

License

This project is open source.

Contributions

Contributions to Tiny Torch are welcome! If you'd like to help improve this project, here's how you can contribute:

  • Bug Reports: Open an issue describing the bug and steps to reproduce it
  • Feature Requests: Suggest new features or improvements via issues
  • Code Contributions: Submit pull requests for bug fixes or new features
  • Documentation: Help improve or expand the documentation
  • Examples: Create and share example implementations using Tiny Torch

When contributing code, please follow the existing code style and include appropriate tests for your changes.