🔒 LetsLockIn

LetsLockIn Demo

AI-Powered Focus & Posture Assistant

🌟 Overview

Welcome to LetsLockIn. This website is a great tool for anyone who wants to lock in and focus on their work.

If you are seeking a tool that helps you focus, you are in the right place.

Now, let's lock in!

🚀 Visit Our Website

For more information, explore our official website:
letslockin.dev

  • Scroll down for features

Our mission is to create a healthier and more productive work environment by leveraging advanced AI models to monitor focus, posture, and fatigue in real time, all while respecting user privacy.

โ—๏ธโ—๏ธโ—๏ธโ›”๏ธ SUPPORTED BROWSERS:

ALL BROWSERS ARE SUPPORTED.

  • Not compatible with old versions of Chromium-based browsers (Chrome, Edge, etc.).
  • Only compatible with the latest versions.

✅ Demo

PLEASE CLICK THE UNMUTE BUTTON AT THE BOTTOM RIGHT CORNER OF THE DEMO VIDEO.

  • Reload the page if the video does not show up.
demo2.mp4

🎯 Features

⚡ Productivity Improvements

  • Enforces good posture, making it easier to lock in
  • Significantly boosts productivity
  • Tracks real-time focus and productivity trends
  • Prevents burnout through early detection of overworking
  • Increases concentration with actionable feedback

🧘‍♂️ Health Benefits

  • Encourages better posture to prevent back and neck strain
  • Prevents overworking
  • Promotes healthier work-rest balance

๐Ÿ–ฅ๏ธ User Experience

  • Clean, simple, and modern design

🔒 Privacy-Focused

  • All processing happens locally on your device
  • No sensitive data leaves your browser, ensuring privacy

📈 Technical Specifications

Core AI Model

  • Framework: TensorFlow 2.15.0, tensorflow-decision-forests 1.8.0

    ⚠️ Important: TensorFlow.js conversion is only supported for models trained with TensorFlow 2.15.0 or lower (I trained the model using TensorFlow 2.15.0). Newer versions such as 2.17.0 cannot be converted for deployment; see the export sketch after this list.

  • Architecture: ResNet-based deep neural network
  • Dataset Details:
    • Training: AffectNet, RAF-DB, CK+
    • Verification: Filtered subsets of the same datasets
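
For reference, the export path that note describes looks roughly like this with the tensorflowjs Python package (a minimal sketch, assuming pip install tensorflow==2.15.0 tensorflowjs; the file names are placeholders, not the repository's actual paths):

# Sketch: convert a TF 2.15 Keras model for the browser (file names are placeholders)
import tensorflow as tf
import tensorflowjs as tfjs

model = tf.keras.models.load_model("final_model.h5")    # trained .h5 model
tfjs.converters.save_keras_model(model, "tf2js_model")  # writes model.json + .bin weight shards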

Training Details

  • Challenges Addressed:
    • Overfitting, mitigated using dropout and data augmentation (see the sketch below).
    • Vanishing and exploding gradients, resolved with residual connections.
    • Real-time web deployment, achieved by reducing model size and complexity.
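
A minimal Keras sketch of the anti-overfitting measures (the specific augmentations and rates are illustrative assumptions; only the 0.5 dropout rate comes from the model diagram further below):

# Sketch: dropout + data augmentation against overfitting (illustrative values)
import tensorflow as tf
from tensorflow.keras import layers

augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),  # assumed augmentations; the real set may differ
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

inputs = tf.keras.Input(shape=(160, 160, 1))  # grayscale faces, as in the architecture
x = augment(inputs)                           # randomized views of each training image
x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
x = layers.Dropout(0.5)(x)                    # dropout between blocks, as in the diagram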

Directory Structure ⚠️🔑✅

  • v2_afrafck_model:
    • Short for version 2, AffectNet + RAF-DB + CK+ model

    • Includes:

      • training and validation notebooks (.ipynb)
      • accuracy reports and normalized accuracy reports
      • confusion matrices and normalized confusion matrices
      • parameters report, training log, and the final .h5 model
      • final model architecture (.json; a loading sketch follows this list)
    • IMPORTANT FILES (SHOULD READ):

      letslockin_train.ipynb

      confusion_matrix.ipynb

      Norm_RAF_DB_accuracy_report.txt
      (normalized RAF-DB validation accuracy report)

      Norm_RAF_DB_confusion_matrix.png

      model_performance.png
  • tf2js_model:
    • Short for TensorFlow-to-JS model
    • Includes:
      • .bin weight files
      • .json model file
  • HTML, CSS, and script files for the website
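
To inspect those artifacts, the final .h5 model and the exported architecture can be loaded with Keras (a minimal sketch; the file names below are placeholders, so substitute the actual names inside v2_afrafck_model):

# Sketch: inspect the shipped artifacts (paths are placeholders)
import tensorflow as tf

model = tf.keras.models.load_model("v2_afrafck_model/final_model.h5")
model.summary()  # prints the 41-layer architecture

with open("v2_afrafck_model/model_architecture.json") as f:
    skeleton = tf.keras.models.model_from_json(f.read())  # architecture only, no weights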

๐Ÿ› ๏ธ Technical Deep Dive

Neural Network Architecture

Our Complex Residual Network consists of 41 carefully designed layers:

flowchart TD
    input1[Input Layer 160x160x1] --> conv2d[Conv2D 32]
    conv2d --> bn1[BatchNorm]
    bn1 --> act1[ReLU]
    act1 --> maxpool[MaxPool2D]
    maxpool --> drop1[Dropout 0.5]

    %% First Block
    drop1 --> conv2d_1[Conv2D_1 64]
    drop1 --> conv2d_3[Conv2D_3 64]
    conv2d_1 --> bn2[BatchNorm_1]
    bn2 --> act2[ReLU_1]
    act2 --> conv2d_2[Conv2D_2 64]
    conv2d_2 --> bn3[BatchNorm_2]
    conv2d_3 --> bn4[BatchNorm_3]
    bn3 & bn4 --> add1[Add_1]
    add1 --> act3[ReLU_2]
    act3 --> drop2[Dropout 0.5]

    %% Second Block
    drop2 --> conv2d_4[Conv2D_4 128]
    drop2 --> conv2d_6[Conv2D_6 128]
    conv2d_4 --> bn5[BatchNorm_4]
    bn5 --> act4[ReLU_3]
    act4 --> conv2d_5[Conv2D_5 128]
    conv2d_5 --> bn6[BatchNorm_5]
    conv2d_6 --> bn7[BatchNorm_6]
    bn6 & bn7 --> add2[Add_2]
    add2 --> act5[ReLU_4]
    act5 --> drop3[Dropout 0.5]

    %% Third Block
    drop3 --> conv2d_7[Conv2D_7 128]
    drop3 --> conv2d_9[Conv2D_9 128]
    conv2d_7 --> bn8[BatchNorm_7]
    bn8 --> act6[ReLU_5]
    act6 --> conv2d_8[Conv2D_8 128]
    conv2d_8 --> bn9[BatchNorm_8]
    conv2d_9 --> bn10[BatchNorm_9]
    bn9 & bn10 --> add3[Add_3]
    add3 --> act7[ReLU_6]
    act7 --> drop4[Dropout 0.5]

    %% Final Layers
    drop4 --> globalpool[Global AvgPool2D]
    globalpool --> dense1[Dense 3072]
    dense1 --> act8[ReLU_7]
    act8 --> drop5[Dropout 0.5]
    drop5 --> dense2[Dense 7 classes]
    dense2 --> output[Output Softmax 7 classes]

    %% Modern Color Styling
    classDef input fill:#4A90E2,stroke:#333,stroke-width:2px,color:white
    classDef conv fill:#2C3E50,stroke:#333,stroke-width:2px,color:white
    classDef bn fill:#E74C3C,stroke:#333,stroke-width:2px,color:white
    classDef act fill:#3498DB,stroke:#333,stroke-width:2px,color:white
    classDef pool fill:#F39C12,stroke:#333,stroke-width:2px,color:white
    classDef drop fill:#1ABC9C,stroke:#333,stroke-width:2px,color:white
    classDef add fill:#9B59B6,stroke:#333,stroke-width:2px,color:white
    classDef dense fill:#E67E22,stroke:#333,stroke-width:2px,color:white
    classDef output fill:#16A085,stroke:#333,stroke-width:2px,color:white

    class input1 input
    class conv2d,conv2d_1,conv2d_2,conv2d_3,conv2d_4,conv2d_5,conv2d_6,conv2d_7,conv2d_8,conv2d_9 conv
    class bn1,bn2,bn3,bn4,bn5,bn6,bn7,bn8,bn9,bn10 bn
    class act1,act2,act3,act4,act5,act6,act7,act8 act
    class maxpool,globalpool pool
    class drop1,drop2,drop3,drop4,drop5 drop
    class add1,add2,add3 add
    class dense1,dense2 dense
    class output output
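
For readers who prefer code to diagrams, the first residual block above reconstructs in Keras roughly as follows (a sketch derived from the flowchart, not the original training code; kernel sizes are assumptions, since the diagram lists only filter counts):

# Sketch: one residual block from the flowchart (kernel sizes assumed)
from tensorflow.keras import layers

def residual_block(x, filters):
    # Main path: Conv2D -> BatchNorm -> ReLU -> Conv2D -> BatchNorm
    main = layers.Conv2D(filters, 3, padding="same")(x)
    main = layers.BatchNormalization()(main)
    main = layers.ReLU()(main)
    main = layers.Conv2D(filters, 3, padding="same")(main)
    main = layers.BatchNormalization()(main)

    # Shortcut path: the parallel Conv2D -> BatchNorm branch in the diagram
    shortcut = layers.Conv2D(filters, 3, padding="same")(x)
    shortcut = layers.BatchNormalization()(shortcut)

    # Merge: Add -> ReLU -> Dropout, as drawn
    out = layers.ReLU()(layers.Add()([main, shortcut]))
    return layers.Dropout(0.5)(out)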

Performance Metrics

  • Validation Accuracy: 72.852%
  • Real-time Processing: the model can run at up to 60 FPS, but I cap inference at 10 FPS when the tab is active and 8 FPS when other tabs are in use or the browser is minimized, to save battery life (see the throttling sketch below).
  • Browser Compatibility: 98% (Safari throttles background JavaScript aggressively, so the rate may drop below the 8 FPS cap there; other browsers hold it without significant battery impact).
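
The FPS caps boil down to simple frame-rate limiting. The site implements this in browser JavaScript; the idea, sketched in Python with a hypothetical helper name:

# Sketch: cap an inference loop at a target FPS (hypothetical helper, Python for illustration)
import time

def run_capped(infer, target_fps):
    interval = 1.0 / target_fps  # e.g. 10 FPS in the active tab, 8 FPS in the background
    while True:
        start = time.monotonic()
        infer()                  # one model pass on the latest camera frame
        spare = interval - (time.monotonic() - start)
        if spare > 0:
            time.sleep(spare)    # idle out the rest of the frame budget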

💻 Quick Start

# Clone the repository
git clone https://github.com/letslockin/letslockin.github.io.git

# Navigate to the project directory
cd letslockin.github.io

# Start a local server (Python 3)
python -m http.server    # Windows
python3 -m http.server   # macOS

# Then open this address in your browser
# http://localhost:8000

โš ๏ธ Important Note: This model must be trained with TensorFlow 2.15.0 or older versions of Tensorflow (2+) as TensorFlow.js conversion is not compatible with newer versions of TensorFlow like version 2.17.0

📊 Results & Achievements

  • High-Accuracy Emotion Recognition
    • 72.852% validation accuracy achieved with our 41-layer ResNet architecture
    • Validated across multiple datasets
  • Extensive Training Data
    • 297,074 filtered training images
    • 6,568 validation images
  • Real-time Performance
    • Inference throttled to 10 FPS in the active tab and 8 FPS in the background (the model could run far faster, but the caps maximize battery life)
    • 98% browser compatibility

👤 Author


Bach Pham
AI Engineer
@2006coder

โš ๏ธ Disclaimer

This application is designed to assist with focus and posture improvement. It is not a medical device and should not be used as a substitute for professional medical advice.

📄 License

Copyright © 2024 LetsLockIn. All rights reserved.


🌟 Star us on GitHub

Support our mission to revolutionize productivity and well-being!

Made with โค๏ธ by Bach
