This Flask app uses a trained model to recognize sign language actions in uploaded videos, with MediaPipe for keypoint extraction and TensorFlow/Keras for classification.
- Clone the repository:

  ```shell
  git clone <repo-url>
  cd <repo-directory>
  ```

- Install dependencies:

  ```shell
  pip install -r requirements.txt
  ```

- Run the Flask app:

  ```shell
  python app.py
  ```

- Access the app in your browser at http://localhost:3000.
- Visit the app in your browser.
- Upload a video file.
- Wait for the app to process the video and display the predicted actions.
- Read the predicted action labels to interpret the signs recognized in the video.
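Under the hood, the app presumably extracts MediaPipe keypoints from each frame and feeds a fixed-length sequence of them to the LSTM. Below is a minimal sketch of that windowing step; the sequence length and the keypoint count are assumptions for illustration, not values read from the actual model:

```python
import numpy as np

SEQUENCE_LENGTH = 30  # assumed window size; depends on how the model was trained
NUM_KEYPOINTS = 1662  # assumed: MediaPipe Holistic pose+face+hands, flattened

def prepare_sequence(frames, length=SEQUENCE_LENGTH):
    """Pad or truncate per-frame keypoint vectors to a fixed-length window."""
    frames = np.asarray(frames, dtype=np.float32)
    if len(frames) >= length:
        return frames[-length:]  # keep only the most recent frames
    pad = np.zeros((length - len(frames), frames.shape[1]), dtype=np.float32)
    return np.concatenate([pad, frames])  # zero-pad at the front

# Example: 10 frames of keypoints padded up to a 30-frame window
window = prepare_sequence(np.random.rand(10, NUM_KEYPOINTS))
print(window.shape)  # (30, 1662)
```

The resulting `(SEQUENCE_LENGTH, NUM_KEYPOINTS)` array is the shape an LSTM classifier typically expects as a single input sample.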
Note: The model currently recognizes the following 6 classes of actions:
- Drink
- Eat
- Goodbye
- Hello
- Help
- How are you
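The model outputs one probability per class, and the predicted action is the highest-scoring label. A small sketch of that decoding step; the class order and the confidence threshold here are assumptions and would need to match the training setup:

```python
import numpy as np

# Assumed class order -- it must match the order used during training.
ACTIONS = ["Drink", "Eat", "Goodbye", "Hello", "Help", "How are you"]

def decode_prediction(probabilities, threshold=0.5):
    """Map the model's softmax output to an action label (None if uncertain)."""
    probabilities = np.asarray(probabilities)
    best = int(np.argmax(probabilities))
    if probabilities[best] < threshold:
        return None  # below the confidence threshold, report no action
    return ACTIONS[best]

print(decode_prediction([0.02, 0.01, 0.03, 0.90, 0.02, 0.02]))  # Hello
```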
The app depends on the following Python packages (listed in requirements.txt):
- Flask
- NumPy
- TensorFlow
- Matplotlib
- MediaPipe
- OpenCV (opencv-python-headless)
The repository contains the following files:
- `app.py`: Contains the Flask application code.
- `requirements.txt`: Lists all Python dependencies required for the app.
- `model_lstm_6_classes_0.98.h5`: Pre-trained LSTM model for action recognition.
- `index.html`: HTML template for the web interface.