- 📫 How to reach me: [email protected]
Monitoring models after they are deployed to production is one of the most important tasks in serving a machine learning model to the world. The way a model performs on test data does not guarantee it will perform the same on new user data: data drift and concept drift can occur and degrade the model's performance in production. It is therefore necessary to monitor continuously, build metrics for error analysis, and adjust the model accordingly.

N.B.: All models and weights here are dummies, since the real ones are sensitive and not shareable.
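As a rough illustration of the kind of drift monitoring described above, the sketch below compares a model's prediction-score distribution on reference (test-time) data against live production data using a two-sample Kolmogorov–Smirnov test. This is a minimal sketch under assumed names and thresholds; it is not code from this repository.

```python
# Minimal drift-check sketch (illustrative; not part of this repo).
# Flags drift when reference and production score distributions
# differ significantly under a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp


def detect_score_drift(reference_scores, production_scores, alpha=0.05):
    """Return (drifted, statistic, p_value) for the two score samples."""
    result = ks_2samp(reference_scores, production_scores)
    return result.pvalue < alpha, result.statistic, result.pvalue


if __name__ == "__main__":
    rng = np.random.default_rng(seed=0)
    reference = rng.normal(loc=0.7, scale=0.10, size=1000)   # test-time scores
    production = rng.normal(loc=0.6, scale=0.15, size=1000)  # drifted live scores
    drifted, stat, p = detect_score_drift(reference, production)
    print(f"drift={drifted}, KS statistic={stat:.3f}, p-value={p:.3g}")
```

In practice the same check can run on input features as well as scores, and a drift alert would trigger deeper error analysis rather than an automatic retrain.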
Clone the project
git clone https://github.com/skfaysal/Model-testing-and-monitoring-pipeline.git
Go to the project directory
cd Model-testing-and-monitoring-pipeline
Create the virtual environment from environment.yml
conda env create -f environment.yml
Activate environment
conda activate heat_map
Train the model, passing parameters via the CLI
python3 TestModel_cli.py --drmodel models/b5_newpreprocessed_full_fold4.h5 \
    --lfmodel models/model_binary_right_leaft_retina.h5 \
    --imgdata eyepacs_train --savepath output/
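For orientation, here is a minimal sketch of how such a CLI entry point could parse these flags. The flag names come from the command above; everything else (the parser internals and the meaning of each model, e.g. `--drmodel` presumably being the diabetic retinopathy grader given the EyePACS data) is an assumption, not the actual contents of `TestModel_cli.py`.

```python
# Sketch of a CLI entry point accepting the flags shown above.
# Only the flag names are taken from the README; the rest is assumed.
import argparse


def parse_args():
    parser = argparse.ArgumentParser(description="Test a model and save outputs.")
    parser.add_argument("--drmodel", required=True,
                        help="Path to the main grading model (.h5), assumed DR")
    parser.add_argument("--lfmodel", required=True,
                        help="Path to the left/right retina classifier (.h5)")
    parser.add_argument("--imgdata", required=True,
                        help="Directory containing the input images")
    parser.add_argument("--savepath", required=True,
                        help="Directory where outputs are written")
    return parser.parse_args()


if __name__ == "__main__":
    args = parse_args()
    print(f"models: {args.drmodel}, {args.lfmodel}")
    print(f"images: {args.imgdata} -> outputs: {args.savepath}")
```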
To generate the confusion matrix and save misclassified images
cd confusionMatrix
python3 main.py
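As a hedged sketch of what this step might do, the snippet below computes a confusion matrix with scikit-learn and copies misclassified images aside for manual review. The file layout, function name, and demo data are illustrative assumptions and not the actual contents of `confusionMatrix/main.py`.

```python
# Illustrative sketch only; not the repository's confusionMatrix/main.py.
import shutil
from pathlib import Path

from sklearn.metrics import confusion_matrix


def save_misclassified(image_paths, y_true, y_pred, out_dir="output/misclassified"):
    """Copy wrongly predicted images into out_dir; return the confusion matrix."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for path, true, pred in zip(image_paths, y_true, y_pred):
        if true != pred:
            # Encode true/predicted labels in the filename for easy review.
            shutil.copy(path, out / f"true{true}_pred{pred}_{Path(path).name}")
    return confusion_matrix(y_true, y_pred)


if __name__ == "__main__":
    # Create tiny placeholder files so the demo runs end to end.
    demo = Path("demo_imgs")
    demo.mkdir(exist_ok=True)
    paths = []
    for i in range(3):
        p = demo / f"img{i}.png"
        p.write_bytes(b"")  # empty placeholder image
        paths.append(str(p))
    cm = save_misclassified(paths, y_true=[0, 1, 1], y_pred=[0, 1, 0])
    print(cm)
```

Saving the misclassified images alongside the matrix makes error analysis concrete: you can inspect exactly which inputs the model gets wrong and look for shared patterns.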