Goal: Research the Impact of Imbalanced and Balanced Maritime Code Flag Datasets on the Performance of One-Stage Image Detectors (YOLO family, SSD).
Main metrics (a minimal computation sketch follows this list):
- Intersection over Union (IoU),
- precision and recall,
- average precision (AP),
- mean average precision (mAP),
- F1 score (the trade-off between precision and recall).
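The sketch below only illustrates how the box-level metrics relate to each other (IoU of two boxes, and precision/recall/F1 from match counts); it is not the repository's evaluation code, since the detector frameworks compute these metrics internally during validation.

```python
# Illustrative only: IoU of two boxes and precision/recall/F1 from match counts.

def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def precision_recall_f1(tp, fp, fn):
    """Precision, recall and F1 from true/false positive and false negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# A predicted flag box usually counts as a true positive when IoU >= 0.5.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))       # ~0.143
print(precision_recall_f1(tp=80, fp=10, fn=20))  # (~0.889, 0.8, ~0.842)
```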
YOLO detectors (a hedged training sketch follows the detector lists):
- YOLOv11 (YOLOv11m ~ medium version) - latest stable version (REQUIRED FOR RESEARCH),
- YOLOv8 (YOLOv8m ~ medium version) (REQUIRED FOR RESEARCH).
SSD detectors:
- SSD300 - baseline 300x300 input resolution (REQUIRED FOR RESEARCH),
- SSD512 - larger 512x512 input resolution (ADDITIONAL FOR RESEARCH).
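Both required YOLO variants can be trained and evaluated through the Ultralytics Python package. The snippet below is only a hedged sketch of that workflow: the checkpoint names, epoch count and image size are assumptions, and the repository's actual settings live in detection.py / yolo_detector.py.

```python
# Hypothetical sketch of training and validating the medium YOLO variants with
# Ultralytics; hyperparameters here are assumptions, not the study's exact setup.
from ultralytics import YOLO

for weights in ("yolov8m.pt", "yolo11m.pt"):     # medium variants used in the research
    model = YOLO(weights)
    model.train(data="yolo_data.yaml", epochs=100, imgsz=640)
    metrics = model.val(split="test")            # precision, recall, mAP50, mAP50-95
    print(weights, metrics.box.map50, metrics.box.map)
```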
├── documentation                       <- UML diagrams and configuration
├── balancers                           <- Package with balancers and utilities
│   ├── __init__.py                     <- Package initializer
│   ├── smote.py                        <- SMOTE balancer (interpolation; see the SMOTE sketch below the tree)
│   ├── adasyn.py                       <- ADASYN balancer (interpolation)
│   ├── augmentation.py                 <- Augmentation balancer (image augmentations such as rotations, etc.)
│   ├── autoencoder.py                  <- Autoencoder balancer (training required!)
│   ├── dgan.py                         <- DGAN balancer (training required!)
│   ├── balancer.py                     <- General balancer aggregating all of the above
│   ├── annotations.py                  <- Annotations module
│   └── configuration_reader.py         <- Balancer configuration reader
├── maritime-flags-dataset              <- Source and balanced flags (A-Z)
│   ├── ADASYN_balanced_flags           <- Flags balanced with the ADASYN balancer
│   ├── SMOTE_balanced_flags            <- Flags balanced with the SMOTE balancer
│   ├── AUGMENTATION_balanced_flags     <- Flags balanced with the Augmentation balancer
│   ├── DGAN_balanced_flags             <- Flags balanced with the DGAN balancer
│   ├── AE_balanced_flags               <- Flags balanced with the Autoencoder balancer
│   ├── combined_flags                  <- Combined/test images
│   ├── two_flags                       <- Two balanced flag classes (A and B), 1000 images each
│   └── imbalanced_flags                <- Source folder with imbalanced flags
├── datasets                            <- YOLO-formatted datasets (the detector reads this directory by default!)
│   ├── yolo-maritime-flags-dataset (A-Z)
│   │   ├── images
│   │   │   ├── train                   <- Training images (.jpg)
│   │   │   ├── val                     <- Validation images (.jpg)
│   │   │   └── test                    <- Testing images (.jpg)
│   │   └── labels
│   │       ├── train                   <- Training labels (.txt)
│   │       ├── val                     <- Validation labels (.txt)
│   │       └── test                    <- Testing labels (.txt)
│   └── cross-validation-yolo-formatted-maritime-flags (A-Z)
│       ├── images
│       │   ├── fold_1                  <- First fold with images (.jpg)
│       │   │   ├── train               <- Training images (.jpg)
│       │   │   └── val                 <- Validation images (.jpg)
│       │   ├── ...                     <- Intermediate folds with images (.jpg)
│       │   │   ├── train               <- Training images (.jpg)
│       │   │   └── val                 <- Validation images (.jpg)
│       │   └── fold_n                  <- N-th fold with images (.jpg)
│       │       ├── train               <- Training images (.jpg)
│       │       └── val                 <- Validation images (.jpg)
│       └── labels
│           ├── fold_1                  <- First fold with labels (.txt)
│           │   ├── train               <- Training labels (.txt)
│           │   └── val                 <- Validation labels (.txt)
│           ├── ...                     <- Intermediate folds with labels (.txt)
│           │   ├── train               <- Training labels (.txt)
│           │   └── val                 <- Validation labels (.txt)
│           └── fold_n                  <- N-th fold with labels (.txt)
│               ├── train               <- Training labels (.txt)
│               └── val                 <- Validation labels (.txt)
├── .gitignore                          <- Keeps the venv_environment directory out of the repository (VENV)
├── test_packages.py                    <- Tests that all required packages (e.g. Torch) load (VENV)
├── python_3.11_venv_requirements.txt   <- Requirements list for the Python 3.11 venv (VENV)
├── balance.py                          <- Balances the dataset using the balancers package (BALANCING)
├── balancer_configuration.json         <- Balancer configuration
├── detection.py                        <- Trains and tests the YOLO detector on balanced/imbalanced data (EVALUATING)
├── yolo_detector.py                    <- YOLO detector (DETECTING)
├── yolo_data.yaml                      <- YOLO data configuration (training and testing)
├── fold_1_dataset.yaml                 <- YOLO data configuration for the first fold (k-fold cross-validation; see the fold sketch below)
├── ...                                 <- YOLO data configurations for the remaining folds (k-fold cross-validation)
└── fold_n_dataset.yaml                 <- YOLO data configuration for the n-th fold (k-fold cross-validation)
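The interpolation-based balancers (smote.py, adasyn.py) presumably oversample minority flag classes by interpolating between existing samples. The sketch below illustrates that idea with imbalanced-learn's SMOTE applied to flattened images; the function and variable names are hypothetical, and this is not the repository's actual implementation.

```python
# Illustrative sketch of interpolation-based balancing in the spirit of
# balancers/smote.py; names and layout here are assumptions, not the real API.
import numpy as np
from imblearn.over_sampling import SMOTE

def balance_with_smote(images, labels):
    """Oversample minority flag classes by interpolating flattened images.

    images: array of shape (n, H, W, C) with pixel values in [0, 255]
    labels: array of shape (n,) with class ids (e.g. 0-25 for flags A-Z)
    """
    n, h, w, c = images.shape
    flat = images.reshape(n, -1).astype(np.float32)
    # SMOTE needs a handful of samples per minority class (default: 5 neighbours).
    flat_res, labels_res = SMOTE().fit_resample(flat, labels)
    images_res = flat_res.reshape(-1, h, w, c).clip(0, 255).astype(np.uint8)
    return images_res, labels_res
```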
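The per-fold splits and the fold_*_dataset.yaml files could be produced along the lines sketched below, using scikit-learn's KFold and the Ultralytics data-YAML layout. The number of folds, paths and key names shown are assumptions rather than the repository's exact pipeline; copying the selected images and labels into each fold directory is omitted.

```python
# Hypothetical generation of fold_*_dataset.yaml files for k-fold cross-validation.
from pathlib import Path
from sklearn.model_selection import KFold
import yaml

images = sorted(Path("datasets/yolo-maritime-flags-dataset/images/train").glob("*.jpg"))
class_names = [chr(ord("A") + i) for i in range(26)]   # maritime flags A-Z

splitter = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(splitter.split(images), start=1):
    root = Path(f"datasets/cross-validation-yolo-formatted-maritime-flags/images/fold_{fold}")
    config = {
        "path": str(root),
        "train": "train",
        "val": "val",
        "names": dict(enumerate(class_names)),
    }
    Path(f"fold_{fold}_dataset.yaml").write_text(yaml.safe_dump(config))
    # train_idx / val_idx would drive the copy of images and labels into root/train and root/val.
```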