Sample iOS app for NMS-equipped YOLO object detection models (YOLOv9, YOLO11).
| Model | Size | Architecture | Year |
|---|---|---|---|
| YOLOv9s | 14 MB | PGI + GELAN, CoreML NMS pipeline | 2024 |
| YOLO11s | 18 MB | Improved backbone/neck, CoreML NMS pipeline | 2024 |
These models ship with a built-in CoreML NMS pipeline, so the Vision framework returns `VNRecognizedObjectObservation` results directly — no manual tensor decoding or NMS post-processing is needed.
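A minimal sketch of setting up such a model with Vision. The `yolo11s` class name is an assumption: Xcode auto-generates the class from the `.mlpackage` file name, so adjust it to match the model you dropped into the project.

```swift
import Vision
import CoreML

// Build a Vision request for an NMS-equipped YOLO model.
// `yolo11s` is the hypothetical class Xcode generates from yolo11s.mlpackage.
func makeDetectionRequest() throws -> VNCoreMLRequest {
    let config = MLModelConfiguration()
    config.computeUnits = .all  // let CoreML choose CPU / GPU / Neural Engine

    let coreMLModel = try yolo11s(configuration: config).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Because the model embeds an NMS pipeline, results arrive as
        // VNRecognizedObjectObservation rather than raw output tensors.
        guard let observations = request.results as? [VNRecognizedObjectObservation] else { return }
        for observation in observations {
            let best = observation.labels.first  // labels are sorted by confidence
            print(best?.identifier ?? "?", best?.confidence ?? 0, observation.boundingBox)
        }
    }
    request.imageCropAndScaleOption = .scaleFill
    return request
}
```

The same request can be reused across frames (camera), single images (photo), or extracted video frames by feeding it to a `VNImageRequestHandler`.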
- Camera — Real-time detection with FPS/latency stats
- Photo — Pick an image from library and run detection
- Video — Frame-by-frame detection with progress bar
- Download a model from the links above
- Unzip and drag the `.mlpackage` into the Xcode project
- Build and run on a physical device (iOS 16+)
Model output: Confidence (N × 80) + Coordinates (N × 4) via the CoreML NMS pipeline, where N is the variable number of detections surviving NMS. The Vision framework wraps these outputs and returns `VNRecognizedObjectObservation` instances with bounding boxes and class labels directly.
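One practical detail when drawing the returned boxes: Vision's `boundingBox` is normalized with a bottom-left origin, while UIKit uses points with a top-left origin. A sketch of the conversion, assuming the rendered image fills `viewSize` exactly (letterboxing would need an extra offset):

```swift
import Vision
import UIKit

// Convert a Vision bounding box (normalized, bottom-left origin)
// into UIKit view coordinates (points, top-left origin).
func viewRect(for observation: VNRecognizedObjectObservation, in viewSize: CGSize) -> CGRect {
    let box = observation.boundingBox
    return CGRect(
        x: box.origin.x * viewSize.width,
        y: (1 - box.origin.y - box.height) * viewSize.height,  // flip the y-axis
        width: box.width * viewSize.width,
        height: box.height * viewSize.height
    )
}
```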