A practical demonstration of using computer vision to analyze wait times and monitor how long objects or individuals spend in predefined zones of a video frame. This example project is well suited to retail analytics and traffic management applications.
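At its core, the analysis boils down to tracking which object IDs fall inside a zone on each frame and accumulating elapsed time per ID. Below is a minimal, model-agnostic sketch of that idea in plain Python — a simplified illustration only; the scripts in this example rely on the supervision library for detection, tracking, and zone handling:

```python
class ZoneTimer:
    """Accumulates how long each tracked object stays inside a zone.

    Simplified FPS-based sketch: every frame an object is observed
    inside the zone adds 1/fps seconds to its running total.
    """

    def __init__(self, fps: float):
        self.frame_time = 1.0 / fps
        self.seconds_in_zone = {}  # tracker_id -> accumulated seconds

    def update(self, ids_in_zone):
        for tracker_id in ids_in_zone:
            self.seconds_in_zone[tracker_id] = (
                self.seconds_in_zone.get(tracker_id, 0.0) + self.frame_time
            )
        return self.seconds_in_zone


# Object 7 stays inside the zone for 3 frames of a 30 FPS video -> 0.1 s.
timer = ZoneTimer(fps=30)
for _ in range(3):
    totals = timer.update({7})
print(round(totals[7], 3))  # -> 0.1
```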
checkout-time-in-zone.mp4
- Clone the repository and navigate to the example directory:

  ```bash
  git clone --depth 1 -b develop https://github.com/roboflow/supervision.git
  cd supervision/examples/time_in_zone
  ```

- Set up a Python environment and activate it [optional]:

  ```bash
  uv venv
  source .venv/bin/activate
  ```

- Install the required dependencies:

  ```bash
  uv pip install -r requirements.txt
  ```
This script allows you to download a video from YouTube.

- `--url`: The full URL of the YouTube video you wish to download.
- `--output_path` (optional): Specifies the directory where the video will be saved.
- `--file_name` (optional): Sets the name of the saved video file.
```bash
python scripts/download_from_youtube.py \
    --url "https://www.youtube.com/watch?v=-8zyEwAa50Q" \
    --output_path "data/checkout" \
    --file_name "video.mp4"
```

```bash
python scripts/download_from_youtube.py \
    --url "https://www.youtube.com/watch?v=MNn9qKG2UFI" \
    --output_path "data/traffic" \
    --file_name "video.mp4"
```

This script allows you to stream video files from a directory. It's an awesome way to mock a live video stream for local testing. The video will be streamed in a loop under the `rtsp://localhost:8554/live0.stream` URL. This script requires Docker to be installed.
- `--video_directory`: Directory containing video files to stream.
- `--number_of_streams`: Number of video files to stream.
```bash
python scripts/stream_from_file.py \
    --video_directory "data/checkout" \
    --number_of_streams 1
```

```bash
python scripts/stream_from_file.py \
    --video_directory "data/traffic" \
    --number_of_streams 1
```

If you want to test time-in-zone analysis on your own video, you can use this script to design custom zones and save the results as a JSON file. The script will open a window where you can draw polygons on the source image or video file. The polygons will be saved as a JSON file.
- `--source_path`: Path to the source image or video file for drawing polygons.
- `--zone_configuration_path`: Path where the polygon annotations will be saved as a JSON file.
- `enter` - finish drawing the current polygon.
- `escape` - cancel drawing the current polygon.
- `q` - quit the drawing window.
- `s` - save the zone configuration to a JSON file.
```bash
python scripts/draw_zones.py \
    --source_path "data/checkout/video.mp4" \
    --zone_configuration_path "data/checkout/config.json"
```

```bash
python scripts/draw_zones.py \
    --source_path "data/traffic/video.mp4" \
    --zone_configuration_path "data/traffic/config.json"
```

design_zones.mp4
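The generated configuration is plain JSON. Assuming it holds a list of polygons, each a list of `[x, y]` vertices — the exact schema is an assumption here and is best confirmed by inspecting a file actually produced by `draw_zones.py` — it can be loaded and sanity-checked with the standard library:

```python
import json
import os
import tempfile


def load_polygons(path: str) -> list:
    """Load zone polygons from a JSON config file.

    Assumes a list of polygons, each a list of [x, y] vertices --
    verify against a file produced by draw_zones.py.
    """
    with open(path) as f:
        polygons = json.load(f)
    for polygon in polygons:
        if len(polygon) < 3:
            raise ValueError("a zone polygon needs at least 3 vertices")
    return polygons


# Hypothetical example file with a single rectangular checkout zone.
path = os.path.join(tempfile.gettempdir(), "config.json")
with open(path, "w") as f:
    json.dump([[[100, 100], [400, 100], [400, 300], [100, 300]]], f)

zones = load_polygons(path)
print(len(zones), len(zones[0]))  # -> 1 4
```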
Script to run object detection on a video file using the Roboflow Inference model.
- `--zone_configuration_path`: Path to the zone configuration JSON file.
- `--source_video_path`: Path to the source video file.
- `--model_id`: Roboflow model ID.
- `--classes`: List of class IDs to track. If empty, all classes are tracked.
- `--confidence_threshold`: Confidence level for detections (0 to 1). Default is 0.3.
- `--iou_threshold`: IOU threshold for non-max suppression. Default is 0.7.
```bash
python inference_file_example.py \
    --zone_configuration_path "data/checkout/config.json" \
    --source_video_path "data/checkout/video.mp4" \
    --model_id "rfdetr-medium" \
    --classes "[0]" \
    --confidence_threshold 0.3 \
    --iou_threshold 0.7 \
    --roboflow_api_key "ROBOFLOWS_API_KEY"
```

checkout-time-in-zone.mp4
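Under the hood, deciding whether a detection is "in zone" is a point-in-polygon test on an anchor point of its bounding box. A minimal ray-casting sketch of that test, for illustration only — the example scripts delegate this to the supervision library:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the polygon?

    polygon is a list of (x, y) vertices. Simplified sketch of the
    zone-membership check; not the example's actual implementation.
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from (x, y) cross edge (x1,y1)-(x2,y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside


zone = [(100, 100), (400, 100), (400, 300), (100, 300)]
print(point_in_polygon(250, 200, zone))  # -> True
print(point_in_polygon(50, 50, zone))    # -> False
```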
```bash
python inference_file_example.py \
    --zone_configuration_path "data/traffic/config.json" \
    --source_video_path "data/traffic/video.mp4" \
    --model_id "rfdetr-medium" \
    --classes "[2, 5, 6, 7]" \
    --confidence_threshold 0.3 \
    --iou_threshold 0.7 \
    --roboflow_api_key "ROBOFLOWS_API_KEY"
```

traffic-time-in-zone.mp4
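The `--classes` values refer to the model's class indices. For COCO-trained models using 80-class contiguous indexing, `0` is `person` and `2, 5, 6, 7` are `car`, `bus`, `train`, `truck` — but always confirm against your model's own label map, since some models (such as the RF-DETR commands below, which use different indices) follow a differently offset labeling. A tiny, hypothetical lookup sketch:

```python
# Hypothetical lookup table for the COCO class indices used above
# (80-class contiguous indexing, as used by YOLOv8-style models);
# confirm against your model's own label map before relying on it.
COCO_80 = {0: "person", 2: "car", 5: "bus", 6: "train", 7: "truck"}


def describe_classes(class_ids):
    """Map --classes flag values to human-readable names."""
    return [COCO_80.get(i, f"class_{i}") for i in class_ids]


print(describe_classes([0]))           # -> ['person']
print(describe_classes([2, 5, 6, 7]))  # -> ['car', 'bus', 'train', 'truck']
```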
Script to run object detection on an RTSP stream using the Roboflow Inference model.

- `--zone_configuration_path`: Path to the zone configuration JSON file.
- `--rtsp_url`: Complete RTSP URL for the video stream.
- `--model_id`: Roboflow model ID.
- `--classes`: List of class IDs to track. If empty, all classes are tracked.
- `--confidence_threshold`: Confidence level for detections (0 to 1). Default is 0.3.
- `--iou_threshold`: IOU threshold for non-max suppression. Default is 0.7.
```bash
python inference_stream_example.py \
    --zone_configuration_path "data/checkout/config.json" \
    --rtsp_url "rtsp://localhost:8554/live0.stream" \
    --model_id "rfdetr-medium" \
    --classes "[0]" \
    --confidence_threshold 0.3 \
    --iou_threshold 0.7
```

```bash
python inference_stream_example.py \
    --zone_configuration_path "data/traffic/config.json" \
    --rtsp_url "rtsp://localhost:8554/live0.stream" \
    --model_id "rfdetr-medium" \
    --classes "[2, 5, 6, 7]" \
    --confidence_threshold 0.3 \
    --iou_threshold 0.7
```

Script to run object detection on a video file using the RF-DETR model.
- `--zone_configuration_path`: Path to the zone configuration JSON file.
- `--source_video_path`: Path to the source video file.
- `--model_size`: Size of the RF-DETR model (`'nano'`, `'small'`, `'medium'`, `'base'` or `'large'`). Default is `'medium'`.
- `--device`: Computation device (`'cpu'`, `'mps'` or `'cuda'`). Default is `'cpu'`.
- `--classes`: List of class IDs to track. If empty, all classes are tracked.
- `--confidence_threshold`: Confidence level for detections (0 to 1). Default is 0.3.
- `--iou_threshold`: IOU threshold for non-max suppression. Default is 0.7.
- `--resolution`: Resolution for the model input. Default is 640.
```bash
python rfdetr_file_example.py \
    --zone_configuration_path "data/checkout/config.json" \
    --source_video_path "data/checkout/video.mp4" \
    --model_size "medium" \
    --device "cpu" \
    --classes "[1]" \
    --confidence_threshold 0.3 \
    --iou_threshold 0.7 \
    --resolution 640
```

```bash
python rfdetr_file_example.py \
    --zone_configuration_path "data/traffic/config.json" \
    --source_video_path "data/traffic/video.mp4" \
    --model_size "medium" \
    --device "cpu" \
    --classes "[3, 6, 7, 8]" \
    --confidence_threshold 0.3 \
    --iou_threshold 0.7 \
    --resolution 640
```

Script to run object detection on an RTSP stream using the RF-DETR model.
- `--zone_configuration_path`: Path to the zone configuration JSON file defining the polygons.
- `--rtsp_url`: Complete RTSP URL of the live video stream.
- `--model_size`: RF-DETR backbone size to load: `'nano'`, `'small'`, `'medium'`, `'base'` or `'large'`. Default is `'medium'`.
- `--device`: Compute device to run the model on (`'cpu'`, `'mps'` or `'cuda'`). Default is `'cpu'`.
- `--classes`: List of class IDs to track. If empty, all classes are tracked.
- `--confidence_threshold`: Minimum confidence score for a detection to be kept, in the range 0 to 1. Default is 0.3.
- `--iou_threshold`: IOU threshold applied during non-max suppression. Default is 0.7.
- `--resolution`: Input resolution supplied to the model; the script rounds it to the nearest valid multiple. Default is 640.
```bash
python rfdetr_stream_example.py \
    --zone_configuration_path "data/checkout/config.json" \
    --rtsp_url "rtsp://localhost:8554/live0.stream" \
    --model_size "medium" \
    --device "cpu" \
    --classes "[1]" \
    --confidence_threshold 0.3 \
    --iou_threshold 0.7 \
    --resolution 640
```

```bash
python rfdetr_stream_example.py \
    --zone_configuration_path "data/traffic/config.json" \
    --rtsp_url "rtsp://localhost:8554/live0.stream" \
    --model_size "medium" \
    --device "cpu" \
    --classes "[3, 6, 7, 8]" \
    --confidence_threshold 0.3 \
    --iou_threshold 0.7 \
    --resolution 640
```

Script to run object detection on a video file using the Ultralytics YOLOv8 model.
- `--zone_configuration_path`: Path to the zone configuration JSON file.
- `--source_video_path`: Path to the source video file.
- `--weights`: Path to the model weights file. Default is `'yolov8s.pt'`.
- `--device`: Computation device (`'cpu'`, `'mps'` or `'cuda'`). Default is `'cpu'`.
- `--classes`: List of class IDs to track. If empty, all classes are tracked.
- `--confidence_threshold`: Confidence level for detections (0 to 1). Default is 0.3.
- `--iou_threshold`: IOU threshold for non-max suppression. Default is 0.7.
```bash
python ultralytics_file_example.py \
    --zone_configuration_path "data/checkout/config.json" \
    --source_video_path "data/checkout/video.mp4" \
    --weights "yolov8x.pt" \
    --device "cpu" \
    --classes "[0]" \
    --confidence_threshold 0.3 \
    --iou_threshold 0.7
```

```bash
python ultralytics_file_example.py \
    --zone_configuration_path "data/traffic/config.json" \
    --source_video_path "data/traffic/video.mp4" \
    --weights "yolov8x.pt" \
    --device "cpu" \
    --classes "[2, 5, 6, 7]" \
    --confidence_threshold 0.3 \
    --iou_threshold 0.7
```

Script to run object detection on a video stream using the Ultralytics YOLOv8 model.
- `--zone_configuration_path`: Path to the zone configuration JSON file.
- `--rtsp_url`: Complete RTSP URL for the video stream.
- `--weights`: Path to the model weights file. Default is `'yolov8s.pt'`.
- `--device`: Computation device (`'cpu'`, `'mps'` or `'cuda'`). Default is `'cpu'`.
- `--classes`: List of class IDs to track. If empty, all classes are tracked.
- `--confidence_threshold`: Confidence level for detections (0 to 1). Default is 0.3.
- `--iou_threshold`: IOU threshold for non-max suppression. Default is 0.7.
```bash
python ultralytics_stream_example.py \
    --zone_configuration_path "data/checkout/config.json" \
    --rtsp_url "rtsp://localhost:8554/live0.stream" \
    --weights "yolov8x.pt" \
    --device "cpu" \
    --classes "[0]" \
    --confidence_threshold 0.3 \
    --iou_threshold 0.7
```

```bash
python ultralytics_stream_example.py \
    --zone_configuration_path "data/traffic/config.json" \
    --rtsp_url "rtsp://localhost:8554/live0.stream" \
    --weights "yolov8x.pt" \
    --device "cpu" \
    --classes "[2, 5, 6, 7]" \
    --confidence_threshold 0.3 \
    --iou_threshold 0.7
```

This demo integrates two main components, each with its own licensing:
- ultralytics: The object detection model used in this demo, YOLOv8, is distributed under the AGPL-3.0 license. You can find more details about this license here.
- supervision: The analytics code that powers the zone-based analysis in this demo is based on the Supervision library, which is licensed under the MIT license. This makes the Supervision part of the code fully open source and freely usable in your projects.