Error Description
I am trying to run a custom segmentation model on Pipeless. The model was converted to an ONNX file; it segments objects, and I then draw bounding boxes from the binary masks it produces as output. I have written the pre-processing and post-processing Python scripts, and they appear to be fine: I have run them independently on a single image with the ONNX model and the output is correct. However, when I run inference through Pipeless I get the error below. I am using the RTSP protocol for the live stream, and I added the stream with the following command:
pipeless add stream --input-uri "rtsp://<>" --output-uri "screen" --frame-path "Inference"
The following error occurs when I start Pipeless:
pipeless start --stages-dir .
[2024-07-18T09:44:01Z INFO pipeless_ai::stages::parser] ⚙️ Loading stages from .
[2024-07-18T09:44:01Z INFO pipeless_ai::stages::parser] ⏳ Loading stage 'Inference' from ./Inference
[2024-07-18T09:44:01Z INFO pipeless_ai::stages::parser] Loading hook from ./Inference/pre-process.py
[2024-07-18T09:44:01Z INFO pipeless_ai::stages::parser] Creating stateless hook for Inference-pre_process
[2024-07-18T09:44:01Z INFO pipeless_ai::stages::parser] Loading hook from ./Inference/post-process.py
[2024-07-18T09:44:01Z INFO pipeless_ai::stages::parser] Creating stateless hook for Inference-post_process
[2024-07-18T09:44:01Z INFO pipeless_ai::stages::parser] Loading hook from ./Inference/process.json
[2024-07-18T09:44:01Z INFO tracing::span] apply_execution_providers;
[2024-07-18T09:44:01Z INFO ort::execution_providers] Successfully registered `CPUExecutionProvider`
[2024-07-18T09:44:02Z INFO ort::session] drop; self=SessionBuilder { env: "Inference", allocator: Device, memory_type: Default }
[2024-07-18T09:44:02Z WARN pipeless_ai::stages::inference::onnx] Could not run an inference test because the model input shape was not properly recognized. Obtained: width: "640", height: "360", channels: "None"
[2024-07-18T09:44:02Z INFO pipeless_ai::stages::parser] Creating stateless hook for Inference-process
[2024-07-18T09:44:02Z INFO pipeless_ai::config::adapters::rest] REST adapter running
[2024-07-18T09:44:02Z INFO warp::server] Server::run; addr=0.0.0.0:3030
[2024-07-18T09:44:02Z INFO warp::server] listening on http://0.0.0.0:3030
[2024-07-18T09:44:04Z WARN pipeless_ai::config::adapters::rest] Restart policy not specified for stream, defaulting to 'never'
[2024-07-18T09:44:04Z INFO pipeless_ai::dispatcher] New stream entry detected, creating pipeline
[2024-07-18T09:44:06Z INFO pipeless_ai::input::pipeline] Using SystemMemory
[2024-07-18T09:44:06Z ERROR pipeless_ai::input::pipeline] Unable to link new uridecodebin pad to videoconvert sink pad
[2024-07-18T09:44:06Z INFO pipeless_ai::input::pipeline] Using SystemMemory
[2024-07-18T09:44:06Z INFO pipeless_ai::input::pipeline] New tags for input gst pipeline with id 7a0756d4-123f-4a38-a545-ad2c6440b6ce. Tags: taglist, video-codec=(string)"H.264\ \(High\ Profile\)";
[2024-07-18T09:44:06Z INFO pipeless_ai::pipeline] Tags updated to: taglist, video-codec=(string)"H.264\ \(High\ Profile\)";
[2024-07-18T09:44:06Z INFO pipeless_ai::input::pipeline] Dynamic source pad video_0 caps: video/x-raw, format=(string)I420, width=(int)1920, height=(int)1080, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1, chroma-site=(string)mpeg2, colorimetry=(string)bt709, framerate=(fraction)25/1
[2024-07-18T09:44:06Z INFO pipeless_ai::pipeline] New input caps. Creating output pipeline for caps: video/x-raw, format=(string)I420, width=(int)1920, height=(int)1080, interlace-mode=(string)progressive, pixel-aspect-ratio=(fraction)1/1, chroma-site=(string)mpeg2, colorimetry=(string)bt709, framerate=(fraction)25/1
error: XDG_RUNTIME_DIR not set in the environment.
error: XDG_RUNTIME_DIR not set in the environment.
[2024-07-18T09:44:07Z WARN pipeless_ai::output::pipeline] Warning in output gst pipeline from element element sink.
Pipeline id: 7a0756d4-123f-4a38-a545-ad2c6440b6ce. Warning: Could not initialise Xv output
[2024-07-18T09:44:08Z WARN pipeless_ai::stages::inference::onnx] No inference input data was provided. Did you forget to add it at your pre-process hook?
[2024-07-18T09:44:08Z ERROR pipeless_ai::stages::languages::python] Error executing hook: NameError: name 'output' is not defined
[2024-07-18T09:44:08Z WARN pipeless_ai::pipeline] No frame returned from path execution, skipping frame forwarding to the output (if any).
These are the pre- and post-processing scripts I have written.
Pre-Processing:
import cv2
import numpy as np

def hook(frame_data, _):
    frame = frame_data["original"].view()
    orig_h, orig_w, _ = frame.shape
    frame = cv2.resize(frame, (640, 360))
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    normalized_frame = frame / 128.0 - 1
    input_frame = np.expand_dims(normalized_frame, axis=0)  # Add batch dimension
    input_frame = input_frame.astype(np.float32)  # Convert to float32
    frame_data['input'] = input_frame
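For reference, the shaping and normalization steps of the pre-process hook can be sanity-checked standalone with plain NumPy (a minimal sketch using a random array in place of a real frame; the 360x640 grayscale shape matches the resize and color conversion above):

```python
import numpy as np

# Simulate a resized grayscale frame (360 rows x 640 cols), as produced
# by cv2.resize(frame, (640, 360)) followed by BGR2GRAY.
frame = np.random.randint(0, 256, size=(360, 640), dtype=np.uint8)

# Same normalization as the hook: map [0, 255] into [-1, 1).
normalized = frame / 128.0 - 1

# Add the batch dimension and cast, matching what the hook feeds the model.
input_frame = np.expand_dims(normalized, axis=0).astype(np.float32)

print(input_frame.shape)  # (1, 360, 640)
print(input_frame.dtype)  # float32
```

Note that this tensor has no explicit channel dimension, which may be related to the startup warning about the model input channels being recognized as "None".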
Post-Processing:
import cv2 as cv
import numpy as np

def get_mini_boxes(contour, binary):
    bounding_box = cv.minAreaRect(contour)
    points = sorted(list(cv.boxPoints(bounding_box)), key=lambda x: x[0])
    index_1, index_2, index_3, index_4 = 0, 1, 2, 3
    if points[1][1] > points[0][1]:
        index_1 = 0
        index_4 = 1
    else:
        index_1 = 1
        index_4 = 0
    if points[3][1] > points[2][1]:
        index_2 = 2
        index_3 = 3
    else:
        index_2 = 3
        index_3 = 2
    box = [points[index_1], points[index_2], points[index_3], points[index_4]]
    score = box_score_fast(binary, box)
    return np.array(box), score

def box_score_fast(bitmap, _box):
    h, w = bitmap.shape[:2]
    box = np.array(_box).copy()
    xmin = np.clip(np.floor(box[:, 0].min()).astype(int), 0, w - 1)
    xmax = np.clip(np.ceil(box[:, 0].max()).astype(int), 0, w - 1)
    ymin = np.clip(np.floor(box[:, 1].min()).astype(int), 0, h - 1)
    ymax = np.clip(np.ceil(box[:, 1].max()).astype(int), 0, h - 1)
    mask = np.zeros((ymax - ymin + 1, xmax - xmin + 1), dtype=np.uint8)
    box[:, 0] = box[:, 0] - xmin
    box[:, 1] = box[:, 1] - ymin
    cv.fillPoly(mask, [box.astype(np.int32)], 1)
    return cv.mean(bitmap[ymin:ymax + 1, xmin:xmax + 1], mask)[0]
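The corner-ordering logic in get_mini_boxes can be exercised without OpenCV by passing four points directly. The helper below is a hedged sketch that mimics only the index_1..index_4 sorting branches, not cv.minAreaRect or the scoring:

```python
# Reorders four corner points into (top-left, top-right, bottom-right,
# bottom-left), mirroring the index_1..index_4 branches in get_mini_boxes.
def order_corners(points):
    points = sorted(points, key=lambda p: p[0])  # sort by x coordinate
    # Of the two left-most points, the one with the smaller y is top-left.
    if points[1][1] > points[0][1]:
        index_1, index_4 = 0, 1
    else:
        index_1, index_4 = 1, 0
    # Of the two right-most points, the one with the smaller y is top-right.
    if points[3][1] > points[2][1]:
        index_2, index_3 = 2, 3
    else:
        index_2, index_3 = 3, 2
    return [points[index_1], points[index_2], points[index_3], points[index_4]]

corners = order_corners([(10, 40), (10, 5), (90, 5), (90, 40)])
print(corners)  # [(10, 5), (90, 5), (90, 40), (10, 40)]
```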
def hook(frame_data, _):
    frame = frame_data['original']
    mask = frame_data.get('output', [])
    threshold_value = 0.3  # Adjust this value as needed
    mask = cv.threshold(mask[0], threshold_value, 1, cv.THRESH_BINARY)[1]  # Single mask
    if len(mask) > 0:
        h, w = mask.shape
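For completeness, the binary-thresholding step can also be checked without OpenCV. This NumPy-only sketch mirrors the behavior of cv.threshold with THRESH_BINARY (values strictly greater than the threshold become max_value, everything else becomes 0):

```python
import numpy as np

def binarize(mask, threshold=0.3, max_value=1):
    # Equivalent of cv.threshold(mask, threshold, max_value, cv.THRESH_BINARY)[1]:
    # strictly-greater comparison, result keeps the input dtype.
    return np.where(mask > threshold, max_value, 0).astype(mask.dtype)

probs = np.array([[0.1, 0.29], [0.31, 0.9]], dtype=np.float32)
print(binarize(probs))  # 0.29 stays below the 0.3 threshold; 0.31 passes it
```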