Description
Incorporation of ONNX Runtime:
Utilizing the ONNX Runtime can facilitate the deployment of machine learning models across various platforms, enhancing our system's flexibility and performance.
https://onnxruntime.ai/docs/install/
Adoption of libonnx: Employing libonnx, a lightweight, portable C99 ONNX inference engine, can optimize our operations on embedded devices, especially those with hardware acceleration support.
https://github.com/xboot/libonnx
Why this is important:
AI/ML models can be developed with a wide range of frameworks and then integrated with nDPI without format conversions, since they all run on top of ONNX.
@IvanNardi This follows our initial discussion; let's discuss it in more detail and fine-tune the idea so it works in a more portable and modular manner.