
gstonnx: support neural network inference on video via ONNX

Aaron Boxer requested to merge boxerab/gst-plugins-bad:neuralnet into master

[Screenshot: Screenshot_from_2021-04-02_12-19-33]

Summary

This MR provides a transform element that leverages the ONNX Runtime to run neural network inference on a broad range of hardware. ONNX Runtime currently supports 16 different execution providers, so we immediately get support for Nvidia, AMD, Xilinx and many others.

For the first release, this plugin adds an onnxobjectdetector element that detects objects in video frames. Metadata generated by the model is attached to the video buffer as a GstVideoRegionOfInterestMeta. The metadata is also written to the debug log, so the accuracy of the model can be verified.

Build

ONNX

Note:

  1. $SRC_DIR and $BUILD_DIR are the local source and build directories, respectively.
  2. To run with CUDA, both the CUDA and cuDNN libraries must be installed. $CUDA_PATH is an environment variable set to the CUDA root path; on Linux this is typically /usr/local/cuda-XX.X, where XX.X is the installed CUDA version.
  3. To run with a Xilinx FPGA:
  • set up the host and the Vitis docker image as described here
  • launch the docker container
$ cd $SRC_DIR
$ git clone --recursive https://github.com/microsoft/onnxruntime.git
$ mkdir -p $BUILD_DIR/onnxruntime && cd $BUILD_DIR/onnxruntime
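Before configuring a CUDA build (see note 2 above), it can save a failed configure run to check that $CUDA_PATH actually points at a toolkit that also carries the cuDNN headers. A minimal pre-flight sketch; the default path below is an assumption, adjust it for your install:

```shell
# Pre-flight check for the CUDA build (a sketch; /usr/local/cuda is only a
# guess at the default -- set CUDA_PATH to your real CUDA root).
CUDA_PATH=${CUDA_PATH:-/usr/local/cuda}
for f in "$CUDA_PATH/bin/nvcc" "$CUDA_PATH/include/cudnn.h"; do
  if [ -e "$f" ]; then
    echo "found: $f"
  else
    echo "missing: $f"
  fi
done
```

If either file is reported missing, the `-Donnxruntime_CUDA_HOME` / `-Donnxruntime_CUDNN_HOME` flags below will not resolve correctly.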
  1. CPU
$ cmake -Donnxruntime_BUILD_SHARED_LIB=ON -DBUILD_TESTING=OFF -Donnxruntime_BUILD_UNIT_TESTS=OFF $SRC_DIR/onnxruntime/cmake && make -j8 && sudo make install
  2. GPU
$ cmake -Donnxruntime_BUILD_SHARED_LIB=ON -DBUILD_TESTING=OFF -Donnxruntime_BUILD_UNIT_TESTS=OFF -Donnxruntime_USE_CUDA=ON -Donnxruntime_CUDA_HOME=$CUDA_PATH -Donnxruntime_CUDNN_HOME=$CUDA_PATH $SRC_DIR/onnxruntime/cmake && make -j8 && sudo make install
  3. Xilinx FPGA
$ cmake -Donnxruntime_BUILD_SHARED_LIB=ON -DBUILD_TESTING=OFF -Donnxruntime_BUILD_UNIT_TESTS=OFF -Donnxruntime_USE_VITISAI=ON $SRC_DIR/onnxruntime/cmake && make -j48 && sudo make install
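After `sudo make install`, a quick way to confirm the runtime is visible to the dynamic linker. This is a sketch, not part of the MR: it assumes a Linux host and the default /usr/local install prefix:

```shell
# Post-install sanity check (a sketch): look for libonnxruntime in the
# dynamic linker cache. A fresh install may need `sudo ldconfig` first.
if ldconfig -p 2>/dev/null | grep -q libonnxruntime; then
  status="onnxruntime: registered with the dynamic linker"
else
  status="onnxruntime: not in the linker cache (try 'sudo ldconfig', or check the install prefix)"
fi
echo "$status"
```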

Test

The test image, model file (a YOLO model) and label file can be found here. Note: Git LFS is required to clone the model files.
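Without Git LFS installed, cloning the assets yields small text pointer files instead of the actual .onnx weights, which fails at model-load time in a confusing way. A quick availability check (a sketch; install via your distro's git-lfs package if missing):

```shell
# Check that Git LFS is usable before cloning the model repository (sketch).
# Without it, model files come down as LFS pointer stubs, not real weights.
if git lfs version >/dev/null 2>&1; then
  lfs_status="git-lfs: available"
else
  lfs_status="git-lfs: missing (install the git-lfs package, then run 'git lfs install')"
fi
echo "$lfs_status"
```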

Sample test pipeline:

GST_DEBUG=onnxobjectdetector:5 gst-launch-1.0 multifilesrc \
  location=000000088462.jpg caps=image/jpeg,framerate=\(fraction\)30/1 ! \
  jpegdec ! videoconvert ! \
  onnxobjectdetector \
    box-node-index=0 \
    class-node-index=1 \
    score-node-index=2 \
    detection-node-index=3 \
    execution-provider=cpu \
    model-file=model.onnx \
    label-file=COCO_classes.txt ! \
  videoconvert ! autovideosink
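Since the detector is an ordinary transform element, it should drop into a live-capture pipeline unchanged. A hypothetical variant, not taken from the MR and untested here, assuming a v4l2 camera at the default device:

```shell
# Hypothetical live-camera pipeline (a sketch): same detector properties as the
# file-based test above, with v4l2src replacing the JPEG source.
GST_DEBUG=onnxobjectdetector:5 gst-launch-1.0 v4l2src ! videoconvert ! \
  onnxobjectdetector \
    box-node-index=0 \
    class-node-index=1 \
    score-node-index=2 \
    detection-node-index=3 \
    execution-provider=cpu \
    model-file=model.onnx \
    label-file=COCO_classes.txt ! \
  videoconvert ! autovideosink
```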