Class anira::OnnxRuntimeProcessor

class OnnxRuntimeProcessor : public anira::BackendBase

Inheritance diagram for anira::OnnxRuntimeProcessor:

anira::OnnxRuntimeProcessor → anira::BackendBase (public inheritance)

Collaboration diagram for anira::OnnxRuntimeProcessor:

anira::OnnxRuntimeProcessor → anira::BackendBase (public inheritance)
anira::OnnxRuntimeProcessor → anira::OnnxRuntimeProcessor::Instance (usage)
anira::OnnxRuntimeProcessor::Instance → anira::InferenceConfig (usage)
anira::OnnxRuntimeProcessor::Instance → anira::MemoryBlock< float > (usage)
anira::BackendBase → anira::InferenceConfig (usage)
anira::InferenceConfig → anira::ProcessingSpec (usage)
anira::InferenceConfig → anira::ModelData (usage)
anira::InferenceConfig → anira::TensorShape (usage)

ONNX Runtime-based neural network inference processor.

The OnnxRuntimeProcessor class provides neural network inference capabilities using Microsoft’s ONNX Runtime. It supports loading ONNX models and performing real-time inference with optimized execution providers and parallel processing.

Warning

This class is only available when compiled with USE_ONNXRUNTIME defined.
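
Because availability is gated on the build configuration, callers typically guard both the include and any use of the class behind the same definition. A minimal sketch; the header path is an assumption for illustration, not taken from this page:

// Sketch: compile-time guard matching the warning above.
// The include path is assumed; check your anira installation.
#ifdef USE_ONNXRUNTIME
#include <anira/backends/OnnxRuntimeProcessor.h>
#endif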

Public Functions

OnnxRuntimeProcessor(InferenceConfig &inference_config)

Constructs an ONNX Runtime processor with the given inference configuration.

Initializes the ONNX Runtime processor and creates the necessary number of parallel processing instances based on the configuration’s num_parallel_processors setting.

Parameters:

inference_config – Reference to inference configuration containing model path, tensor shapes, and processing parameters
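
A minimal construction sketch, assuming a configuration prepared elsewhere; the placeholder stands in for the InferenceConfig constructor arguments (model data, tensor shapes, processing parameters), which are documented on the InferenceConfig page:

// Sketch: the processor is built from an existing configuration.
// The constructor creates num_parallel_processors instances up front.
anira::InferenceConfig inference_config = /* model data, tensor shapes, ... */;
anira::OnnxRuntimeProcessor processor(inference_config);

Since the constructor takes the configuration by non-const reference (and BackendBase continues to use it, per the collaboration diagram above), the InferenceConfig should outlive the processor.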

~OnnxRuntimeProcessor() override

Destructor that properly cleans up ONNX Runtime resources.

Ensures proper cleanup of all ONNX Runtime sessions, tensors, and allocated memory. All processing instances are safely destroyed with proper resource deallocation.

virtual void prepare() override

Prepares all ONNX Runtime instances for inference operations.

Loads the ONNX model into all parallel processing instances, allocates input/output tensors, and performs warm-up inferences if specified in the configuration.
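
In a typical lifecycle, prepare() is called once after construction and before any process() calls, outside the real-time thread, since model loading and warm-up inference are comparatively expensive. A sketch:

// Sketch: load the model into all parallel instances and allocate
// tensors before real-time processing begins.
processor.prepare();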

virtual void process(std::vector<BufferF> &input, std::vector<BufferF> &output, std::shared_ptr<SessionElement> session) override

Processes input buffers through the ONNX Runtime model.

Performs neural network inference using ONNX Runtime, converting audio buffers to ONNX tensors, executing the model, and converting results back to audio buffers.

Parameters:
  • input – Vector of input buffers containing audio samples or parameter data

  • output – Vector of output buffers to receive processed results

  • session – Shared pointer to session element providing thread-safe instance access
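
A hedged call sketch; the buffers and the SessionElement are normally supplied by anira's inference pipeline rather than constructed by user code, so the placeholders below only mark where they come from:

// Sketch: one inference pass. The session element is created by anira's
// pipeline, not by user code; the initializer here is a placeholder.
std::vector<anira::BufferF> inputs;   // pre-filled input buffers
std::vector<anira::BufferF> outputs;  // pre-sized output buffers
std::shared_ptr<anira::SessionElement> session = /* from the pipeline */;
processor.process(inputs, outputs, session);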