Struct anira::OnnxRuntimeProcessor::Instance

struct Instance

Collaboration diagram for anira::OnnxRuntimeProcessor::Instance:

  • anira::OnnxRuntimeProcessor::Instance uses anira::InferenceConfig and anira::MemoryBlock< float >.

  • anira::InferenceConfig in turn uses anira::ProcessingSpec, anira::ModelData, and anira::TensorShape.

Internal processing instance for thread-safe ONNX Runtime operations.

Each Instance represents an independent ONNX Runtime processing context with its own session, tensors, and memory allocation. This design enables parallel processing without shared state or synchronization overhead.

Thread Safety:

Each instance is used by only one thread at a time, eliminating the need for locks during inference operations. The atomic processing flag ensures safe instance allocation across threads.
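
The flag-based allocation can be illustrated with a short sketch. The InstanceSlot struct and try_acquire helper below are hypothetical (anira's actual dispatch logic lives outside this struct); only the m_processing flag mirrors the documented member:

    #include <atomic>
    #include <vector>

    struct InstanceSlot {
        std::atomic<bool> m_processing{false};
        // ... session, tensors, and buffers as documented below
    };

    // Return a free slot, or nullptr if every instance is busy.
    InstanceSlot* try_acquire(std::vector<InstanceSlot>& instances) {
        for (auto& inst : instances) {
            bool expected = false;
            // Succeeds only if no other thread currently holds this slot.
            if (inst.m_processing.compare_exchange_strong(expected, true))
                return &inst;  // caller resets m_processing when finished
        }
        return nullptr;
    }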

Public Functions

Instance(InferenceConfig &inference_config)

Constructs an ONNX Runtime processing instance.

Parameters:

inference_config – Reference to the inference configuration; it is stored in m_inference_config and must outlive the instance

~Instance()

Destructor that cleans up ONNX Runtime resources for this instance.

void prepare()

Prepares this instance for inference operations.

Loads the ONNX model, creates the inference session, allocates input and output tensors, and performs initial setup.
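
A hedged sketch of these steps using the public ONNX Runtime C++ API; the free-function shape, thread count, and "model.onnx" path are illustrative, not anira's actual code:

    #include <onnxruntime_cxx_api.h>
    #include <memory>

    void prepare_sketch(Ort::Env& env,
                        Ort::SessionOptions& options,
                        std::unique_ptr<Ort::Session>& session) {
        options.SetIntraOpNumThreads(1);  // one instance, one thread
        // Loads the model file and builds the inference session
        // (on Windows the path argument must be a wide string).
        session = std::make_unique<Ort::Session>(env, "model.onnx", options);
    }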

void process(std::vector<BufferF> &input, std::vector<BufferF> &output, std::shared_ptr<SessionElement> session)

Processes input through this instance’s ONNX Runtime session.

Parameters:
  • input – Input buffers to process

  • output – Output buffers to fill with results

  • session – Session element providing processing context (unused by this instance)
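
At its core, the call reduces to a single Session::Run() over the pre-built tensors. The following hedged sketch shows that pattern with the documented member vectors passed in as parameters; the copying of audio data into and out of the tensors is omitted:

    #include <onnxruntime_cxx_api.h>
    #include <vector>

    std::vector<Ort::Value> run_inference(
            Ort::Session& session,
            const std::vector<const char*>& input_names,
            std::vector<Ort::Value>& inputs,
            const std::vector<const char*>& output_names) {
        // Run() consumes raw name arrays and tensor values and returns
        // one output tensor per requested output name.
        return session.Run(Ort::RunOptions{nullptr},
                           input_names.data(), inputs.data(), inputs.size(),
                           output_names.data(), output_names.size());
    }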

Public Members

Ort::MemoryInfo m_memory_info

Memory information for tensor allocation.

Ort::Env m_env

ONNX Runtime environment.

Ort::AllocatorWithDefaultOptions m_ort_alloc

Default allocator for ONNX Runtime.

Ort::SessionOptions m_session_options

Session configuration options.

std::unique_ptr<Ort::Session> m_session

ONNX Runtime inference session.

std::vector<MemoryBlock<float>> m_input_data

Pre-allocated input data buffers.

std::vector<Ort::Value> m_inputs

ONNX Runtime input tensors.

std::vector<Ort::Value> m_outputs

ONNX Runtime output tensors.
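
The link between the pre-allocated buffers and these tensors can be sketched as follows; wrap_buffer and the 2-D shape are illustrative, not anira's actual code:

    #include <onnxruntime_cxx_api.h>
    #include <array>
    #include <cstddef>
    #include <cstdint>

    Ort::Value wrap_buffer(const Ort::MemoryInfo& memory_info,
                           float* data, std::size_t count) {
        // Shape is illustrative: one batch of `count` samples.
        const std::array<int64_t, 2> shape{1, static_cast<int64_t>(count)};
        // CreateTensor wraps the caller's memory without copying, so the
        // backing buffer (e.g. a MemoryBlock<float>) must outlive the value.
        return Ort::Value::CreateTensor<float>(memory_info, data, count,
                                               shape.data(), shape.size());
    }

A CPU-backed Ort::MemoryInfo of the kind m_memory_info holds can be created with Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault).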

std::vector<Ort::AllocatedStringPtr> m_input_name

Input tensor names (allocated strings).

std::vector<Ort::AllocatedStringPtr> m_output_name

Output tensor names (allocated strings).

std::vector<const char*> m_output_names

Output tensor name pointers for API calls.

std::vector<const char*> m_input_names

Input tensor name pointers for API calls.
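
The owned-string and raw-pointer vectors work in tandem, as the following hedged sketch shows for the input side (collect_input_names is a hypothetical helper):

    #include <onnxruntime_cxx_api.h>
    #include <cstddef>
    #include <vector>

    void collect_input_names(Ort::Session& session,
                             Ort::AllocatorWithDefaultOptions& alloc,
                             std::vector<Ort::AllocatedStringPtr>& owned,
                             std::vector<const char*>& raw) {
        for (std::size_t i = 0; i < session.GetInputCount(); ++i) {
            // The AllocatedStringPtr keeps each name string alive ...
            owned.push_back(session.GetInputNameAllocated(i, alloc));
            // ... while Run() consumes the raw const char* array.
            raw.push_back(owned.back().get());
        }
    }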

InferenceConfig &m_inference_config

Reference to inference configuration.

std::atomic<bool> m_processing = {false}

Flag indicating if instance is currently processing.
