Struct anira::OnnxRuntimeProcessor::Instance
struct Instance
Internal processing instance for thread-safe ONNX Runtime operations.
Each Instance represents an independent ONNX Runtime processing context with its own session, tensors, and memory allocation. This design enables parallel processing without shared state or synchronization overhead.
See also
- Thread Safety: Each instance is used by only one thread at a time, eliminating the need for locks during inference operations. The atomic processing flag ensures safe instance allocation across threads.
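To make the allocation scheme concrete, the following is a minimal sketch (not anira's actual pool code; PoolInstance and try_acquire are hypothetical names) of how an atomic flag lets threads claim an instance without locking:

#include <atomic>
#include <vector>

struct PoolInstance {
    std::atomic<bool> m_processing{false};
};

// A thread claims a free instance by atomically flipping its flag from
// false to true. compare_exchange_strong lets exactly one thread win,
// so the winner gets exclusive use with no lock held during inference.
PoolInstance* try_acquire(std::vector<PoolInstance>& instances) {
    for (auto& instance : instances) {
        bool expected = false;
        if (instance.m_processing.compare_exchange_strong(expected, true)) {
            return &instance;
        }
    }
    return nullptr; // every instance is currently processing
}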
Public Functions
Instance(InferenceConfig &inference_config)
Constructs an ONNX Runtime processing instance.
- Parameters:
inference_config – Reference to inference configuration
~Instance()
Destructor that cleans up ONNX Runtime resources for this instance.
void prepare()
Prepares this instance for inference operations.
Loads the ONNX model, creates the session, allocates tensors, and performs initialization.
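As a rough illustration, session creation with the ONNX Runtime C++ API looks like the sketch below; the model path and option values are placeholders, not anira's actual configuration:

#include <memory>
#include <onnxruntime_cxx_api.h>

// Builds an inference session for a model file. prepare() performs this
// kind of setup, with paths and options taken from InferenceConfig.
std::unique_ptr<Ort::Session> make_session(Ort::Env& env) {
    Ort::SessionOptions options;
    options.SetIntraOpNumThreads(1); // placeholder: one intra-op thread per instance
    return std::make_unique<Ort::Session>(env, "model.onnx", options);
}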
void process(AudioBufferF &input, AudioBufferF &output, std::shared_ptr<SessionElement> session)
Processes input through this instance’s ONNX Runtime session.
- Parameters:
input – Input buffers to process
output – Output buffers to fill with results
session – Session element for context (unused by the instance)
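The core of such a call, sketched with the standard ONNX Runtime C++ API (run_inference is a hypothetical helper; the name and tensor vectors mirror the members documented below):

#include <vector>
#include <onnxruntime_cxx_api.h>

// Runs the session on the prepared input tensors and collects the
// resulting output tensors.
void run_inference(Ort::Session& session,
                   const std::vector<const char*>& input_names,
                   const std::vector<Ort::Value>& inputs,
                   const std::vector<const char*>& output_names,
                   std::vector<Ort::Value>& outputs) {
    outputs = session.Run(Ort::RunOptions{nullptr},
                          input_names.data(), inputs.data(), inputs.size(),
                          output_names.data(), output_names.size());
}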
Public Members
Ort::MemoryInfo m_memory_info
Memory information for tensor allocation.
Ort::Env m_env
ONNX Runtime environment.
Ort::AllocatorWithDefaultOptions m_ort_alloc
Default allocator for ONNX Runtime.
Ort::SessionOptions m_session_options
Session configuration options.
std::unique_ptr<Ort::Session> m_session
ONNX Runtime inference session.
std::vector<MemoryBlock<float>> m_input_data
Pre-allocated input data buffers.
std::vector<Ort::Value> m_inputs
ONNX Runtime input tensors.
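Input tensors like these are typically created as views over pre-allocated buffers rather than copies. A minimal sketch, in which the CPU memory info, the two-dimensional shape, and the wrap_buffer helper are illustrative assumptions:

#include <array>
#include <cstddef>
#include <cstdint>
#include <onnxruntime_cxx_api.h>

// Wraps an existing float buffer in an Ort::Value without copying it,
// so inference reads directly from the pre-allocated memory.
Ort::Value wrap_buffer(float* data, std::size_t count) {
    auto memory_info = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    std::array<std::int64_t, 2> shape{1, static_cast<std::int64_t>(count)};
    return Ort::Value::CreateTensor<float>(memory_info, data, count,
                                           shape.data(), shape.size());
}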
std::vector<Ort::Value> m_outputs
ONNX Runtime output tensors.
std::vector<Ort::AllocatedStringPtr> m_input_name
Input tensor names (allocated strings).
std::vector<Ort::AllocatedStringPtr> m_output_name
Output tensor names (allocated strings).
std::vector<const char*> m_output_names
Output tensor name pointers for API calls.
std::vector<const char*> m_input_names
Input tensor name pointers for API calls.
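Both the owning and raw-pointer vectors exist because Session::GetInputNameAllocated() returns owning Ort::AllocatedStringPtr handles, while Session::Run() expects plain const char* arrays; the raw pointers are therefore cached alongside their owners. A short sketch (cache_input_names is a hypothetical helper):

#include <cstddef>
#include <vector>
#include <onnxruntime_cxx_api.h>

// Fetches the input tensor names from a session, keeping the owning
// string handles alive while exposing raw pointers for Run().
void cache_input_names(Ort::Session& session,
                       std::vector<Ort::AllocatedStringPtr>& owners,
                       std::vector<const char*>& names) {
    Ort::AllocatorWithDefaultOptions alloc;
    for (std::size_t i = 0; i < session.GetInputCount(); ++i) {
        owners.push_back(session.GetInputNameAllocated(i, alloc));
        names.push_back(owners.back().get());
    }
}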
InferenceConfig &m_inference_config
Reference to inference configuration.
std::atomic<bool> m_processing = {false}
Flag indicating if instance is currently processing.