Struct anira::LibtorchProcessor::Instance¶
-
struct Instance¶
Collaboration diagram for anira::LibtorchProcessor::Instance:
Internal processing instance for thread-safe LibTorch operations.
Each Instance represents an independent LibTorch processing context with its own model, tensors, and memory allocation. This design enables parallel processing without shared state or synchronization overhead.
See also
- Thread Safety:
Each instance is used by only one thread at a time, eliminating the need for locks during inference operations. The atomic m_processing flag ensures that instances are allocated safely across threads.
Public Functions
-
Instance(InferenceConfig &inference_config)¶
Constructs a LibTorch processing instance.
- Parameters:
inference_config – Reference to inference configuration
-
void prepare()¶
Prepares this instance for inference operations.
Loads the TorchScript model, allocates tensors, and performs initialization.
-
Processes input through this instance’s model.
- Parameters:
input – Input buffers to process
output – Output buffers to fill with results
session – Session element for context (unused by this instance)
Public Members
-
torch::jit::script::Module m_module¶
Loaded TorchScript model for inference.
-
std::vector<MemoryBlock<float>> m_input_data¶
Pre-allocated input data buffers.
-
std::vector<c10::IValue> m_inputs¶
PyTorch input tensor values.
-
c10::IValue m_outputs¶
PyTorch output value produced by inference.
-
InferenceConfig &m_inference_config¶
Reference to inference configuration.
-
std::atomic<bool> m_processing = {false}¶
Flag indicating whether this instance is currently processing.