Struct anira::LibtorchProcessor::Instance

struct Instance

Collaboration diagram for anira::LibtorchProcessor::Instance:

digraph {
    graph [bgcolor="#00000000"]
    node [shape=rectangle style=filled fillcolor="#FFFFFF" font=Helvetica padding=2]
    edge [color="#1414CE"]
    "6" [label="anira::MemoryBlock< float >" tooltip="anira::MemoryBlock< float >"]
    "2" [label="anira::InferenceConfig" tooltip="anira::InferenceConfig"]
    "1" [label="anira::LibtorchProcessor::Instance" tooltip="anira::LibtorchProcessor::Instance" fillcolor="#BFBFBF"]
    "4" [label="anira::ModelData" tooltip="anira::ModelData"]
    "3" [label="anira::ProcessingSpec" tooltip="anira::ProcessingSpec"]
    "5" [label="anira::TensorShape" tooltip="anira::TensorShape"]
    "2" -> "3" [dir=forward tooltip="usage"]
    "2" -> "4" [dir=forward tooltip="usage"]
    "2" -> "5" [dir=forward tooltip="usage"]
    "1" -> "2" [dir=forward tooltip="usage"]
    "1" -> "6" [dir=forward tooltip="usage"]
}

Internal processing instance for thread-safe LibTorch operations.

Each Instance represents an independent LibTorch processing context with its own model, tensors, and memory allocation. This design enables parallel processing without shared state or synchronization overhead.

Thread Safety:

Each instance is used by only one thread at a time, eliminating the need for locks during inference operations. The atomic processing flag ensures safe instance allocation across threads.
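The lock-free allocation pattern described above can be sketched with standard C++ atomics. Everything below is illustrative: `InstanceSketch`, `acquire`, and `release` are hypothetical names, not part of anira's API; the real `Instance` also owns the model and tensor state.

```cpp
#include <array>
#include <atomic>
#include <cstddef>

// Illustrative instance holding only the atomic flag the documentation
// describes; anira's real Instance also owns the module and buffers.
struct InstanceSketch {
    std::atomic<bool> m_processing{false};
};

// Scan a pool and claim the first free instance. compare_exchange_strong
// flips the flag atomically, so two threads can never claim the same
// instance, and no mutex is needed on the inference path.
template <std::size_t N>
InstanceSketch* acquire(std::array<InstanceSketch, N>& pool) {
    for (auto& inst : pool) {
        bool expected = false;
        if (inst.m_processing.compare_exchange_strong(expected, true))
            return &inst;  // this thread now owns the instance
    }
    return nullptr;  // all instances are currently busy
}

// After inference finishes, release the instance for other threads.
void release(InstanceSketch& inst) {
    inst.m_processing.store(false, std::memory_order_release);
}
```

Because the flag is claimed and released per inference call, threads contend only on the brief compare-exchange, never on the model itself.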

Public Functions

Instance(InferenceConfig &inference_config)

Constructs a LibTorch processing instance.

Parameters:

inference_config – Reference to inference configuration

void prepare()

Prepares this instance for inference operations.

Loads the TorchScript model, allocates tensors, and performs initialization.

void process(std::vector<BufferF> &input, std::vector<BufferF> &output, std::shared_ptr<SessionElement> session)

Processes input through this instance’s model.

Parameters:
  • input – Input buffers to process

  • output – Output buffers to fill with results

  • session – Session element providing processing context (unused by this instance)
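To illustrate the zero-allocation data flow that `process()` relies on, here is a minimal plain-C++ sketch. It is an assumption-laden stand-in: `Buffer` replaces `anira::BufferF`, the member vector replaces the pre-allocated `MemoryBlock<float>`, and the doubling loop is a hypothetical placeholder for the TorchScript forward pass.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for anira::BufferF; the real type also
// manages channel layout.
using Buffer = std::vector<float>;

struct InstanceSketch {
    // Stands in for the pre-allocated MemoryBlock<float> members:
    // sized once in prepare(), then reused on every process() call
    // so the audio path never allocates.
    Buffer m_input_data;

    void prepare(std::size_t num_samples) {
        m_input_data.resize(num_samples);  // one-time allocation
    }

    // Mirrors the documented flow: copy input into the pre-allocated
    // block, run the model, write results into the output buffer.
    void process(const Buffer& input, Buffer& output) {
        std::copy(input.begin(), input.end(), m_input_data.begin());
        for (float& s : m_input_data) s *= 2.0f;  // placeholder "model"
        std::copy(m_input_data.begin(), m_input_data.end(), output.begin());
    }
};
```

The key design point this sketch preserves is that all memory is claimed in `prepare()`, keeping `process()` allocation-free.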

Public Members

torch::jit::script::Module m_module

Loaded TorchScript model for inference.

std::vector<MemoryBlock<float>> m_input_data

Pre-allocated input data buffers.

std::vector<c10::IValue> m_inputs

PyTorch input tensor values.

c10::IValue m_outputs

PyTorch output value from the forward pass; a single c10::IValue that may hold one tensor or a tuple of tensors.

InferenceConfig &m_inference_config

Reference to inference configuration.

std::atomic<bool> m_processing = {false}

Flag indicating if instance is currently processing.