Class anira::LibtorchProcessor

class LibtorchProcessor : public anira::BackendBase

Inheritance diagram for anira::LibtorchProcessor:

digraph {
    graph [bgcolor="#00000000"]
    node [shape=rectangle style=filled fillcolor="#FFFFFF" font=Helvetica padding=2]
    edge [color="#1414CE"]
    "2" [label="anira::BackendBase" tooltip="anira::BackendBase"]
    "1" [label="anira::LibtorchProcessor" tooltip="anira::LibtorchProcessor" fillcolor="#BFBFBF"]
    "1" -> "2" [dir=forward tooltip="public-inheritance"]
}

Collaboration diagram for anira::LibtorchProcessor:

digraph {
    graph [bgcolor="#00000000"]
    node [shape=rectangle style=filled fillcolor="#FFFFFF" font=Helvetica padding=2]
    edge [color="#1414CE"]
    "8" [label="anira::MemoryBlock< float >" tooltip="anira::MemoryBlock< float >"]
    "2" [label="anira::BackendBase" tooltip="anira::BackendBase"]
    "3" [label="anira::InferenceConfig" tooltip="anira::InferenceConfig"]
    "1" [label="anira::LibtorchProcessor" tooltip="anira::LibtorchProcessor" fillcolor="#BFBFBF"]
    "7" [label="anira::LibtorchProcessor::Instance" tooltip="anira::LibtorchProcessor::Instance"]
    "5" [label="anira::ModelData" tooltip="anira::ModelData"]
    "4" [label="anira::ProcessingSpec" tooltip="anira::ProcessingSpec"]
    "6" [label="anira::TensorShape" tooltip="anira::TensorShape"]
    "2" -> "3" [dir=forward tooltip="usage"]
    "3" -> "4" [dir=forward tooltip="usage"]
    "3" -> "5" [dir=forward tooltip="usage"]
    "3" -> "6" [dir=forward tooltip="usage"]
    "1" -> "2" [dir=forward tooltip="public-inheritance"]
    "1" -> "7" [dir=forward tooltip="usage"]
    "7" -> "3" [dir=forward tooltip="usage"]
    "7" -> "8" [dir=forward tooltip="usage"]
}

LibTorch-based neural network inference processor.

The LibtorchProcessor class provides neural network inference capabilities using the PyTorch C++ API (LibTorch). It supports loading TorchScript models and performing real-time inference with parallel processing capabilities.

Warning

This class is only available when anira is compiled with USE_LIBTORCH defined.
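The typical lifecycle of the processor can be sketched as follows. This is a minimal, hypothetical usage example: it assumes an already-populated anira::InferenceConfig and an umbrella header name, neither of which is shown in this reference.

```cpp
// Hypothetical usage sketch; assumes a fully populated
// anira::InferenceConfig (its construction is not shown here)
// and that <anira/anira.h> is the library's umbrella header.
#include <anira/anira.h>
#include <memory>
#include <vector>

void run_inference(anira::InferenceConfig& config,
                   std::vector<anira::BufferF>& input,
                   std::vector<anira::BufferF>& output,
                   std::shared_ptr<anira::SessionElement> session) {
    // Construct the processor; parallel instances are created
    // according to the configuration's num_parallel_processors setting.
    anira::LibtorchProcessor processor(config);

    // Load the TorchScript model into every instance and run
    // optional warm-up inferences.
    processor.prepare();

    // Run one inference pass: buffers in, buffers out.
    processor.process(input, output, session);
}
```

The construct/prepare/process order mirrors the member documentation below; prepare() must complete before process() is called.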

Public Functions

LibtorchProcessor(InferenceConfig &inference_config)

Constructs a LibTorch processor with the given inference configuration.

Initializes the LibTorch processor and creates the necessary number of parallel processing instances based on the configuration’s num_parallel_processors setting.

Model Loading:

The constructor attempts to load the TorchScript model specified in the configuration. If a model function is specified, it will be used; otherwise, the default forward method is called.

Parameters:

inference_config – Reference to inference configuration containing model path, tensor shapes, and processing parameters
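The model-loading behavior described above (a named model function if specified, otherwise the default forward method) can be illustrated directly against the LibTorch API. This is not anira's actual implementation; the function names here are hypothetical.

```cpp
// Illustrative sketch of TorchScript model loading and dispatch,
// written against the LibTorch API. Not anira's internal code;
// load_model and run_model are hypothetical names.
#include <torch/script.h>
#include <string>
#include <vector>

torch::jit::Module load_model(const std::string& model_path) {
    // torch::jit::load deserializes a TorchScript model from disk.
    return torch::jit::load(model_path);
}

torch::Tensor run_model(torch::jit::Module& module,
                        std::vector<torch::jit::IValue> inputs,
                        const std::string& function_name) {
    if (!function_name.empty()) {
        // If a model function is specified, call that method...
        return module.get_method(function_name)(std::move(inputs)).toTensor();
    }
    // ...otherwise fall back to the default forward method.
    return module.forward(std::move(inputs)).toTensor();
}
```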

~LibtorchProcessor() override

Destructor that properly cleans up LibTorch resources.

Ensures proper cleanup of all LibTorch modules, tensors, and allocated memory. All processing instances are safely destroyed.

virtual void prepare() override

Prepares all LibTorch instances for inference operations.

Loads the TorchScript model into all parallel processing instances, allocates input/output tensors, and performs warm-up inferences if specified in the configuration.

virtual void process(std::vector<BufferF> &input, std::vector<BufferF> &output, std::shared_ptr<SessionElement> session) override

Processes input buffers through the LibTorch model.

Performs neural network inference using LibTorch, converting audio buffers to PyTorch tensors, executing the model, and converting results back to audio buffers.

Parameters:
  • input – Vector of input buffers containing audio samples or parameter data

  • output – Vector of output buffers to receive processed results

  • session – Shared pointer to session element providing thread-safe instance access
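The buffer-to-tensor round trip that process() performs can be sketched with plain float vectors standing in for anira's BufferF (whose exact API this reference does not show). This is a conceptual illustration, not anira's code.

```cpp
// Conceptual sketch of the buffer <-> tensor conversion inside an
// inference pass, using std::vector<float> in place of anira::BufferF.
// Hypothetical code, not anira's implementation.
#include <torch/script.h>
#include <vector>

void infer(torch::jit::Module& module,
           std::vector<float>& samples,
           std::vector<float>& result) {
    // Wrap the input samples in a tensor without copying
    // (shape {1, N} assumed for illustration).
    torch::Tensor input = torch::from_blob(
        samples.data(), {1, static_cast<int64_t>(samples.size())});

    // Execute the model on the input tensor.
    torch::Tensor output = module.forward({input}).toTensor().contiguous();

    // Copy the results back into the output buffer.
    result.assign(output.data_ptr<float>(),
                  output.data_ptr<float>() + output.numel());
}
```

Note that torch::from_blob does not own its memory, so the input buffer must outlive the tensor; the final copy decouples the output buffer from LibTorch's memory.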