Class anira::LibtorchProcessor¶
-
class LibtorchProcessor : public anira::BackendBase¶
Inheritance diagram for anira::LibtorchProcessor:
Collaboration diagram for anira::LibtorchProcessor:
LibTorch-based neural network inference processor.
The LibtorchProcessor class provides neural network inference capabilities using the PyTorch C++ API (LibTorch). It supports loading TorchScript models and performing real-time inference with parallel processing capabilities.
Warning
This class is only available when compiled with the USE_LIBTORCH preprocessor definition.
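As an illustration of the compile-time guard described in the warning, the following self-contained sketch shows how code can detect whether the backend was compiled in (the CMake line in the comment is a typical way to set the definition, not taken from anira's build files):

```cpp
// Hypothetical sketch: the LibTorch backend exists only when USE_LIBTORCH
// is defined at compile time, typically set via CMake, e.g.
//   target_compile_definitions(my_target PRIVATE USE_LIBTORCH)
constexpr bool libtorch_enabled() {
#ifdef USE_LIBTORCH
    return true;   // LibtorchProcessor is available
#else
    return false;  // LibtorchProcessor is compiled out
#endif
}
```

Guarding call sites with the same definition avoids link errors when the backend is absent.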
Public Functions
-
LibtorchProcessor(InferenceConfig &inference_config)¶
Constructs a LibTorch processor with the given inference configuration.
Initializes the LibTorch processor and creates the necessary number of parallel processing instances based on the configuration’s num_parallel_processors setting.
- Model Loading:
The constructor attempts to load the TorchScript model specified in the configuration. If a model function is specified, it will be used; otherwise, the default forward method is called.
- Parameters:
inference_config – Reference to inference configuration containing model path, tensor shapes, and processing parameters
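The instance-pool behaviour described for the constructor can be sketched in a self-contained way; the types below are simplified stand-ins for illustration only (this `InferenceConfig` is a mock, not anira's real class):

```cpp
#include <cstddef>
#include <memory>
#include <string>
#include <vector>

// Mock configuration, standing in for anira::InferenceConfig.
struct InferenceConfig {
    std::string model_path;          // path to the TorchScript model
    size_t num_parallel_processors;  // number of parallel instances to create
};

class Processor {
public:
    explicit Processor(const InferenceConfig& config) : m_config(config) {
        // One instance per parallel processor, mirroring the constructor
        // behaviour described above for LibtorchProcessor.
        for (size_t i = 0; i < config.num_parallel_processors; ++i) {
            m_instances.push_back(std::make_unique<Instance>());
        }
    }
    size_t instance_count() const { return m_instances.size(); }
private:
    struct Instance { /* in the real class: a torch::jit::Module etc. */ };
    InferenceConfig m_config;
    std::vector<std::unique_ptr<Instance>> m_instances;
};
```

With `num_parallel_processors` set to 4, `Processor({"model.pt", 4}).instance_count()` yields 4; each instance can then serve inference requests independently.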
-
~LibtorchProcessor() override¶
Destructor that properly cleans up LibTorch resources.
Ensures proper cleanup of all LibTorch modules, tensors, and allocated memory. All processing instances are safely destroyed.
-
virtual void prepare() override¶
Prepares all LibTorch instances for inference operations.
Loads the TorchScript model into all parallel processing instances, allocates input/output tensors, and performs warm-up inferences if specified in the configuration.
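The warm-up step mentioned above can be sketched as follows; `DummyModel` is a stand-in for a loaded TorchScript module, and the exact warm-up count in anira comes from the configuration (assumed behaviour, not anira's actual implementation):

```cpp
#include <cstddef>
#include <vector>

// Stand-in for a loaded model; first calls to a real module often trigger
// lazy allocations and JIT optimization, which warm-up moves off the
// real-time path.
struct DummyModel {
    int calls = 0;
    std::vector<float> forward(const std::vector<float>& in) {
        ++calls;  // lazy initialisation would happen on the first call
        return std::vector<float>(in.size(), 0.0f);
    }
};

// Run a number of dummy inferences ahead of time and discard the results.
void warm_up(DummyModel& model, size_t num_warm_up, size_t input_size) {
    std::vector<float> dummy_input(input_size, 0.0f);
    for (size_t i = 0; i < num_warm_up; ++i) {
        (void)model.forward(dummy_input);
    }
}
```

After warm-up, subsequent real inferences avoid first-call initialisation cost, which matters for real-time audio deadlines.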
-
virtual void process(...) override¶
Processes input buffers through the LibTorch model.
Performs neural network inference using LibTorch, converting audio buffers to PyTorch tensors, executing the model, and converting results back to audio buffers.
- Parameters:
input – Vector of input buffers containing audio samples or parameter data
output – Vector of output buffers to receive processed results
session – Shared pointer to session element providing thread-safe instance access
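The buffer-to-tensor round trip described above can be sketched with plain vectors standing in for PyTorch tensors (a real implementation would wrap contiguous data with something like `torch::from_blob`; the layout below is an assumption for illustration):

```cpp
#include <cstddef>
#include <vector>

// Flatten per-channel audio buffers into one contiguous block, the layout a
// tensor wrapper such as torch::from_blob expects (channel-major here).
std::vector<float> buffers_to_tensor(const std::vector<std::vector<float>>& buffers) {
    std::vector<float> flat;
    for (const auto& channel : buffers)
        flat.insert(flat.end(), channel.begin(), channel.end());
    return flat;
}

// Split a contiguous block back into per-channel buffers after inference.
std::vector<std::vector<float>> tensor_to_buffers(const std::vector<float>& flat,
                                                  size_t num_channels) {
    size_t samples = flat.size() / num_channels;
    std::vector<std::vector<float>> buffers(num_channels);
    for (size_t c = 0; c < num_channels; ++c)
        buffers[c].assign(flat.begin() + c * samples,
                          flat.begin() + (c + 1) * samples);
    return buffers;
}
```

A round trip through both functions reproduces the original buffers, which is the invariant the conversion in process() must preserve.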