Class anira::BackendBase
-
class BackendBase
Abstract base class for all neural network inference backends.
The BackendBase class defines the common interface and provides basic functionality for all inference backend implementations. It serves as the foundation for specific backend implementations such as LibTorch, ONNX Runtime, and TensorFlow Lite processors.
Subclassed by anira::LibtorchProcessor, anira::OnnxRuntimeProcessor, anira::TFLiteProcessor
Public Functions
-
BackendBase(InferenceConfig &inference_config)
Constructs a BackendBase with the given inference configuration.
Initializes the backend processor with a reference to the inference configuration that contains all necessary parameters for model loading and processing.
- Parameters:
inference_config – Reference to the inference configuration containing model data, tensor shapes, and processing specifications
-
virtual ~BackendBase() = default
Virtual destructor for proper cleanup of derived classes.
-
virtual void prepare()
Prepares the backend for inference operations.
This method is called during initialization to set up the inference backend. The base implementation is empty; derived classes should override it to perform backend-specific initialization such as:
- Loading neural network models
- Allocating memory for tensors
- Configuring inference sessions
- Performing warm-up inferences
Note
This method should be called before any process() calls
Note
Thread safety: this method should only be called during initialization, not concurrently with process()
Processes input buffers through the neural network model.
Performs inference on the provided input buffers and writes the results to the output buffers. The base implementation is a simple pass-through: it copies input to output when the buffer dimensions match and clears the output otherwise.
- Thread Safety:
This method is designed to be called from real-time audio threads and should be lock-free and deterministic in execution time.
Note
Derived classes should override this method to implement actual inference
Warning
The session parameter must be valid when using multi-threaded processing
- Parameters:
input – Vector of input buffers containing audio or other data to process
output – Vector of output buffers to write the processed results
session – Shared pointer to session element for thread-safe processing context
Public Members
-
InferenceConfig &m_inference_config
Reference to inference configuration containing model and processing parameters.