Class anira::BackendBase

class BackendBase

Inheritance diagram for anira::BackendBase:

digraph {
    graph [bgcolor="#00000000"]
    node [shape=rectangle style=filled fillcolor="#FFFFFF" font=Helvetica padding=2]
    edge [color="#1414CE"]
    "1" [label="anira::BackendBase" tooltip="anira::BackendBase" fillcolor="#BFBFBF"]
    "2" [label="anira::LibtorchProcessor" tooltip="anira::LibtorchProcessor"]
    "3" [label="anira::OnnxRuntimeProcessor" tooltip="anira::OnnxRuntimeProcessor"]
    "4" [label="anira::TFLiteProcessor" tooltip="anira::TFLiteProcessor"]
    "2" -> "1" [dir=forward tooltip="public-inheritance"]
    "3" -> "1" [dir=forward tooltip="public-inheritance"]
    "4" -> "1" [dir=forward tooltip="public-inheritance"]
}

Collaboration diagram for anira::BackendBase:

digraph {
    graph [bgcolor="#00000000"]
    node [shape=rectangle style=filled fillcolor="#FFFFFF" font=Helvetica padding=2]
    edge [color="#1414CE"]
    "1" [label="anira::BackendBase" tooltip="anira::BackendBase" fillcolor="#BFBFBF"]
    "2" [label="anira::InferenceConfig" tooltip="anira::InferenceConfig"]
    "4" [label="anira::ModelData" tooltip="anira::ModelData"]
    "3" [label="anira::ProcessingSpec" tooltip="anira::ProcessingSpec"]
    "5" [label="anira::TensorShape" tooltip="anira::TensorShape"]
    "1" -> "2" [dir=forward tooltip="usage"]
    "2" -> "3" [dir=forward tooltip="usage"]
    "2" -> "4" [dir=forward tooltip="usage"]
    "2" -> "5" [dir=forward tooltip="usage"]
}

Abstract base class for all neural network inference backends.

The BackendBase class defines the common interface and provides basic functionality for all inference backend implementations. It serves as the foundation for specific backend implementations such as LibTorch, ONNX Runtime, and TensorFlow Lite processors.

Subclassed by anira::LibtorchProcessor, anira::OnnxRuntimeProcessor, anira::TFLiteProcessor
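
As an illustration, a custom backend derives from BackendBase and overrides the virtual interface. The following sketch is hypothetical: the class name MyBackendProcessor and the umbrella header path are assumptions, not part of anira.

    #include <anira/anira.h>  // assumed umbrella header
    #include <memory>
    #include <vector>

    // Hypothetical custom backend; only the BackendBase interface shown in
    // this documentation is taken as given.
    class MyBackendProcessor : public anira::BackendBase {
    public:
        MyBackendProcessor(anira::InferenceConfig& inference_config)
            : anira::BackendBase(inference_config) {}

        void prepare() override {
            // Load the model, allocate tensors, run warm-up inferences.
        }

        void process(std::vector<anira::BufferF>& input,
                     std::vector<anira::BufferF>& output,
                     std::shared_ptr<anira::SessionElement> session) override {
            // Run inference; must be real-time safe (no locks, no allocation).
        }
    };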

Public Functions

BackendBase(InferenceConfig &inference_config)

Constructs a BackendBase with the given inference configuration.

Initializes the backend processor with a reference to the inference configuration that contains all necessary parameters for model loading and processing.

Parameters:

inference_config – Reference to the inference configuration containing model data, tensor shapes, and processing specifications

virtual ~BackendBase() = default

Virtual destructor for proper cleanup of derived classes.

virtual void prepare()

Prepares the backend for inference operations.

This method is called during initialization to set up the inference backend. The base implementation is empty, so derived classes should override it to perform backend-specific initialization such as the following (see the sketch after the notes below):

  • Loading neural network models

  • Allocating memory for tensors

  • Configuring inference sessions

  • Performing warm-up inferences

Note

This method should be called before any process() calls

Note

Thread safety: This method should only be called during initialization
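
Continuing the hypothetical MyBackendProcessor sketched above, an overridden prepare() might look roughly like this; m_model, load_model(), allocate_tensors(), and run_warmup_inference() are illustrative placeholders, not anira API:

    void MyBackendProcessor::prepare() {
        // Load the neural network model described by m_inference_config.
        m_model = load_model(m_inference_config);   // hypothetical helper

        // Pre-allocate tensor memory so process() never allocates.
        allocate_tensors(m_inference_config);       // hypothetical helper

        // Run a warm-up inference so the first real-time call has a
        // predictable execution time.
        run_warmup_inference();                     // hypothetical helper
    }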

virtual void process(std::vector<BufferF> &input, std::vector<BufferF> &output, std::shared_ptr<SessionElement> session)

Processes input buffers through the neural network model.

Performs inference on the provided input buffers and writes the results to the output buffers. The base implementation is a simple pass-through: it copies input to output when the buffer dimensions match and clears the output otherwise (see the sketch after the parameter list below).

Thread Safety:

This method is designed to be called from real-time audio threads and should be lock-free and deterministic in execution time.

Note

Derived classes should override this method to implement actual inference

Warning

The session parameter must be valid when using multi-threaded processing

Parameters:
  • input – Vector of input buffers containing audio or other data to process

  • output – Vector of output buffers to write the processed results

  • session – Shared pointer to session element for thread-safe processing context
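
The documented pass-through behavior of the base implementation can be pictured roughly as follows. This is not the actual source; the BufferF accessors get_num_channels(), get_num_samples(), clear(), and copy assignment are assumptions about the buffer interface:

    void BackendBase::process(std::vector<BufferF>& input,
                              std::vector<BufferF>& output,
                              std::shared_ptr<SessionElement> session) {
        for (size_t i = 0; i < input.size() && i < output.size(); ++i) {
            // Copy only when channel and sample counts match (assumed accessors).
            if (input[i].get_num_channels() == output[i].get_num_channels() &&
                input[i].get_num_samples()  == output[i].get_num_samples()) {
                output[i] = input[i];   // pass-through copy
            } else {
                output[i].clear();      // dimension mismatch: clear the output
            }
        }
    }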

Public Members

InferenceConfig &m_inference_config

Reference to the inference configuration containing model and processing parameters.