Class anira::TFLiteProcessor

class TFLiteProcessor : public anira::BackendBase

Inheritance diagram for anira::TFLiteProcessor:

anira::TFLiteProcessor derives publicly from anira::BackendBase.

Collaboration diagram for anira::TFLiteProcessor:

  • anira::TFLiteProcessor derives publicly from anira::BackendBase and uses anira::TFLiteProcessor::Instance.

  • anira::BackendBase uses anira::InferenceConfig, which in turn uses anira::ProcessingSpec, anira::ModelData, and anira::TensorShape.

  • anira::TFLiteProcessor::Instance uses anira::InferenceConfig and anira::MemoryBlock<float>.

TensorFlow Lite-based neural network inference processor.

The TFLiteProcessor class provides neural network inference capabilities using Google’s TensorFlow Lite C API. It offers lightweight, efficient inference optimized for mobile and embedded devices, with support for parallel processing across multiple model instances.

Warning

This class is only available when the library is compiled with USE_TFLITE defined.
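
A minimal sketch of guarding TFLite-specific code at compile time; the umbrella include path below is an assumption, not taken from this page:

    // Hypothetical guard: TFLiteProcessor only exists in builds that
    // define USE_TFLITE. The header path is an assumption.
    #ifdef USE_TFLITE
    #include <anira/anira.h>

    using Processor = anira::TFLiteProcessor;
    #endif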

Public Functions

TFLiteProcessor(InferenceConfig &inference_config)

Constructs a TensorFlow Lite processor with the given inference configuration.

Initializes the TensorFlow Lite processor and creates the necessary number of parallel processing instances based on the configuration’s num_parallel_processors setting.

Parameters:

inference_config – Reference to inference configuration containing model path, tensor shapes, and processing parameters
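
As a hedged illustration of construction only: the InferenceConfig constructor arguments below (model data, tensor shapes, maximum inference time) follow the description above but are not a verified signature.

    #include <anira/anira.h>  // include path is an assumption

    int main()
    {
        // Hypothetical configuration: one TFLite model with a single
        // 1 x 2048 input and output tensor. Argument order and types
        // are illustrative assumptions.
        anira::InferenceConfig config(
            anira::ModelData("model.tflite", anira::InferenceBackend::TFLITE),
            anira::TensorShape({{1, 2048}}, {{1, 2048}}),
            5.0f  // assumed: maximum inference time in ms
        );

        // The processor creates as many parallel instances as the
        // configuration's num_parallel_processors setting requests.
        anira::TFLiteProcessor processor(config);
        return 0;
    }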

~TFLiteProcessor() override

Destructor that properly cleans up TensorFlow Lite resources.

Releases all TensorFlow Lite interpreters, models, and allocated memory, and safely destroys every parallel processing instance.

virtual void prepare() override

Prepares all TensorFlow Lite instances for inference operations.

Loads the TensorFlow Lite model into all parallel processing instances, allocates input/output tensors, and performs warm-up inferences if specified in the configuration.
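
A short lifecycle sketch: the call order (prepare() before any processing) is implied by the description above, while the surrounding setup function is illustrative.

    // Assumed lifecycle: prepare() must complete before the first call
    // to process(), typically from a non-real-time setup path.
    void setup(anira::TFLiteProcessor& processor)
    {
        processor.prepare();  // loads the model into every parallel
                              // instance, allocates tensors, and runs
                              // any configured warm-up inferences
    }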

virtual void process(std::vector<BufferF> &input, std::vector<BufferF> &output, std::shared_ptr<SessionElement> session) override

Processes input buffers through the TensorFlow Lite model.

Performs neural network inference using TensorFlow Lite, converting audio buffers to TensorFlow Lite tensors, executing the model, and converting results back to audio buffers.

Parameters:
  • input – Vector of input buffers containing audio samples or parameter data

  • output – Vector of output buffers to receive processed results

  • session – Shared pointer to session element providing thread-safe instance access
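
A hedged sketch of the call shape only: the helper below, and the assumption that the buffers and session are supplied by the surrounding inference pipeline, are illustrative rather than a verified usage pattern.

    #include <memory>
    #include <vector>
    #include <anira/anira.h>  // include path is an assumption

    // Hypothetical helper: the pipeline is assumed to supply the session;
    // this only shows how the documented parameters fit together.
    void run_inference(anira::TFLiteProcessor& processor,
                       std::vector<anira::BufferF>& input,
                       std::vector<anira::BufferF>& output,
                       std::shared_ptr<anira::SessionElement> session)
    {
        // One inference pass: buffers are converted to TFLite tensors,
        // the model executes, and results are written back to output.
        processor.process(input, output, session);
    }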