Struct anira::TFLiteProcessor::Instance

struct Instance

Collaboration diagram for anira::TFLiteProcessor::Instance:

digraph {
    graph [bgcolor="#00000000"]
    node [shape=rectangle style=filled fillcolor="#FFFFFF" font=Helvetica padding=2]
    edge [color="#1414CE"]
    "6" [label="anira::MemoryBlock< float >" tooltip="anira::MemoryBlock< float >"]
    "2" [label="anira::InferenceConfig" tooltip="anira::InferenceConfig"]
    "4" [label="anira::ModelData" tooltip="anira::ModelData"]
    "3" [label="anira::ProcessingSpec" tooltip="anira::ProcessingSpec"]
    "1" [label="anira::TFLiteProcessor::Instance" tooltip="anira::TFLiteProcessor::Instance" fillcolor="#BFBFBF"]
    "5" [label="anira::TensorShape" tooltip="anira::TensorShape"]
    "2" -> "3" [dir=forward tooltip="usage"]
    "2" -> "4" [dir=forward tooltip="usage"]
    "2" -> "5" [dir=forward tooltip="usage"]
    "1" -> "2" [dir=forward tooltip="usage"]
    "1" -> "6" [dir=forward tooltip="usage"]
}

Internal processing instance for thread-safe TensorFlow Lite operations.

Each Instance represents an independent TensorFlow Lite processing context with its own model, interpreter, tensors, and memory allocation. This design enables parallel processing without shared state or synchronization overhead.
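A minimal lifecycle sketch, assuming a populated InferenceConfig is available; the header path, the make_config() helper, and the buffer setup are placeholders for illustration, not the library's prescribed usage:

    #include <anira/anira.h> // header path assumed
    #include <vector>

    // Hypothetical setup: a fully populated configuration is assumed here.
    anira::InferenceConfig inference_config = make_config(); // placeholder helper

    // Each instance owns its model, interpreter, and tensors independently.
    anira::TFLiteProcessor::Instance instance(inference_config);
    instance.prepare(); // load model, create interpreter, allocate tensors

    std::vector<anira::BufferF> input;  // to be filled with input samples
    std::vector<anira::BufferF> output; // to be filled with results
    instance.process(input, output, nullptr); // session is unused by the instance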

See also

TFLiteProcessor

Thread Safety:

Each instance is used by only one thread at a time, eliminating the need for locks during inference operations. The atomic processing flag ensures safe instance allocation across threads.
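As a sketch of this allocation scheme, a thread can claim an idle instance by atomically flipping its m_processing flag; the pool loop below is illustrative, not the library's actual scheduler:

    #include <atomic>
    #include <memory>
    #include <vector>

    // Illustrative pool: the first compare-exchange that succeeds grants the
    // calling thread exclusive use of that instance until the flag is cleared.
    template <typename InstanceT>
    InstanceT* try_acquire(std::vector<std::unique_ptr<InstanceT>>& instances) {
        for (auto& instance : instances) {
            bool expected = false;
            if (instance->m_processing.compare_exchange_strong(expected, true)) {
                return instance.get(); // claimed: no lock needed during inference
            }
        }
        return nullptr; // all instances are busy
    }

    // After processing, the owning thread releases the instance:
    // instance->m_processing.store(false);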

Public Functions

Instance(InferenceConfig &inference_config)

Constructs a TensorFlow Lite processing instance.

Parameters:

inference_config – Reference to the inference configuration

~Instance()

Destructor that cleans up TensorFlow Lite resources for this instance.

void prepare()

Prepares this instance for inference operations.

Loads the TensorFlow Lite model, creates the interpreter, allocates tensors, and performs initialization.
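These steps map onto the TensorFlow Lite C API roughly as follows; error handling, the real model path, and anira-specific details are omitted, so treat this as a sketch of the pattern rather than the actual implementation:

    #include <tensorflow/lite/c/c_api.h>

    // Sketch of a typical TFLite C API preparation sequence.
    // "model.tflite" and the thread count are placeholders.
    TfLiteModel* model = TfLiteModelCreateFromFile("model.tflite");
    TfLiteInterpreterOptions* options = TfLiteInterpreterOptionsCreate();
    TfLiteInterpreterOptionsSetNumThreads(options, 1);

    TfLiteInterpreter* interpreter = TfLiteInterpreterCreate(model, options);
    TfLiteInterpreterAllocateTensors(interpreter); // size all tensors up front

    // Cache tensor handles; note that output tensors are const in the C API,
    // matching the m_inputs / m_outputs member types below.
    TfLiteTensor* input = TfLiteInterpreterGetInputTensor(interpreter, 0);
    const TfLiteTensor* output = TfLiteInterpreterGetOutputTensor(interpreter, 0);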

void process(std::vector<BufferF> &input, std::vector<BufferF> &output, std::shared_ptr<SessionElement> session)

Processes input through this instance’s TensorFlow Lite interpreter.

Parameters:
  • input – Input buffers to process

  • output – Output buffers to fill with results

  • session – Session element for context (unused by this instance)
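One inference pass with the TFLite C API follows the copy-in, invoke, copy-out pattern sketched below; the free function and single-tensor layout are assumptions for illustration:

    #include <tensorflow/lite/c/c_api.h>
    #include <cstddef>

    // Hypothetical single-tensor inference pass; real instances iterate over
    // all cached input and output tensors.
    void run_inference(TfLiteInterpreter* interpreter,
                       const float* input_data, std::size_t input_count,
                       float* output_data, std::size_t output_count) {
        TfLiteTensor* in = TfLiteInterpreterGetInputTensor(interpreter, 0);
        TfLiteTensorCopyFromBuffer(in, input_data, input_count * sizeof(float));

        TfLiteInterpreterInvoke(interpreter); // run the model

        const TfLiteTensor* out = TfLiteInterpreterGetOutputTensor(interpreter, 0);
        TfLiteTensorCopyToBuffer(out, output_data, output_count * sizeof(float));
    }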

Public Members

TfLiteModel *m_model

TensorFlow Lite model loaded from file.

TfLiteInterpreterOptions *m_options

Interpreter configuration options.

TfLiteInterpreter *m_interpreter

TensorFlow Lite interpreter instance.

std::vector<MemoryBlock<float>> m_input_data

Pre-allocated input data buffers.

std::vector<TfLiteTensor*> m_inputs

TensorFlow Lite input tensors.

std::vector<const TfLiteTensor*> m_outputs

TensorFlow Lite output tensors.

InferenceConfig &m_inference_config

Reference to the inference configuration.

std::atomic<bool> m_processing = {false}

Flag indicating whether this instance is currently processing.
