Struct anira::TFLiteProcessor::Instance¶
-
struct Instance¶
Internal processing instance for thread-safe TensorFlow Lite operations.
Each Instance represents an independent TensorFlow Lite processing context with its own model, interpreter, tensors, and memory allocation. This design enables parallel processing without shared state or synchronization overhead.
See also
- Thread Safety: Each instance is used by only one thread at a time, eliminating the need for locks during inference operations. The atomic processing flag ensures safe instance allocation across threads.
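The allocation pattern this relies on can be sketched in a few lines of C++. The sketch below is illustrative only: the pool container and the helper function name are hypothetical, and only the m_processing flag corresponds to a member of this struct.

#include <atomic>
#include <memory>
#include <vector>

// Hypothetical helper: claim the first idle instance from a pool.
// Only m_processing is taken from anira::TFLiteProcessor::Instance;
// everything else here is an assumption for illustration.
template <typename InstanceT>
InstanceT* try_claim_idle_instance(std::vector<std::unique_ptr<InstanceT>>& pool) {
    for (auto& instance : pool) {
        bool expected = false;
        // Atomic compare-exchange: only one thread can flip the flag from
        // false to true, so the winner owns the instance exclusively.
        if (instance->m_processing.compare_exchange_strong(expected, true)) {
            return instance.get();
        }
    }
    return nullptr; // every instance is currently processing
}

The claiming thread runs its inference call and then stores false back into m_processing, returning the instance to the pool without ever taking a lock.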
Public Functions
-
Instance(InferenceConfig &inference_config)¶
Constructs a TensorFlow Lite processing instance.
- Parameters:
inference_config – Reference to inference configuration
-
~Instance()¶
Destructor that cleans up TensorFlow Lite resources for this instance.
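The cleanup in question maps onto the standard TensorFlow Lite C API deallocation calls. The following is a hedged sketch of such a teardown, written as a free function rather than the library's actual destructor body; the null checks and call order are assumptions.

#include <tensorflow/lite/c/c_api.h>

// Sketch only: plausible teardown of the C-API handles held by an Instance.
// TfLiteInterpreterDelete, TfLiteInterpreterOptionsDelete and TfLiteModelDelete
// are the regular TensorFlow Lite C API destruction functions.
void destroy_instance_resources(TfLiteInterpreter* interpreter,
                                TfLiteInterpreterOptions* options,
                                TfLiteModel* model) {
    if (interpreter != nullptr) TfLiteInterpreterDelete(interpreter);
    if (options != nullptr) TfLiteInterpreterOptionsDelete(options);
    if (model != nullptr) TfLiteModelDelete(model);
}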
-
void prepare()¶
Prepares this instance for inference operations.
Loads the TensorFlow Lite model, creates the interpreter, allocates tensors, and performs initialization.
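For orientation, the steps named above correspond to a standard TensorFlow Lite C API sequence. The sketch below is an assumption of how such a preparation step can look, not the library's actual implementation; the model path argument stands in for whatever the InferenceConfig provides, and error handling is omitted.

#include <tensorflow/lite/c/c_api.h>

// Hedged sketch of a typical prepare sequence with the TensorFlow Lite C API.
// "model_path" is a hypothetical stand-in for the path held by InferenceConfig.
void prepare_sketch(const char* model_path,
                    TfLiteModel*& model,
                    TfLiteInterpreterOptions*& options,
                    TfLiteInterpreter*& interpreter) {
    model = TfLiteModelCreateFromFile(model_path);          // load the .tflite model
    options = TfLiteInterpreterOptionsCreate();             // create interpreter options
    TfLiteInterpreterOptionsSetNumThreads(options, 1);      // single thread per instance (assumption)
    interpreter = TfLiteInterpreterCreate(model, options);  // build the interpreter
    TfLiteInterpreterAllocateTensors(interpreter);          // allocate tensor memory
}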
-
void process(std::vector<MemoryBlock<float>> &input, std::vector<MemoryBlock<float>> &output, std::shared_ptr<SessionElement> session)¶
Processes input through this instance’s TensorFlow Lite interpreter.
- Parameters:
input – Input buffers to process
output – Output buffers to fill with results
session – Session element for context (unused by this instance)
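To make the data flow concrete, one inference pass with the TensorFlow Lite C API follows a copy-in, invoke, copy-out pattern. The sketch below assumes a single input and a single output tensor and uses plain float pointers in place of the instance's pre-allocated MemoryBlock<float> buffers; only the C API calls themselves are real.

#include <tensorflow/lite/c/c_api.h>
#include <cstddef>

// Hedged sketch of a single inference pass; buffer names are placeholders.
void process_sketch(TfLiteInterpreter* interpreter,
                    const float* input, size_t num_input_samples,
                    float* output, size_t num_output_samples) {
    TfLiteTensor* in = TfLiteInterpreterGetInputTensor(interpreter, 0);
    TfLiteTensorCopyFromBuffer(in, input, num_input_samples * sizeof(float));   // copy input in

    TfLiteInterpreterInvoke(interpreter);                                       // run the model

    const TfLiteTensor* out = TfLiteInterpreterGetOutputTensor(interpreter, 0);
    TfLiteTensorCopyToBuffer(out, output, num_output_samples * sizeof(float));  // copy results out
}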
Public Members
-
TfLiteModel *m_model¶
TensorFlow Lite model loaded from file.
-
TfLiteInterpreterOptions *m_options¶
Interpreter configuration options.
-
TfLiteInterpreter *m_interpreter¶
TensorFlow Lite interpreter instance.
-
std::vector<MemoryBlock<float>> m_input_data¶
Pre-allocated input data buffers.
-
std::vector<TfLiteTensor*> m_inputs¶
TensorFlow Lite input tensors.
-
std::vector<const TfLiteTensor*> m_outputs¶
TensorFlow Lite output tensors.
-
InferenceConfig &m_inference_config¶
Reference to inference configuration.
-
std::atomic<bool> m_processing = {false}¶
Flag indicating if instance is currently processing.