Class anira::PrePostProcessor¶
-
class PrePostProcessor¶
Base class for preprocessing and postprocessing data for neural network inference.
The PrePostProcessor class handles the transformation of data between the host application and neural network inference engines. It provides default implementations for common use cases and serves as a base class for custom preprocessing implementations.
The class supports two types of tensor data:
- Streamable tensors: Time-varying signals that flow continuously through ring buffers
- Non-streamable tensors: Static parameters or control values stored in thread-safe internal storage
- Key Features:
  - Thread-safe handling of non-streamable tensor data using atomic operations
  - Helper methods for efficient buffer manipulation
  - Support for multiple input/output tensors with different characteristics
  - Real-time safe operations suitable for use on the audio thread
- Usage:
For models that operate in the time domain with simple input/output shapes, the default implementation can be used directly. For custom preprocessing requirements (frequency domain transforms, custom windowing, multi-tensor operations), inherit from this class and override the pre_process() and post_process() methods.
Warning
All methods are designed to be real-time safe and should not perform memory allocation or other blocking operations when called from the audio thread.
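When the default implementation is not enough, the usage pattern described above is to derive from the class and override the two virtual hooks. The following is a minimal sketch, assuming the umbrella header anira/anira.h; the class name and everything inside it are hypothetical, with the method bodies deferred to the pre_process() and post_process() sketches further below.

    #include <anira/anira.h>  // assumed umbrella header for the anira library
    #include <vector>

    // Hypothetical processor for a model that needs custom windowing instead
    // of the default block-wise copy. Only the two virtual hooks are overridden.
    class CustomPrePostProcessor : public anira::PrePostProcessor {
    public:
        using anira::PrePostProcessor::PrePostProcessor;  // reuse the InferenceConfig constructor

        void pre_process(std::vector<anira::RingBuffer>& input,
                         std::vector<anira::BufferF>& output,
                         anira::InferenceBackend current_inference_backend) override;

        void post_process(std::vector<anira::BufferF>& input,
                          std::vector<anira::RingBuffer>& output,
                          anira::InferenceBackend current_inference_backend) override;
    };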
Public Functions
-
PrePostProcessor() = delete¶
Default constructor is deleted to prevent uninitialized instances.
-
PrePostProcessor(InferenceConfig &inference_config)¶
Constructs a PrePostProcessor with the given inference configuration.
Initializes internal storage for non-streamable tensors based on the configuration. Streamable tensors (those with preprocess_input_size > 0 or postprocess_output_size > 0) do not require internal storage as they use ring buffers directly.
- Parameters:
inference_config – Reference to the inference configuration containing tensor specifications
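For simple time-domain models the class can be instantiated directly. A minimal sketch, assuming an InferenceConfig that has been set up elsewhere and the umbrella header anira/anira.h:

    #include <anira/anira.h>  // assumed umbrella header for the anira library

    // The default constructor is deleted, so a configured InferenceConfig is
    // required. How the configuration itself is built is out of scope here.
    void create_default_processor(anira::InferenceConfig& inference_config) {
        anira::PrePostProcessor pp_processor(inference_config);
        // pp_processor would typically be handed to the library's inference
        // handling code together with inference_config.
        (void) pp_processor;
    }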
-
virtual ~PrePostProcessor() = default¶
Default destructor.
-
virtual void pre_process(std::vector<RingBuffer> &input, std::vector<BufferF> &output, InferenceBackend current_inference_backend)¶
Transforms input data from ring buffers to inference tensors.
This method is called before neural network inference to prepare input data. For streamable tensors, it extracts samples from ring buffers. For non-streamable tensors, it retrieves values from internal storage.
Note
This method is called from the audio thread and must be real-time safe
- Parameters:
input – Vector of input ring buffers containing data from the host application
output – Vector of output tensors that will be fed to the inference engine
current_inference_backend – Currently active inference backend (for backend-specific processing)
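As an illustration of the streamable path, here is a sketch of a pre_process() override that builds an overlapping analysis window with the four-argument pop_samples_from_buffer() helper documented below. The class is the hypothetical CustomPrePostProcessor from the earlier sketch, and all sizes are assumptions that would have to match the tensor shapes declared in the InferenceConfig.

    #include <anira/anira.h>  // assumed umbrella header

    // Hypothetical sizes: the model consumes 2048-sample windows, of which
    // 1536 samples are context retained from the previous inference step.
    void CustomPrePostProcessor::pre_process(std::vector<anira::RingBuffer>& input,
                                             std::vector<anira::BufferF>& output,
                                             anira::InferenceBackend current_inference_backend) {
        (void) current_inference_backend;         // no backend-specific handling in this sketch
        constexpr size_t num_new_samples = 512;   // fresh samples per inference step (assumed)
        constexpr size_t num_old_samples = 1536;  // samples reused from the previous step (assumed)

        // Tensor 0 is assumed to be the streamable audio input.
        pop_samples_from_buffer(input[0], output[0], num_new_samples, num_old_samples);
    }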
-
virtual void post_process(std::vector<BufferF> &input, std::vector<RingBuffer> &output, InferenceBackend current_inference_backend)¶
Transforms inference results to output ring buffers.
This method is called after neural network inference to process the results. For streamable tensors, it pushes samples to ring buffers. For non-streamable tensors, it stores values in internal storage.
Note
This method is called from the audio thread and must be real-time safe
- Parameters:
input – Vector of input tensors containing inference results
output – Vector of output ring buffers that will be read by the host application
current_inference_backend – Currently active inference backend (for backend-specific processing)
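The matching output path could look like the sketch below: the streamable result is pushed back to the host and one non-streamable scalar is stored via set_output(). The two-tensor layout, the sizes, and the get_sample() accessor on anira::BufferF are all assumptions.

    #include <anira/anira.h>  // assumed umbrella header

    void CustomPrePostProcessor::post_process(std::vector<anira::BufferF>& input,
                                              std::vector<anira::RingBuffer>& output,
                                              anira::InferenceBackend current_inference_backend) {
        (void) current_inference_backend;
        constexpr size_t num_output_samples = 512;  // must match the model's output shape (assumed)

        // Streamable result: hand the processed audio back to the host.
        push_samples_to_buffer(input[0], output[0], num_output_samples);

        // Non-streamable result: tensor 1 is assumed to hold a single scalar
        // (e.g. a confidence value); store it so the host can read it later with
        // get_output(1, 0). get_sample(channel, index) on BufferF is assumed.
        set_output(input[1].get_sample(0, 0), 1, 0);
    }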
-
void set_input(const float &input, size_t i, size_t j)¶
Sets a non-streamable input value in thread-safe storage.
Used to store control parameters or static values that don’t change sample-by-sample. The data is stored using atomic operations for thread safety.
Warning
Only use for tensors where preprocess_input_size == 0
- Parameters:
input – The value to store
i – Tensor index (which input tensor)
j – Sample index within the tensor
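For instance, a control value coming from a GUI or parameter thread can be forwarded to a non-streamable input tensor as sketched here; the tensor index and element index are assumptions about the model's input layout.

    #include <anira/anira.h>  // assumed umbrella header

    // Forward a control-rate parameter to a non-streamable input tensor.
    // Safe to call from outside the audio thread: the value is stored atomically.
    void on_parameter_changed(anira::PrePostProcessor& pp_processor, float new_value) {
        pp_processor.set_input(new_value, 1, 0);  // assumed: tensor 1 is non-streamable, element 0
    }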
-
void set_output(const float &output, size_t i, size_t j)¶
Sets a non-streamable output value in thread-safe storage.
Used to store control parameters or static values from inference results. The data is stored using atomic operations for thread safety.
Warning
Only use for tensors where postprocess_output_size == 0
- Parameters:
output – The value to store
i – Tensor index (which output tensor)
j – Sample index within the tensor
-
float get_input(size_t i, size_t j)¶
Retrieves a non-streamable input value from thread-safe storage.
Used to read control parameters or static values in a thread-safe manner.
Warning
Only use for tensors where preprocess_input_size == 0
- Parameters:
i – Tensor index (which input tensor)
j – Sample index within the tensor
- Returns:
The stored input value
-
float get_output(size_t i, size_t j)¶
Retrieves a non-streamable output value from thread-safe storage.
Used to read inference results or control parameters in a thread-safe manner.
Warning
Only use for tensors where postprocess_output_size == 0
- Parameters:
i – Tensor index (which output tensor)
j – Sample index within the tensor
- Returns:
The stored output value
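A typical host-side use is reading a non-streamable result back out, e.g. for metering or display; the tensor and element indices are again assumed.

    #include <anira/anira.h>  // assumed umbrella header

    // Read the latest non-streamable inference result; safe from any thread.
    float read_latest_result(anira::PrePostProcessor& pp_processor) {
        return pp_processor.get_output(1, 0);  // assumed: tensor 1 holds a single scalar result
    }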
-
void pop_samples_from_buffer(RingBuffer &input, BufferF &output, size_t num_samples)¶
Extracts samples from a ring buffer to an output tensor.
Pops the specified number of samples from the ring buffer and writes them to the output tensor. For multi-channel inputs, samples are written channel by channel into the output buffer (all channel 0 samples first, then all channel 1 samples, and so on).
Note
Real-time safe operation
- Parameters:
input – Source ring buffer
output – Destination tensor buffer
num_samples – Number of samples to extract per channel
-
void pop_samples_from_buffer(RingBuffer &input, BufferF &output, size_t num_new_samples, size_t num_old_samples)¶
Extracts samples with overlapping windows from a ring buffer.
Combines new samples with previously extracted samples to create overlapping windows. This is useful for models that require context from previous inference steps.
Note
Real-time safe operation
- Parameters:
input – Source ring buffer
output – Destination tensor buffer
num_new_samples – Number of new samples to extract per channel
num_old_samples – Number of samples to retain from previous extraction
-
void pop_samples_from_buffer(RingBuffer &input, BufferF &output, size_t num_new_samples, size_t num_old_samples, size_t offset)¶
Extracts samples with overlapping windows and offset.
Advanced version that allows specifying an offset in the output buffer where the extracted samples should be written. Useful for batched processing or complex tensor layouts.
Note
Real-time safe operation
- Parameters:
input – Source ring buffer
output – Destination tensor buffer
num_new_samples – Number of new samples to extract per channel
num_old_samples – Number of samples to retain from previous extraction
offset – Starting position in the output buffer for writing samples
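One possible use of the offset, sketched under assumptions about the batch count, window size, hop size, and single-tensor layout, is stacking several overlapping windows into one batched input tensor. The helper is public, so the sketch uses a free function; the function name is hypothetical.

    #include <anira/anira.h>  // assumed umbrella header

    // Fill a batched input tensor with overlapping windows using the offset
    // overload. All sizes are assumptions that must match the InferenceConfig.
    void fill_batched_tensor(anira::PrePostProcessor& pp,
                             anira::RingBuffer& ring_in,
                             anira::BufferF& tensor_out) {
        constexpr size_t num_batches = 4;        // assumed batch count
        constexpr size_t window_size = 2048;     // assumed samples per window
        constexpr size_t num_new_samples = 512;  // assumed hop size
        constexpr size_t num_old_samples = window_size - num_new_samples;

        for (size_t batch = 0; batch < num_batches; ++batch) {
            // Each window is written window_size samples further into the tensor.
            pp.pop_samples_from_buffer(ring_in, tensor_out,
                                       num_new_samples, num_old_samples,
                                       batch * window_size);
        }
    }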
-
void push_samples_to_buffer(const BufferF &input, RingBuffer &output, size_t num_samples)¶
Writes samples from a tensor to a ring buffer.
Pushes samples from the input tensor to the ring buffer. For multi-channel outputs, samples are assumed to be laid out channel by channel in the input buffer (all channel 0 samples first, then all channel 1 samples, and so on).
Note
Real-time safe operation
- Parameters:
input – Source tensor buffer
output – Destination ring buffer
num_samples – Number of samples to write per channel
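Putting the two simple helpers together, a processor equivalent in spirit to the default streamable behaviour for a single time-domain tensor could be sketched as follows; the class name and the block size of 512 samples are hypothetical.

    #include <anira/anira.h>  // assumed umbrella header
    #include <vector>

    // Minimal processor: pop one full block per inference step and push the
    // result straight back after inference.
    class PassThroughPrePostProcessor : public anira::PrePostProcessor {
    public:
        using anira::PrePostProcessor::PrePostProcessor;

        void pre_process(std::vector<anira::RingBuffer>& input,
                         std::vector<anira::BufferF>& output,
                         anira::InferenceBackend) override {
            pop_samples_from_buffer(input[0], output[0], 512);   // before inference
        }

        void post_process(std::vector<anira::BufferF>& input,
                          std::vector<anira::RingBuffer>& output,
                          anira::InferenceBackend) override {
            push_samples_to_buffer(input[0], output[0], 512);    // after inference
        }
    };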