Struct anira::ProcessingSpec

struct ProcessingSpec

Specification for preprocessing and postprocessing parameters.

The ProcessingSpec struct defines the processing pipeline configuration for transforming data between the host application and the neural network inference engine.

Streamable vs Non-Streamable Tensors (see the sketch after this list):

  • Streamable tensors: Time-varying data (e.g., audio) that flows continuously

    • Have non-zero preprocess_input_size and postprocess_output_size

    • Managed through ring buffers for real-time processing

  • Non-streamable tensors: Static parameters or control values

    • Have zero preprocess_input_size or postprocess_output_size

    • Stored in thread-safe internal storage
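
For illustration, a minimal sketch of a specification with one streamable audio input and one non-streamable control input, assuming the umbrella header anira/anira.h and using the channel/size constructor documented below (all values are illustrative):

#include <anira/anira.h>

// One streamable stereo audio tensor (2048 samples per inference) and one
// non-streamable single-value control tensor (size 0 marks it as non-streamable),
// producing a single streamable stereo output tensor.
anira::ProcessingSpec spec(
    {2, 1},     // preprocess_input_channels
    {2},        // preprocess_output_channels
    {2048, 0},  // preprocess_input_size: 2048 = streamable, 0 = non-streamable
    {2048}      // postprocess_output_size
);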

Public Functions

ProcessingSpec() = default

Default constructor creating an empty processing specification.

inline ProcessingSpec(std::vector<size_t> preprocess_input_channels, std::vector<size_t> preprocess_output_channels, std::vector<size_t> preprocess_input_size, std::vector<size_t> postprocess_output_size, std::vector<size_t> internal_model_latency)

Constructs a complete ProcessingSpec with all parameters.

Parameters:
  • preprocess_input_channels – Number of input channels for each input tensor

  • preprocess_output_channels – Number of output channels for each output tensor

  • preprocess_input_size – Number of samples required for preprocessing each input tensor (0 = non-streamable)

  • postprocess_output_size – Number of samples after postprocessing for each output tensor (0 = non-streamable)

  • internal_model_latency – Internal model latency in samples for each output tensor
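
For example, a sketch of this constructor for a stereo-in, stereo-out model with an internal model latency of 512 samples (all values are illustrative; assuming the include shown above):

anira::ProcessingSpec spec(
    {2},     // preprocess_input_channels
    {2},     // preprocess_output_channels
    {2048},  // preprocess_input_size (samples per input tensor)
    {2048},  // postprocess_output_size (samples per output tensor)
    {512}    // internal_model_latency (samples per output tensor)
);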

inline ProcessingSpec(std::vector<size_t> preprocess_input_channels, std::vector<size_t> preprocess_output_channels)

Constructs a minimal ProcessingSpec with only channel information.

Creates a processing specification with only input and output channel counts. Other parameters are left empty and will be computed automatically by InferenceConfig.

Parameters:
  • preprocess_input_channels – Number of input channels for each input tensor

  • preprocess_output_channels – Number of output channels for each output tensor
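
For example, a minimal sketch for a mono-in, mono-out model; all remaining parameters are derived automatically by InferenceConfig (assuming the include shown above):

anira::ProcessingSpec spec(
    {1},  // preprocess_input_channels
    {1}   // preprocess_output_channels
);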

inline ProcessingSpec(std::vector<size_t> preprocess_input_channels, std::vector<size_t> preprocess_output_channels, std::vector<size_t> preprocess_input_size, std::vector<size_t> postprocess_output_size)

Constructs a ProcessingSpec with channel and size information.

Creates a processing specification with input/output channels and buffer sizes. Internal model latency defaults to zero for all tensors.

Parameters:
  • preprocess_input_channels – Number of input channels for each input tensor

  • preprocess_output_channels – Number of output channels for each output tensor

  • preprocess_input_size – Number of samples required for preprocessing each input tensor (0 = non-streamable)

  • postprocess_output_size – Number of samples after postprocessing for each output tensor (0 = non-streamable)
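
For example, a sketch that is equivalent to calling the full constructor with a latency of zero (all values are illustrative; assuming the include shown above):

anira::ProcessingSpec spec(
    {2},     // preprocess_input_channels
    {2},     // preprocess_output_channels
    {2048},  // preprocess_input_size
    {2048}   // postprocess_output_size
);
// Equivalent to anira::ProcessingSpec({2}, {2}, {2048}, {2048}, {0}).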

inline bool operator==(const ProcessingSpec &other) const

Equality comparison operator.

Parameters:

other – The ProcessingSpec instance to compare with

Returns:

true if all members are equal, false otherwise

inline bool operator!=(const ProcessingSpec &other) const

Inequality comparison operator.

Parameters:

other – The ProcessingSpec instance to compare with

Returns:

true if any members are not equal, false otherwise
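
For example, a brief sketch of how the comparison operators can be used to detect a configuration change (all values are illustrative; assuming the include shown above):

anira::ProcessingSpec old_spec({2}, {2}, {2048}, {2048});
anira::ProcessingSpec new_spec({2}, {2}, {4096}, {4096});

if (old_spec != new_spec) {
    // The specification changed, e.g. buffers need to be reallocated.
}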

Public Members

std::vector<size_t> m_preprocess_input_channels

Number of input channels for each input tensor.

std::vector<size_t> m_postprocess_output_channels

Number of output channels for each output tensor.

std::vector<size_t> m_preprocess_input_size

Number of samples required for preprocessing each input tensor (0 = non-streamable).

std::vector<size_t> m_postprocess_output_size

Number of samples after postprocessing for each output tensor (0 = non-streamable).

std::vector<size_t> m_internal_model_latency

Internal latency in samples for each output tensor.

std::vector<size_t> m_tensor_input_size

Total size (elements) of each input tensor (computed from shape).

std::vector<size_t> m_tensor_output_size

Total size (elements) of each output tensor (computed from shape).