Class anira::Context¶
-
class Context¶
Singleton context class managing global inference resources and session coordination.
The Context class serves as a singleton manager for all neural network inference resources, including thread pools, backend processors, and session management. It provides centralized coordination for multiple inference sessions while maintaining efficient resource sharing and thread safety across the entire inference system.
Key responsibilities:
Managing singleton instance lifecycle and configuration
Coordinating inference thread pool with configurable size
Managing backend processor instances (LibTorch, ONNX, TensorFlow Lite)
Session creation, management, and cleanup
Thread-safe concurrent queue management for inference requests
Resource pooling and efficient allocation/deallocation
The Context uses a singleton pattern to ensure:
Global resource coordination across multiple inference instances
Efficient sharing of expensive resources (thread pools)
Centralized configuration and lifecycle management
Thread-safe access to shared components
Note
This class is thread-safe and manages its own lifecycle. All access should be through the static interface methods rather than direct instantiation.
Public Functions
-
Context(const ContextConfig &context_config)¶
Constructor that initializes the context with specified configuration.
Creates a new context instance with the provided configuration settings. This constructor should not be called directly. Use get_instance() to obtain a context instance.
- Parameters:
context_config – Configuration settings for thread pool size, backend preferences, etc.
-
~Context()¶
Destructor that cleans up all context resources.
Properly shuts down the thread pool, releases all backend processors, and cleans up any remaining sessions or inference data.
Prepares a session for processing with new audio configuration.
Configures the specified session with new audio host settings and optional custom latency values. This method handles buffer allocation, latency calculation, and session state updates.
- Parameters:
session – Shared pointer to the session to prepare
new_config – New host configuration with audio settings
custom_latency – Optional vector of custom latency values for each tensor
Notifies the context that new data has been submitted for a session.
Signals to the inference system that new audio data is available for processing by the specified session. This triggers the inference pipeline to begin processing the submitted data.
- Parameters:
session – Shared pointer to the session that has new data available
Requests new data processing for a session with specified buffer duration.
Requests that the inference system process data for the specified session with the given buffer duration in seconds. This is used for scheduling and managing inference operations.
- Parameters:
session – Shared pointer to the session requesting data processing
buffer_size_in_sec – Duration of the buffer to process in seconds
Resets a session to its initial state.
Clears all internal buffers, resets the inference pipeline, and prepares the session for a new processing run. This method is typically used to reinitialize a session without releasing it completely.
- Parameters:
session – Shared pointer to the session to reset
Public Static Functions
-
static std::shared_ptr<Context> get_instance(const ContextConfig &context_config)¶
Gets or creates the singleton context instance.
Returns the existing context instance or creates a new one with the specified configuration if none exists. This is the primary method for accessing the global inference context.
Note
If a context already exists, the provided configuration is ignored. The configuration is only used when creating a new instance.
- Parameters:
context_config – Configuration settings for the context (used only on first creation)
- Returns:
Shared pointer to the singleton context instance
-
static std::shared_ptr<SessionElement> create_session(PrePostProcessor &pp_processor, InferenceConfig &inference_config, BackendBase *custom_processor)¶
Creates a new inference session with specified components.
Creates and registers a new inference session with the provided preprocessing/postprocessing pipeline, inference configuration, and optional custom backend. The session is automatically assigned a unique ID and integrated into the global resource management system.
- Parameters:
pp_processor – Reference to the preprocessing/postprocessing pipeline
inference_config – Reference to the inference configuration
custom_processor – Pointer to custom backend processor (nullptr for default backends)
- Returns:
Shared pointer to the newly created session
Releases an inference session and its resources.
Properly shuts down and releases the specified session, including cleanup of associated backend processors, buffers, and other resources.
- Parameters:
session – Shared pointer to the session to release
-
static void release_instance()¶
Releases the singleton context instance.
Shuts down and releases the global context instance, including all sessions, thread pools, and backend processors. This should be called during application shutdown to ensure proper cleanup.
-
static void release_thread_pool()¶
Releases the inference thread pool.
Shuts down all inference threads and releases thread pool resources. This is typically called as part of context cleanup or reconfiguration.
-
static int get_num_sessions()¶
Gets the number of active inference sessions.
Returns the current count of active inference sessions managed by the context. This is useful for monitoring and debugging purposes.
- Returns:
Number of currently active sessions
-
static std::vector<std::shared_ptr<SessionElement>> &get_sessions()¶
Gets a reference to all active sessions.
Returns a reference to the vector containing all currently active inference sessions. This method is primarily used for internal management and debugging.
Note
This method provides direct access to internal data structures and should be used carefully to avoid disrupting session management.
- Returns:
Reference to the vector of active session shared pointers