API Reference
Protocols
FeedbackProtocol
Protocol for submitting and querying AI feedback.
async def submit_feedback( trace_id: str, score: float, comment: str | None = None, metadata: dict[str, Any] | None = None ) -> None
Submit feedback for an AI generation.
async def get_feedback_stats( model: str | None = None, provider: str | None = None ) -> dict[str, Any]
Query aggregate feedback statistics.
FeedbackStoreProtocol
Persist and query collected feedback items.
All implementations must be async, and save() must be safe to call from request-handling hot paths (synchronous returns are acceptable on cache misses).
async def save(feedback: FeedbackItem) -> Result[str, Exception]
Persist a single feedback item.
| Parameter | Type | Description |
|---|---|---|
| `feedback` | FeedbackItem | The feedback item to store. |
| Type | Description |
|---|---|
| Result[str, Exception] | Ok(feedback.id) on success, Err(exception) on failure. |
async def find_by_session(session_id: str) -> list[FeedbackItem]
Retrieve all feedback items for a session.
| Parameter | Type | Description |
|---|---|---|
| `session_id` | str | Session identifier from feedback context. |
| Type | Description |
|---|---|
| list[FeedbackItem] | All collected items in that session, newest first. |
async def find_by_type( feedback_type: FeedbackType, *, limit: int = 100 ) -> list[FeedbackItem]
Retrieve feedback items of a given type.
| Parameter | Type | Description |
|---|---|---|
| `feedback_type` | FeedbackType | The type to filter by. |
| `limit` | int | Maximum number of results (default 100). |
| Type | Description |
|---|---|
| list[FeedbackItem] | Matching items ordered by creation time descending. |
async def aggregate( *, window_hours: int = 24 ) -> FeedbackSummary
Compute summary statistics for items in a time window.
| Parameter | Type | Description |
|---|---|---|
| `window_hours` | int | Look-back window in hours (default 24). |
| Type | Description |
|---|---|
| FeedbackSummary | Aggregated statistics for the window. |
Classes
CachedFeedbackStore
Write-through cache: cold reads from *store*, hot reads from *cache*.
Session-scoped and type-scoped result lists are cached with short TTLs. Any save call invalidates the relevant cache entries so subsequent reads always reflect the latest data.
| Parameter | Type | Description |
|---|---|---|
| `store` | FeedbackStoreProtocol | Durable backing store (e.g. DatabaseFeedbackStore). |
| `cache` | CacheBackendProtocol | Cache backend used for hot reads. |
def __init__( store: FeedbackStoreProtocol, cache: CacheBackendProtocol ) -> None
async def save(feedback: FeedbackItem) -> Result[str, FeedbackError]
Persist feedback and invalidate related cache entries.
| Parameter | Type | Description |
|---|---|---|
| `feedback` | FeedbackItem | Item to persist. |
| Type | Description |
|---|---|
| Result[str, FeedbackError] | Result from the backing store. |
async def find_by_session(session_id: str) -> list[FeedbackItem]
Return items for session_id, using cache when available.
| Parameter | Type | Description |
|---|---|---|
| `session_id` | str | Session identifier. |
| Type | Description |
|---|---|
| list[FeedbackItem] | Feedback items for the session, newest first. |
async def find_by_type( feedback_type: FeedbackType, *, limit: int = 100 ) -> list[FeedbackItem]
Return items of feedback_type, using cache when available.
| Parameter | Type | Description |
|---|---|---|
| `feedback_type` | FeedbackType | Type to filter by. |
| `limit` | int | Maximum result count. |
| Type | Description |
|---|---|
| list[FeedbackItem] | Matching feedback items, newest first. |
async def aggregate( *, window_hours: int = 24 ) -> FeedbackSummary
Delegate aggregation to the backing store — not cached.
| Parameter | Type | Description |
|---|---|---|
| `window_hours` | int | Look-back window in hours. |
| Type | Description |
|---|---|
| FeedbackSummary | Aggregated summary from the backing store. |
DatabaseFeedbackStore
Persist and query feedback items via DatabaseProviderProtocol.
The backing table (ai_feedback) is created lazily on first write.
Indexed columns: session_id, type, created_at.
| Parameter | Type | Description |
|---|---|---|
| `provider` | DatabaseProviderProtocol | DI-injected database provider. |
async def save(feedback: FeedbackItem) -> Result[str, FeedbackError]
Persist feedback and return its ID on success.
| Parameter | Type | Description |
|---|---|---|
| `feedback` | FeedbackItem | Item to store. |
| Type | Description |
|---|---|
| Result[str, FeedbackError] | ``Ok(feedback.id)`` on success, ``Err(FeedbackError)`` on failure. |
async def find_by_session(session_id: str) -> list[FeedbackItem]
Return all items collected during session_id, newest first.
| Parameter | Type | Description |
|---|---|---|
| `session_id` | str | Session identifier to filter by. |
| Type | Description |
|---|---|
| list[FeedbackItem] | Matching feedback items ordered by creation time descending. |
async def find_by_type( feedback_type: FeedbackType, *, limit: int = 100 ) -> list[FeedbackItem]
Return feedback items of feedback_type, newest first.
| Parameter | Type | Description |
|---|---|---|
| `feedback_type` | FeedbackType | Type to filter by. |
| `limit` | int | Maximum number of results (default: 100). |
| Type | Description |
|---|---|
| list[FeedbackItem] | Matching feedback items ordered by creation time descending. |
async def aggregate( *, window_hours: int = 24 ) -> FeedbackSummary
Compute summary statistics for items in the last window_hours hours.
| Parameter | Type | Description |
|---|---|---|
| `window_hours` | int | Look-back window in hours (default: 24). |
| Type | Description |
|---|---|
| FeedbackSummary | Aggregated FeedbackSummary for the window. |
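The lazy schema creation described above can be sketched against `sqlite3` as a stand-in backend. The `ai_feedback` table name and the indexed columns (`session_id`, `type`, `created_at`) come from this page; the remaining column names, the index names, and the use of sqlite are assumptions for illustration.

```python
import sqlite3

DDL = """
CREATE TABLE IF NOT EXISTS ai_feedback (
    id TEXT PRIMARY KEY,
    session_id TEXT,
    type TEXT,
    value TEXT,
    created_at TEXT
)
"""

# Indexes on the three documented columns; names are hypothetical.
INDEXES = [
    "CREATE INDEX IF NOT EXISTS idx_feedback_session ON ai_feedback (session_id)",
    "CREATE INDEX IF NOT EXISTS idx_feedback_type ON ai_feedback (type)",
    "CREATE INDEX IF NOT EXISTS idx_feedback_created ON ai_feedback (created_at)",
]


def ensure_schema(conn: sqlite3.Connection) -> None:
    conn.execute(DDL)
    for stmt in INDEXES:
        conn.execute(stmt)


def save(conn: sqlite3.Connection,
         row: tuple[str, str, str, str, str]) -> str:
    ensure_schema(conn)  # lazy: schema is created on first write
    conn.execute("INSERT INTO ai_feedback VALUES (?, ?, ?, ?, ?)", row)
    return row[0]  # feedback ID, mirroring Ok(feedback.id)
```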
FeedbackCollector
Collects and stores user feedback.
Provides methods to capture different types of feedback and integrate with ML retraining pipelines.
Example

    collector = FeedbackCollector()

    # Collect rating
    await collector.collect_rating(
        rating=5,
        context={"model": "gpt-4", "input": "Hello"}
    )

    # Collect correction
    await collector.collect_correction(
        original="incorrect output",
        corrected="correct output",
        context={"model_id": "123"}
    )

    # Get all feedback
    items = await collector.get_feedback()
def __init__(storage: FeedbackStoreProtocol | None = None)
Initialize feedback collector.
| Parameter | Type | Description |
|---|---|---|
| `storage` | FeedbackStoreProtocol | None | Optional storage backend (e.g. database, file). If None, in-memory storage is used. |
async def collect_rating( rating: float, context: dict[str, Any] | None = None, metadata: dict[str, Any] | None = None ) -> str
Collect a rating feedback.
| Parameter | Type | Description |
|---|---|---|
| `rating` | float | Numeric rating value |
| `context` | dict[str, Any] | None | Context about what was rated |
| `metadata` | dict[str, Any] | None | Additional metadata |
| Type | Description |
|---|---|
| str | Feedback ID |
async def collect_text( text: str, context: dict[str, Any] | None = None, metadata: dict[str, Any] | None = None ) -> str
Collect text feedback.
| Parameter | Type | Description |
|---|---|---|
| `text` | str | Feedback text |
| `context` | dict[str, Any] | None | Context about what feedback is for |
| `metadata` | dict[str, Any] | None | Additional metadata |
| Type | Description |
|---|---|
| str | Feedback ID |
async def collect_correction( original: str, corrected: str, context: dict[str, Any] | None = None, metadata: dict[str, Any] | None = None ) -> str
Collect a correction feedback.
| Parameter | Type | Description |
|---|---|---|
| `original` | str | Original (incorrect) output |
| `corrected` | str | Corrected output |
| `context` | dict[str, Any] | None | Context |
| `metadata` | dict[str, Any] | None | Additional metadata |
| Type | Description |
|---|---|
| str | Feedback ID |
async def collect_label( label: Any, input_data: Any, context: dict[str, Any] | None = None, metadata: dict[str, Any] | None = None ) -> str
Collect ground truth label.
Useful for supervised learning and model retraining.
| Parameter | Type | Description |
|---|---|---|
| `label` | Any | Ground truth label |
| `input_data` | Any | Input that this label corresponds to |
| `context` | dict[str, Any] | None | Context |
| `metadata` | dict[str, Any] | None | Additional metadata |
| Type | Description |
|---|---|
| str | Feedback ID |
async def get_feedback( feedback_type: FeedbackType | None = None, limit: int | None = None ) -> list[FeedbackItem]
Get feedback items.
| Parameter | Type | Description |
|---|---|---|
| `feedback_type` | FeedbackType | None | Optional filter by type |
| `limit` | int | None | Maximum number of items to return |
| Type | Description |
|---|---|
| list[FeedbackItem] | List of feedback items from memory or the storage backend. |
async def get_feedback_dict( feedback_type: FeedbackType | None = None, limit: int | None = None ) -> list[dict[str, Any]]
Get feedback as dictionaries.
| Parameter | Type | Description |
|---|---|---|
| `feedback_type` | FeedbackType | None | Optional filter by type |
| `limit` | int | None | Maximum number of items |
| Type | Description |
|---|---|
| list[dict[str, Any]] | List of feedback dictionaries |
Clear all feedback from the in-memory buffer.
FeedbackConfig
Configuration for AI feedback collection.
Loaded from the ai_feedback: key in application.yaml, with environment
variable overrides via LEX_AI_FEEDBACK__* prefix.
Check that the config is safe for the target environment.
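A configuration fragment might look like the following. Only the `ai_feedback:` root key, the `LEX_AI_FEEDBACK__*` override prefix, and the `storage` key (which appears in `FeedbackConfig(storage="database")` under FeedbackModule below) are documented; treat any other child keys as assumptions.

```yaml
# application.yaml
ai_feedback:
  storage: database
```

Under the stated prefix convention, an environment variable such as `LEX_AI_FEEDBACK__STORAGE=memory` would presumably override the `storage` key, with `__` marking nesting.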
FeedbackContext
Context manager for feedback collection during operations.
Automatically captures context and results for feedback.
Example

    collector = FeedbackCollector()
    async with FeedbackContext(collector, operation="prediction") as ctx:
        result = await model.predict(input_data)
        ctx.set_result(result)
    # Feedback context is automatically stored
def __init__( collector: FeedbackCollector, operation: str, metadata: dict[str, Any] | None = None )
Initialize feedback context.
| Parameter | Type | Description |
|---|---|---|
| `collector` | FeedbackCollector | Feedback collector |
| `operation` | str | Operation name |
| `metadata` | dict[str, Any] | None | Additional metadata |
Set input data.
| Parameter | Type | Description |
|---|---|---|
| `input_data` | Any | Input to the operation |
Set operation result.
| Parameter | Type | Description |
|---|---|---|
| `result` | Any | Result of the operation |
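The shape of this context manager can be sketched as follows. This is a simplified stand-in: the real class also submits the captured context through its collector on exit, which is omitted here, and `set_input` is assumed from the "Set input data" method described above.

```python
from typing import Any


class FeedbackContextSketch:
    """Toy async context manager mirroring the FeedbackContext pattern."""

    def __init__(self, operation: str) -> None:
        self.operation = operation
        self.input_data: Any = None
        self.result: Any = None

    async def __aenter__(self) -> "FeedbackContextSketch":
        return self

    async def __aexit__(self, exc_type, exc, tb) -> None:
        # The real class stores the captured context via its collector here.
        pass

    def set_input(self, input_data: Any) -> None:
        self.input_data = input_data

    def set_result(self, result: Any) -> None:
        self.result = result
```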
FeedbackItem
A single feedback item.
Attributes:
- feedback_type: Type of feedback (rating, text, correction, or label).
- value: The feedback value (e.g. rating score, text comment).
- context: Context about what was being evaluated (session_id, model, etc.).
- metadata: Additional metadata dictionary.
- id: Unique feedback identifier.
- created_at: Timestamp when feedback was created.
property type() -> FeedbackType
Alias for feedback_type for backward compatibility.
| Type | Description |
|---|---|
| FeedbackType | The feedback type. |
Convert to dictionary.
| Type | Description |
|---|---|
| dict[str, Any] | Dictionary representation with ISO-formatted timestamp. |
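The to-dictionary contract, with its ISO-formatted timestamp, can be illustrated with a stand-in dataclass. The attribute names come from this page; the exact dictionary keys emitted by the real `to_dict()` are assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any


@dataclass
class FeedbackItemSketch:
    """Stand-in showing the to_dict() shape described above."""

    id: str
    feedback_type: str
    value: Any
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def to_dict(self) -> dict[str, Any]:
        return {
            "id": self.id,
            "type": self.feedback_type,   # mirrors the `type` alias
            "value": self.value,
            "created_at": self.created_at.isoformat(),  # ISO 8601 string
        }
```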
FeedbackMiddleware
Middleware for automatic feedback capture.
Captures prediction inputs/outputs automatically and provides hooks for user feedback collection.
Example

    from lexigram.app import Application
    from lexigram.ai.feedback import FeedbackMiddleware, FeedbackCollector

    app = Application()
    collector = FeedbackCollector()
    middleware = FeedbackMiddleware(collector)
    app.use(middleware)
def __init__( collector: FeedbackCollector, capture_inputs: bool = True, capture_outputs: bool = True, capture_metadata: bool = True, registry: FeedbackProcessorRegistry | None = None )
Initialize feedback middleware.
| Parameter | Type | Description |
|---|---|---|
| `collector` | FeedbackCollector | Feedback collector instance |
| `capture_inputs` | bool | Whether to capture request inputs |
| `capture_outputs` | bool | Whether to capture response outputs |
| `capture_metadata` | bool | Whether to capture additional metadata |
| `registry` | FeedbackProcessorRegistry | None | Feedback processor registry (defaults to one with built-in processors) |
def create_feedback_endpoint() -> Callable
Create a feedback submission endpoint.
| Type | Description |
|---|---|
| Callable | Async handler function for feedback submission |
Example

    middleware = FeedbackMiddleware(collector)
    feedback_handler = middleware.create_feedback_endpoint()

    # Use with your web framework:
    app.post("/feedback", feedback_handler)
FeedbackModule
AI Feedback collection and processing integration.
Call configure to register a FeedbackProtocol implementation along with the processor registry and storage backend for injection.
Usage
    from lexigram.ai.feedback.config import FeedbackConfig

    @module(
        imports=[
            FeedbackModule.configure(
                FeedbackConfig(storage="database")
            )
        ]
    )
    class AppModule(Module):
        pass

Error Handling

The feedback services use the Result pattern for expected failures. Domain errors are available via the exported exception hierarchy:

    from lexigram.ai.feedback.exceptions import (
        FeedbackError,            # base, catch-all
        FeedbackProcessingError,  # processor pipeline failure
        FeedbackValidationError,  # schema / data-validation failure
    )

Exports: FeedbackProtocol, FeedbackError, FeedbackProcessingError, FeedbackValidationError
def configure( cls, config: FeedbackConfig | None = None ) -> DynamicModule
Create a FeedbackModule with the given configuration.
| Parameter | Type | Description |
|---|---|---|
| `config` | FeedbackConfig | None | FeedbackConfig, a plain ``dict`` of the same keys, or ``None`` to read from environment variables. |
| Type | Description |
|---|---|
| DynamicModule | A DynamicModule descriptor. |
def stub( cls, config: FeedbackConfig | None = None ) -> DynamicModule
Create a FeedbackModule suitable for unit and integration testing.
Uses in-memory or no-op implementations with minimal side effects.
| Parameter | Type | Description |
|---|---|---|
| `config` | FeedbackConfig | None | Optional config override. Uses safe test defaults when None. |
| Type | Description |
|---|---|
| DynamicModule | A DynamicModule descriptor. |
FeedbackProcessedHook
Payload fired when feedback processing finishes.
FeedbackProvider
Provider for AI Feedback.
Registers FeedbackCollector and FeedbackProcessorRegistry. Wires a durable storage backend (DB-backed, optionally cached) into the collector during boot.
def __init__(config: FeedbackConfig | dict | None = None) -> None
def from_config( cls, config: FeedbackConfig, **context: object ) -> FeedbackProvider
Factory method for DI container setup.
async def register(container: ContainerRegistrarProtocol) -> None
Register the feedback services.
async def boot(container: ContainerResolverProtocol) -> None
Wire a durable storage backend into FeedbackCollector.
Shutdown phase.
Health check — always healthy (in-process domain provider).
No external backend to ping.
| Parameter | Type | Description |
|---|---|---|
| `timeout` | float | Ignored for in-process providers. |
| Type | Description |
|---|---|
| HealthCheckResult | Always HEALTHY — no external backend to ping. |
FeedbackService
High-level feedback submission and querying service.
Implements FeedbackProtocol. Wraps a FeedbackStoreProtocol with trace-id correlation and aggregation logic.
| Parameter | Type | Description |
|---|---|---|
| `store` | FeedbackStoreProtocol | None | Optional storage backend; when ``None`` the service operates in a degraded no-op mode and logs warnings instead of persisting. |
def __init__(store: FeedbackStoreProtocol | None = None) -> None
async def submit_feedback( trace_id: str, score: float, comment: str | None = None, metadata: dict[str, Any] | None = None ) -> None
Submit feedback for a traced AI generation.
| Parameter | Type | Description |
|---|---|---|
| `trace_id` | str | Identifier of the AI generation trace. |
| `score` | float | Numeric feedback score (e.g. 0.0–1.0 or 1–5). |
| `comment` | str | None | Optional free-text comment stored in metadata. |
| `metadata` | dict[str, Any] | None | Optional additional key-value metadata. |
async def get_feedback_stats( model: str | None = None, provider: str | None = None ) -> dict[str, Any]
Get aggregate feedback statistics.
| Parameter | Type | Description |
|---|---|---|
| `model` | str | None | Optional model name for context (currently informational). |
| `provider` | str | None | Optional provider name for context (currently informational). |
| Type | Description |
|---|---|
| dict[str, Any] | Dictionary with ``total_count``, ``average_rating``, and ``by_type`` breakdown, plus optional ``model``/``provider`` keys when supplied. |
FeedbackStoredHook
Payload fired when feedback is persisted by a feedback store.
FeedbackSubmittedEvent
Emitted when user feedback is submitted and persisted.
Consumed by: quality tracking, model improvement, audit.
FeedbackSubmittedHook
Payload fired when feedback is submitted to the feedback pipeline.
FeedbackSummary
Aggregated statistics for collected feedback items.
Attributes:
- total_count: Total number of feedback items in the window.
- average_rating: Mean rating across all RATING-type items, or None if no ratings were collected.
- count_by_type: Item count keyed by FeedbackType enum value (e.g. "rating", "text", "correction", "label").
FeedbackType
Type of feedback collected.
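FeedbackType can be sketched as a string-valued enum. The four kinds (rating, text, correction, label) are listed under FeedbackItem and FeedbackSummary on this page; the member names and the `str, Enum` base are assumptions about how the enum is defined.

```python
from enum import Enum


class FeedbackType(str, Enum):
    """Sketch of the feedback-type enum (values from this page)."""

    RATING = "rating"
    TEXT = "text"
    CORRECTION = "correction"
    LABEL = "label"
```

A `str`-valued enum keeps serialized items readable: `FeedbackType.RATING` compares equal to the plain string `"rating"` stored by a backend.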
Exceptions
FeedbackError
Base exception for all feedback-related errors.
FeedbackProcessingError
Raised when a feedback processor fails.
FeedbackValidationError
Raised when feedback data fails validation.