
API Reference

Protocol for submitting and querying AI feedback.
```python
async def submit_feedback(
    trace_id: str,
    score: float,
    comment: str | None = None,
    metadata: dict[str, Any] | None = None
) -> None
```

Submit feedback for an AI generation.

```python
async def get_feedback_stats(
    model: str | None = None,
    provider: str | None = None
) -> dict[str, Any]
```

Query aggregate feedback statistics.


Persist and query collected feedback items.

All implementations must be async, and `save()` must be safe and cheap to call from request-handling hot paths.

```python
async def save(feedback: FeedbackItem) -> Result[str, Exception]
```

Persist a single feedback item.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `feedback` | `FeedbackItem` | The feedback item to store. |

Returns

| Type | Description |
| --- | --- |
| `Result[str, Exception]` | `Ok(feedback.id)` on success, `Err(exception)` on failure. |
```python
async def find_by_session(session_id: str) -> list[FeedbackItem]
```

Retrieve all feedback items for a session.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `session_id` | `str` | Session identifier from feedback context. |

Returns

| Type | Description |
| --- | --- |
| `list[FeedbackItem]` | All collected items in that session, newest first. |
```python
async def find_by_type(
    feedback_type: FeedbackType,
    *,
    limit: int = 100
) -> list[FeedbackItem]
```

Retrieve feedback items of a given type.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `feedback_type` | `FeedbackType` | The type to filter by. |
| `limit` | `int` | Maximum number of results (default 100). |

Returns

| Type | Description |
| --- | --- |
| `list[FeedbackItem]` | Matching items ordered by creation time descending. |
```python
async def aggregate(
    *,
    window_hours: int = 24
) -> FeedbackSummary
```

Compute summary statistics for items in a time window.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `window_hours` | `int` | Look-back window in hours (default 24). |

Returns

| Type | Description |
| --- | --- |
| `FeedbackSummary` | Aggregated statistics for the window. |
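For reference, a minimal in-memory implementation of this protocol might look like the sketch below. The `FeedbackItem` stand-in, the plain-`str` feedback type, and the dict returned by `aggregate` are simplified assumptions for illustration, not the library's actual classes, and `save` returns a bare ID rather than a `Result`.

```python
import asyncio
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from typing import Any

# Simplified stand-in for the library's FeedbackItem (assumption for this sketch).
@dataclass
class FeedbackItem:
    id: str
    feedback_type: str  # e.g. "rating", "text", "correction", "label"
    value: Any
    session_id: str
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class InMemoryFeedbackStore:
    """Illustrative FeedbackStoreProtocol implementation backed by a list."""

    def __init__(self) -> None:
        self._items: list[FeedbackItem] = []

    async def save(self, feedback: FeedbackItem) -> str:
        # A real implementation would return Ok(feedback.id) / Err(exception).
        self._items.append(feedback)
        return feedback.id

    async def find_by_session(self, session_id: str) -> list[FeedbackItem]:
        matches = [i for i in self._items if i.session_id == session_id]
        return sorted(matches, key=lambda i: i.created_at, reverse=True)

    async def find_by_type(
        self, feedback_type: str, *, limit: int = 100
    ) -> list[FeedbackItem]:
        matches = [i for i in self._items if i.feedback_type == feedback_type]
        matches.sort(key=lambda i: i.created_at, reverse=True)
        return matches[:limit]

    async def aggregate(self, *, window_hours: int = 24) -> dict[str, Any]:
        cutoff = datetime.now(timezone.utc) - timedelta(hours=window_hours)
        recent = [i for i in self._items if i.created_at >= cutoff]
        ratings = [i.value for i in recent if i.feedback_type == "rating"]
        return {
            "total_count": len(recent),
            "average_rating": sum(ratings) / len(ratings) if ratings else None,
        }
```

Note the newest-first ordering and the keyword-only `limit`/`window_hours` parameters, mirroring the protocol signatures above.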

Write-through cache: cold reads from the backing `store`, hot reads from the `cache`.

Session-scoped and type-scoped result lists are cached with short TTLs. Any save call invalidates the relevant cache entries so subsequent reads always reflect the latest data.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `store` | `FeedbackStoreProtocol` | Durable backing store (e.g. `DatabaseFeedbackStore`). |
| `cache` | `CacheBackendProtocol` | Cache backend used for hot reads. |

```python
def __init__(
    store: FeedbackStoreProtocol,
    cache: CacheBackendProtocol
) -> None
```
```python
async def save(feedback: FeedbackItem) -> Result[str, FeedbackError]
```

Persist feedback and invalidate related cache entries.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `feedback` | `FeedbackItem` | Item to persist. |

Returns

| Type | Description |
| --- | --- |
| `Result[str, FeedbackError]` | Result from the backing store. |
```python
async def find_by_session(session_id: str) -> list[FeedbackItem]
```

Return items for `session_id`, using the cache when available.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `session_id` | `str` | Session identifier. |

Returns

| Type | Description |
| --- | --- |
| `list[FeedbackItem]` | Feedback items for the session, newest first. |
```python
async def find_by_type(
    feedback_type: FeedbackType,
    *,
    limit: int = 100
) -> list[FeedbackItem]
```

Return items of `feedback_type`, using the cache when available.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `feedback_type` | `FeedbackType` | Type to filter by. |
| `limit` | `int` | Maximum result count. |

Returns

| Type | Description |
| --- | --- |
| `list[FeedbackItem]` | Matching feedback items, newest first. |
```python
async def aggregate(
    *,
    window_hours: int = 24
) -> FeedbackSummary
```

Delegate aggregation to the backing store; results are not cached.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `window_hours` | `int` | Look-back window in hours. |

Returns

| Type | Description |
| --- | --- |
| `FeedbackSummary` | Aggregated summary for the window. |

Persist and query feedback items via `DatabaseProviderProtocol`.

The backing table (`ai_feedback`) is created lazily on first write. Indexed columns: `session_id`, `type`, `created_at`.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `provider` | `DatabaseProviderProtocol` | DI-injected database provider. |

```python
def __init__(provider: DatabaseProviderProtocol) -> None
```
```python
async def save(feedback: FeedbackItem) -> Result[str, FeedbackError]
```

Persist feedback and return its ID on success.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `feedback` | `FeedbackItem` | Item to store. |

Returns

| Type | Description |
| --- | --- |
| `Result[str, FeedbackError]` | `Ok(feedback.id)` on success, `Err(FeedbackError)` on failure. |
```python
async def find_by_session(session_id: str) -> list[FeedbackItem]
```

Return all items collected during `session_id`, newest first.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `session_id` | `str` | Session identifier to filter by. |

Returns

| Type | Description |
| --- | --- |
| `list[FeedbackItem]` | Matching feedback items ordered by creation time descending. |
```python
async def find_by_type(
    feedback_type: FeedbackType,
    *,
    limit: int = 100
) -> list[FeedbackItem]
```

Return feedback items of `feedback_type`, newest first.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `feedback_type` | `FeedbackType` | Type to filter by. |
| `limit` | `int` | Maximum number of results (default: 100). |

Returns

| Type | Description |
| --- | --- |
| `list[FeedbackItem]` | Matching feedback items ordered by creation time descending. |
```python
async def aggregate(
    *,
    window_hours: int = 24
) -> FeedbackSummary
```

Compute summary statistics for items in the last `window_hours` hours.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `window_hours` | `int` | Look-back window in hours (default: 24). |

Returns

| Type | Description |
| --- | --- |
| `FeedbackSummary` | Aggregated summary for the window. |

Collects and stores user feedback.

Provides methods to capture different types of feedback and integrate with ML retraining pipelines.

Example

```python
collector = FeedbackCollector()

await collector.collect_rating(
    rating=5,
    context={"model": "gpt-4", "input": "Hello"}
)

await collector.collect_correction(
    original="incorrect output",
    corrected="correct output",
    context={"model_id": "123"}
)

items = await collector.get_feedback()
```

```python
def __init__(storage: FeedbackStoreProtocol | None = None)
```

Initialize the feedback collector.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `storage` | `FeedbackStoreProtocol \| None` | Optional storage backend (e.g. database, file). If `None`, uses in-memory storage. |
```python
async def collect_rating(
    rating: float,
    context: dict[str, Any] | None = None,
    metadata: dict[str, Any] | None = None
) -> str
```

Collect rating feedback.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `rating` | `float` | Numeric rating value. |
| `context` | `dict[str, Any] \| None` | Context about what was rated. |
| `metadata` | `dict[str, Any] \| None` | Additional metadata. |

Returns

| Type | Description |
| --- | --- |
| `str` | Feedback ID. |
```python
async def collect_text(
    text: str,
    context: dict[str, Any] | None = None,
    metadata: dict[str, Any] | None = None
) -> str
```

Collect text feedback.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `text` | `str` | Feedback text. |
| `context` | `dict[str, Any] \| None` | Context about what the feedback is for. |
| `metadata` | `dict[str, Any] \| None` | Additional metadata. |

Returns

| Type | Description |
| --- | --- |
| `str` | Feedback ID. |
```python
async def collect_correction(
    original: str,
    corrected: str,
    context: dict[str, Any] | None = None,
    metadata: dict[str, Any] | None = None
) -> str
```

Collect correction feedback.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `original` | `str` | Original (incorrect) output. |
| `corrected` | `str` | Corrected output. |
| `context` | `dict[str, Any] \| None` | Context. |
| `metadata` | `dict[str, Any] \| None` | Additional metadata. |

Returns

| Type | Description |
| --- | --- |
| `str` | Feedback ID. |
```python
async def collect_label(
    label: Any,
    input_data: Any,
    context: dict[str, Any] | None = None,
    metadata: dict[str, Any] | None = None
) -> str
```

Collect a ground-truth label.

Useful for supervised learning and model retraining.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `label` | `Any` | Ground-truth label. |
| `input_data` | `Any` | Input that this label corresponds to. |
| `context` | `dict[str, Any] \| None` | Context. |
| `metadata` | `dict[str, Any] \| None` | Additional metadata. |

Returns

| Type | Description |
| --- | --- |
| `str` | Feedback ID. |
```python
async def get_feedback(
    feedback_type: FeedbackType | None = None,
    limit: int | None = None
) -> list[FeedbackItem]
```

Get feedback items.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `feedback_type` | `FeedbackType \| None` | Optional filter by type. |
| `limit` | `int \| None` | Maximum number of items to return. |

Returns

| Type | Description |
| --- | --- |
| `list[FeedbackItem]` | List of feedback items from memory or the storage backend. |
```python
async def get_feedback_dict(
    feedback_type: FeedbackType | None = None,
    limit: int | None = None
) -> list[dict[str, Any]]
```

Get feedback as dictionaries.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `feedback_type` | `FeedbackType \| None` | Optional filter by type. |
| `limit` | `int \| None` | Maximum number of items. |

Returns

| Type | Description |
| --- | --- |
| `list[dict[str, Any]]` | List of feedback dictionaries. |
```python
def clear() -> None
```

Clear all feedback from the in-memory buffer.


Configuration for AI feedback collection.

Loaded from the `ai_feedback:` key in `application.yaml`, with environment variable overrides via the `LEX_AI_FEEDBACK__*` prefix.
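For illustration, a fragment of `application.yaml` might look like the sketch below. Only the `storage` key appears elsewhere in these docs (in the module usage example); treat any other key as an assumption to verify against `FeedbackConfig` itself.

```yaml
# application.yaml (illustrative; `storage` is the only key shown in these docs)
ai_feedback:
  storage: database
```

Following the documented prefix convention, an environment variable such as `LEX_AI_FEEDBACK__STORAGE=memory` would be expected to override the YAML value.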

```python
def validate_for_environment(env: Environment) -> list[ConfigIssue]
```

Check config is safe for the target environment.


Context manager for feedback collection during operations.

Automatically captures context and results for feedback.

Example

```python
collector = FeedbackCollector()

async with FeedbackContext(collector, operation="prediction") as ctx:
    result = await model.predict(input_data)
    ctx.set_result(result)
```

```python
def __init__(
    collector: FeedbackCollector,
    operation: str,
    metadata: dict[str, Any] | None = None
)
```

Initialize the feedback context.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `collector` | `FeedbackCollector` | Feedback collector. |
| `operation` | `str` | Operation name. |
| `metadata` | `dict[str, Any] \| None` | Additional metadata. |
```python
def set_input(input_data: Any) -> None
```

Set input data.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `input_data` | `Any` | Input to the operation. |
```python
def set_result(result: Any) -> None
```

Set the operation result.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `result` | `Any` | Result of the operation. |

A single feedback item.

Attributes

- `feedback_type`: Type of feedback (rating, text, correction, or label).
- `value`: The feedback value (e.g. rating score, text comment).
- `context`: Context about what was being evaluated (session_id, model, etc.).
- `metadata`: Additional metadata dictionary.
- `id`: Unique feedback identifier.
- `created_at`: Timestamp when the feedback was created.

```python
property type() -> FeedbackType
```

Alias for `feedback_type`, kept for backward compatibility.

Returns

| Type | Description |
| --- | --- |
| `FeedbackType` | The feedback type. |
```python
def to_dict() -> dict[str, Any]
```

Convert to a dictionary.

Returns

| Type | Description |
| --- | --- |
| `dict[str, Any]` | Dictionary representation with an ISO-formatted timestamp. |
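The ISO-timestamp serialization described above can be sketched as follows. The dataclass stand-in and its field subset are assumptions for illustration; the real `FeedbackItem.to_dict` may include more keys.

```python
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone
from typing import Any

@dataclass
class SketchFeedbackItem:
    """Simplified stand-in for FeedbackItem (subset of fields)."""
    id: str
    feedback_type: str
    value: Any
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def to_dict(self) -> dict[str, Any]:
        data = asdict(self)
        # Replace the datetime with its ISO-8601 string form for JSON-friendliness.
        data["created_at"] = self.created_at.isoformat()
        return data
```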

Middleware for automatic feedback capture.

Captures prediction inputs/outputs automatically and provides hooks for user feedback collection.

Example

```python
from lexigram.app import Application
from lexigram.ai.feedback import FeedbackMiddleware, FeedbackCollector

app = Application()
collector = FeedbackCollector()
middleware = FeedbackMiddleware(collector)
app.use(middleware)
```

```python
def __init__(
    collector: FeedbackCollector,
    capture_inputs: bool = True,
    capture_outputs: bool = True,
    capture_metadata: bool = True,
    registry: FeedbackProcessorRegistry | None = None
)
```

Initialize the feedback middleware.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `collector` | `FeedbackCollector` | Feedback collector instance. |
| `capture_inputs` | `bool` | Whether to capture request inputs. |
| `capture_outputs` | `bool` | Whether to capture response outputs. |
| `capture_metadata` | `bool` | Whether to capture additional metadata. |
| `registry` | `FeedbackProcessorRegistry \| None` | Feedback processor registry (defaults to one with built-in processors). |
```python
def create_feedback_endpoint() -> Callable
```

Create a feedback submission endpoint.

Returns

| Type | Description |
| --- | --- |
| `Callable` | Async handler function for feedback submission. |

Example

```python
middleware = FeedbackMiddleware(collector)
feedback_handler = middleware.create_feedback_endpoint()
```


AI Feedback collection and processing integration.

Call configure to register a FeedbackProtocol implementation along with the processor registry and storage backend for injection.

Usage

```python
from lexigram.ai.feedback.config import FeedbackConfig

@module(
    imports=[
        FeedbackModule.configure(
            FeedbackConfig(storage="database")
        )
    ]
)
class AppModule(Module):
    pass
```

Error Handling

The feedback services use the Result pattern for expected failures. Domain errors are available via the exported exception hierarchy:

```python
from lexigram.ai.feedback.exceptions import (
    FeedbackError,            # base class; catch-all
    FeedbackProcessingError,  # processor pipeline failure
    FeedbackValidationError,  # schema / data-validation failure
)
```

Exports: `FeedbackProtocol`, `FeedbackError`, `FeedbackProcessingError`, `FeedbackValidationError`

```python
def configure(
    cls,
    config: FeedbackConfig | None = None
) -> DynamicModule
```

Create a FeedbackModule with the given configuration.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `config` | `FeedbackConfig \| None` | A `FeedbackConfig`, a plain `dict` of the same keys, or `None` to read from environment variables. |

Returns

| Type | Description |
| --- | --- |
| `DynamicModule` | A `DynamicModule` descriptor. |
```python
def stub(
    cls,
    config: FeedbackConfig | None = None
) -> DynamicModule
```

Create a FeedbackModule suitable for unit and integration testing.

Uses in-memory or no-op implementations with minimal side effects.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `config` | `FeedbackConfig \| None` | Optional config override. Uses safe test defaults when `None`. |

Returns

| Type | Description |
| --- | --- |
| `DynamicModule` | A `DynamicModule` descriptor. |

Payload fired when feedback processing finishes.

Provider for AI Feedback.

Registers FeedbackCollector and FeedbackProcessorRegistry. Wires a durable storage backend (DB-backed, optionally cached) into the collector during boot.

```python
def __init__(config: FeedbackConfig | dict | None = None) -> None
```

```python
def from_config(
    cls,
    config: FeedbackConfig,
    **context: object
) -> FeedbackProvider
```

Factory method for DI container setup.

```python
async def register(container: ContainerRegistrarProtocol) -> None
```

Register the feedback services.

```python
async def boot(container: ContainerResolverProtocol) -> None
```

Wire a durable storage backend into FeedbackCollector.

```python
async def shutdown() -> None
```

Shutdown phase.

```python
async def health_check(timeout: float = 5.0) -> HealthCheckResult
```

Health check for this in-process domain provider; always reports healthy, since there is no external backend to ping.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `timeout` | `float` | Ignored for in-process providers. |

Returns

| Type | Description |
| --- | --- |
| `HealthCheckResult` | Always `HEALTHY`. |

High-level feedback submission and querying service.

Implements FeedbackProtocol. Wraps a FeedbackStoreProtocol with trace-id correlation and aggregation logic.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `store` | `FeedbackStoreProtocol \| None` | Optional storage backend; when `None`, the service operates in a degraded no-op mode and logs warnings instead of persisting. |

```python
def __init__(store: FeedbackStoreProtocol | None = None) -> None
```
```python
async def submit_feedback(
    trace_id: str,
    score: float,
    comment: str | None = None,
    metadata: dict[str, Any] | None = None
) -> None
```

Submit feedback for a traced AI generation.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `trace_id` | `str` | Identifier of the AI generation trace. |
| `score` | `float` | Numeric feedback score (e.g. 0.0–1.0 or 1–5). |
| `comment` | `str \| None` | Optional free-text comment stored in metadata. |
| `metadata` | `dict[str, Any] \| None` | Optional additional key-value metadata. |
```python
async def get_feedback_stats(
    model: str | None = None,
    provider: str | None = None
) -> dict[str, Any]
```

Get aggregate feedback statistics.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `model` | `str \| None` | Optional model name for context (currently informational). |
| `provider` | `str \| None` | Optional provider name for context (currently informational). |

Returns

| Type | Description |
| --- | --- |
| `dict[str, Any]` | Dictionary with `total_count`, `average_rating`, and a `by_type` breakdown, plus optional `model`/`provider` keys when supplied. |

Payload fired when feedback is persisted by a feedback store.

Emitted when user feedback is submitted and persisted.

Consumed by: quality tracking, model improvement, audit.


Payload fired when feedback is submitted to the feedback pipeline.

Aggregated statistics for collected feedback items.

Attributes

- `total_count`: Total number of feedback items in the window.
- `average_rating`: Mean rating across all RATING-type items, or `None` if no ratings were collected.
- `count_by_type`: Item count keyed by `FeedbackType` enum value (e.g. "rating", "text", "correction", "label").


Type of feedback collected.

Base exception for all feedback-related errors.

Raised when a feedback processor fails.

Raised when feedback data fails validation.