
# AI Observability (lexigram-ai-observability)

AI observability for the Lexigram Framework — tracing, metrics, and monitoring


AI-layer observability for the Lexigram Framework. Provides tracing, metrics, health monitoring, and decorator-based instrumentation for LLM calls, RAG operations, and vector store interactions — all wired through the DI container via ObservabilityModule. Zero-config usage starts with sensible defaults.

```sh
uv add lexigram-ai-observability
# Optional extras
uv add "lexigram-ai-observability[opentelemetry]"
```
```python
from lexigram import Application
from lexigram.di.module import Module, module
from lexigram.ai.observability import ObservabilityModule
from lexigram.ai.observability.config import ObservabilityConfig


@module(imports=[
    ObservabilityModule.configure(
        ObservabilityConfig(
            enabled=True,
            metrics_enabled=True,
            tracing_enabled=True,
            health_checks_enabled=True,
        )
    )
])
class AppModule(Module):
    pass


app = Application(modules=[AppModule])

if __name__ == "__main__":
    app.run()
```

**Zero-config usage:** Call `ObservabilityModule.configure()` with no arguments to use defaults.

### Option 1 — application.yaml

```yaml
# application.yaml
ai_observability:
  enabled: true
  metrics_enabled: true
  tracing_enabled: true
  health_checks_enabled: true
```
### Option 2 — Profiles + Environment Variables (recommended)
```sh
# Each config field maps to an environment variable
export LEX_AI_OBSERVABILITY__ENABLED=true
```
```python
from lexigram.ai.observability import ObservabilityModule
from lexigram.ai.observability.config import ObservabilityConfig

config = ObservabilityConfig(
    enabled=True,
    metrics_enabled=True,
    tracing_enabled=True,
    health_checks_enabled=True,
)

ObservabilityModule.configure(config)
```
| Field | Default | Env var | Description |
| --- | --- | --- | --- |
| `enabled` | `True` | `LEX_AI_OBSERVABILITY__ENABLED` | Master on/off switch for all observability |
| `metrics_enabled` | `True` | `LEX_AI_OBSERVABILITY__METRICS_ENABLED` | Enable metrics collection |
| `tracing_enabled` | `True` | `LEX_AI_OBSERVABILITY__TRACING_ENABLED` | Enable distributed tracing |
| `health_checks_enabled` | `True` | `LEX_AI_OBSERVABILITY__HEALTH_CHECKS_ENABLED` | Enable background health checking |
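The variable names follow a simple pattern: the `LEX_AI_OBSERVABILITY__` prefix plus the upper-cased field name. A minimal, self-contained sketch of that mapping — the `read_flag` helper is hypothetical and only illustrates the naming convention, not how `ObservabilityConfig` actually parses its environment:

```python
import os

# Hypothetical helper illustrating the naming convention:
# LEX_AI_OBSERVABILITY__ prefix + upper-cased field name.
def read_flag(field: str, default: bool = True) -> bool:
    raw = os.environ.get(f"LEX_AI_OBSERVABILITY__{field.upper()}")
    if raw is None:
        return default  # unset variables fall back to the field default
    return raw.strip().lower() in ("1", "true", "yes", "on")

os.environ["LEX_AI_OBSERVABILITY__TRACING_ENABLED"] = "false"
print(read_flag("tracing_enabled"))  # False: explicitly disabled
print(read_flag("metrics_enabled"))  # True: unset, default applies
```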
| Method | Description |
| --- | --- |
| `ObservabilityModule.configure(config)` | Fully-configured observability module |
| `ObservabilityModule.stub()` | No-op observability for testing |
- **Tracing:** Distributed tracing for LLM calls, RAG pipeline stages, and vector store queries
- **Metrics:** Token usage, latency, error rates, and cache hit ratios
- **Health monitoring:** Background health checks for AI components
- **Decorators:** `@trace_llm`, `@trace_rag`, `@track_llm_call` for automatic instrumentation
- **Observable wrappers:** `ObservableLLMClient` and `ObservableVectorStore`
- **OpenTelemetry support:** Optional OpenTelemetry integration
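Conceptually, the tracing decorators wrap a call in a span that records duration and outcome. The following pure-Python sketch shows the general pattern only — it is not the library's implementation, and the span fields and the `generate` function are illustrative:

```python
import functools
import time

def trace_llm(func):
    """Illustrative stand-in for a tracing decorator like @trace_llm:
    wrap the call in a timed span and record success or failure."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        status = "ok"
        try:
            return func(*args, **kwargs)
        except Exception:
            status = "error"
            raise
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            print(f"span llm.call name={func.__name__} status={status} {elapsed_ms:.1f}ms")
    return wrapper

@trace_llm
def generate(prompt: str) -> str:
    # stand-in for a real LLM call
    return f"echo: {prompt}"

print(generate("hello"))  # prints the span line, then "echo: hello"
```

Because the wrapper emits the span in a `finally` block, failures are recorded too, and `functools.wraps` preserves the decorated function's name and docstring.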
```python
from lexigram import Application
from lexigram.ai.observability import ObservabilityModule


async def test_my_feature() -> None:
    async with Application.boot(modules=[ObservabilityModule.stub()]) as app:
        # your test code
        ...
```
| File | What it contains |
| --- | --- |
| `src/lexigram/ai/observability/module.py` | Module factory — `configure()` and `stub()` |
| `src/lexigram/ai/observability/config.py` | `ObservabilityConfig` — environment-aware settings |
| `src/lexigram/ai/observability/di/provider.py` | `ObservabilityProvider` — registers observability services |
| `src/lexigram/ai/observability/tracing/` | `AITracer` — distributed tracing for AI operations |
| `src/lexigram/ai/observability/metrics/` | `AIMetrics` — token usage, latency, error rates |
| `src/lexigram/ai/observability/health/` | `AIHealthMonitor` — background health checks |
| `src/lexigram/ai/observability/decorators.py` | `@trace_llm`, `@trace_rag`, `@track_llm_call` |
| `src/lexigram/ai/observability/exceptions.py` | Typed exceptions |