
API Reference

Complete configuration for Lexigram Intelligence.

Attributes:

- `name`: Configuration name (default: `"ai"`)
- `enabled`: Whether AI features are enabled
- `llm`: LLM configuration
- `vector`: Vector store configuration
- `rag`: RAG pipeline configuration
- `governance`: AI governance configuration
- `observability`: Observability configuration
- `subsystems`: Dynamic configuration for third-party AI subsystems

def get_provider_class(cls) -> type[ProviderProtocol]

Return the provider class for this config.

def validate_production_security() -> AIConfig

Block insecure AI configurations in production.
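The reference does not spell out which configurations count as insecure, so the contract can only be sketched. In the standalone sketch below, the `environment` and `allow_insecure` fields are illustrative guesses, not fields of the real `AIConfig`:

```python
from dataclasses import dataclass


# Standalone sketch of the validate_production_security() contract above.
# AIConfigSketch and its fields (environment, allow_insecure) are
# hypothetical; the real AIConfig fields are not listed in this reference.
@dataclass
class AIConfigSketch:
    environment: str = "development"
    allow_insecure: bool = False

    def validate_production_security(self) -> "AIConfigSketch":
        # Block insecure settings only when running in production.
        if self.environment == "production" and self.allow_insecure:
            raise ValueError("Insecure AI configuration is blocked in production")
        return self
```

As documented, the validator returns the config itself on success, so it can be chained directly after construction.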


Core AI layer: LLM orchestration, RAG pipelines, and governance.

Call `configure` to set up the AI subsystem. Sub-modules such as LLMModule and RAGModule may be imported independently for more granular control.

Usage

```python
from lexigram.ai import AIModule
from lexigram.ai.config import AIConfig
from lexigram.ai.llm.config import ClientConfig

@module(
    imports=[
        AIModule.configure(
            AIConfig(llm=ClientConfig(provider="openai", model="gpt-4o"))
        )
    ]
)
class AppModule(Module):
    pass
```
def configure(
    cls,
    config: Any | None = None,
    **kwargs: Any
) -> DynamicModule

Create an AIModule with explicit configuration.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `config` | `Any \| None` | `AIConfig`, or `None` for framework defaults |
| `**kwargs` | `Any` | Additional keyword arguments forwarded to `AIProvider` |
Returns

| Type | Description |
| --- | --- |
| `DynamicModule` | A `DynamicModule` descriptor |
def stub(
    cls,
    config: Any = None
) -> DynamicModule

Return a no-op AIModule for testing.

Registers all AI sub-modules (agents, LLM, RAG, memory, prompts, skills, sessions, feedback, governance, and guards) with their testing stubs, which have minimal side effects.

Returns

| Type | Description |
| --- | --- |
| `DynamicModule` | A `DynamicModule` with all AI sub-modules in testing configuration |
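The stub pattern can be illustrated without the framework: a no-op client that keeps the real call shape but performs no I/O. `StubLLMClient` and its return shape below are illustrative, not actual lexigram stub classes:

```python
import asyncio


# Illustrative no-op client in the spirit of AIModule.stub(): the same
# call shape as a real LLM client, but no network access or side effects.
# StubLLMClient is a hypothetical name, not a lexigram class.
class StubLLMClient:
    async def chat(self, messages, tools=None, **kwargs):
        # Deterministic canned reply so tests are repeatable.
        return {"role": "assistant", "content": "stub response"}


reply = asyncio.run(
    StubLLMClient().chat([{"role": "user", "content": "hi"}])
)
```

Because the stub is deterministic and side-effect free, tests that depend on an injected LLM client can run without credentials or a network connection.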

Provider for registering Intelligence services with the Lexigram DI container.

This provider orchestrates sub-providers (LLMProvider, VectorProvider, RAGProvider) and is solely responsible for monitoring, governance, and AIProvider-specific services (RAGCache).

Example

```python
from lexigram.app import Application
from lexigram.ai import AIModule

app = Application()
app.add_module(AIModule.configure(...))

@Controller("/chat")
class ChatController:
    def __init__(self, llm: LLMClientProtocol):
        self.llm = llm
```

def __init__(
    config: AIConfig | None = None,
    llm_config: ClientConfig | None = None,
    vector_config: VectorConfig | None = None,
    name: str = 'ai'
) -> None

Initialize the Intelligence Provider.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `config` | `AIConfig \| None` | Initial AI configuration (optional; can be set by the orchestrator) |
| `llm_config` | `ClientConfig \| None` | LLM-specific configuration (overrides `config.llm`) |
| `vector_config` | `VectorConfig \| None` | Vector-specific configuration (overrides `config.vector`) |
| `name` | `str` | Provider name |
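Per the table above, `llm_config` and `vector_config` take precedence over the corresponding fields of `config`. That precedence rule can be sketched in isolation (`resolve_override` is a hypothetical helper, not a provider method):

```python
def resolve_override(base_value, override_value):
    # An explicit override (llm_config / vector_config) wins over the
    # corresponding field on the AIConfig passed as `config`.
    # resolve_override is illustrative, not part of the lexigram API.
    return override_value if override_value is not None else base_value
```

Passing `None` for an override leaves the value from `config` in effect, matching the "overrides" wording in the parameter table.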
property intelligence_config() -> AIConfig

Get the current AI configuration (from override or container-provided config).

property database_provider() -> DatabaseProviderProtocol | None

Get the resolved database provider (set during boot).

property cache_backend() -> CacheBackendProtocol | None

Get the resolved cache backend (set during boot).

async def register(container: ContainerRegistrarProtocol) -> None

Register services with the DI container.

Registers monitoring, governance, and config singletons directly. Delegates LLM, Vector, and RAG service registration to the respective sub-providers.

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `container` | `ContainerRegistrarProtocol` | The Lexigram DI container |
async def chat(
    messages: list[Any],
    tools: list[Any] | None = None,
    **kwargs: Any
) -> Any

Chat with optional tool calling. Delegates to LLM sub-provider’s client.
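The message and tool payload shapes are not documented here. The fake below only mirrors the documented `chat(messages, tools=None, **kwargs)` signature so the call pattern can be exercised without the framework; `FakeAIProvider` and the OpenAI-style dicts are assumptions:

```python
import asyncio


# Minimal stand-in mirroring the documented signature:
# chat(messages, tools=None, **kwargs) -> Any.
# FakeAIProvider is hypothetical; the real provider delegates the call
# to the LLM sub-provider's client.
class FakeAIProvider:
    async def chat(self, messages, tools=None, **kwargs):
        return {
            "content": messages[-1]["content"],
            "tools_offered": [t["function"]["name"] for t in (tools or [])],
        }


result = asyncio.run(
    FakeAIProvider().chat(
        [{"role": "user", "content": "weather in Oslo?"}],
        tools=[{"type": "function", "function": {"name": "get_weather"}}],
    )
)
```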

async def boot(container: ContainerResolverProtocol) -> None

Start the intelligence provider.

Performs async I/O only for AIProvider-specific services: RAGCache. Sub-providers handle their own async initialization during register().

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `container` | `ContainerResolverProtocol` | The DI container |
async def shutdown() -> None

Clean up resources on application shutdown.

async def health_check(timeout: float = 5.0) -> HealthCheckResult

Check provider health by aggregating sub-provider and local service checks.

Returns

| Type | Description |
| --- | --- |
| `HealthCheckResult` | Structured `HealthCheckResult` with component health information |
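Aggregation across sub-providers with a shared timeout might look like the sketch below. The component names and the dict result shape are assumptions, since `HealthCheckResult`'s fields are not listed in this reference:

```python
import asyncio


# Sketch of the documented aggregation: run sub-provider checks
# concurrently under one timeout and roll them up into a single result.
# The placeholder checks and the dict shape stand in for the real
# sub-providers and HealthCheckResult.
async def check_llm() -> bool:
    return True  # placeholder sub-provider check


async def check_vector() -> bool:
    return True  # placeholder sub-provider check


async def health_check(timeout: float = 5.0) -> dict:
    llm_ok, vector_ok = await asyncio.wait_for(
        asyncio.gather(check_llm(), check_vector()), timeout
    )
    return {
        "healthy": llm_ok and vector_ok,
        "components": {"llm": llm_ok, "vector": vector_ok},
    }


status = asyncio.run(health_check(timeout=1.0))
```

Running the checks concurrently keeps the overall latency bounded by the single `timeout` rather than the sum of the individual checks.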