
API Reference

GuardPipelineProtocol

Protocol for guard pipeline execution.

Orchestrates a chain of input and/or output guards, collecting results and determining the aggregate action.

async def check_input(
    content: str,
    *,
    messages: list[Any] | None = None,
    metadata: dict[str, Any] | None = None
) -> Result[GuardResultProtocol, GuardError]

Run all input guards against the content.

Parameters

- `content` (str): Input text to guard.
- `messages` (list[Any] | None): Optional structured messages.
- `metadata` (dict[str, Any] | None): Optional request metadata.

Returns

- Result[GuardResultProtocol, GuardError]: Aggregate guard result.
async def check_output(
    content: str,
    *,
    original_input: str | None = None,
    metadata: dict[str, Any] | None = None
) -> Result[GuardResultProtocol, GuardError]

Run all output guards against the content.

Parameters

- `content` (str): Output text to guard.
- `original_input` (str | None): The original user input.
- `metadata` (dict[str, Any] | None): Optional request metadata.

Returns

- Result[GuardResultProtocol, GuardError]: Aggregate guard result.

GuardResultProtocol

Protocol for guard evaluation results.

Every guard returns a result indicating whether the content passed, was blocked, or triggered a warning.

property passed() -> bool

Whether the guard check passed.

property action() -> str

Action taken: ‘pass’, ‘block’, ‘warn’, or ‘redact’.

property guard_name() -> str

Name of the guard that produced this result.

property details() -> dict[str, Any]

Additional details about the guard evaluation.
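
A minimal sketch of branching on these properties; Result unwrapping is elided as in the examples elsewhere on this page, and the guard and logger variables are hypothetical:

result = await guard.check(user_message)
if result.action == "block":
    # Reject the request; details may carry diagnostics such as match positions.
    raise RuntimeError(f"blocked by {result.guard_name}: {result.details}")
elif result.action == "warn":
    # "warn" and "redact" still report passed == True (see GuardCheckResult below).
    logger.warning("guard %s warned: %s", result.guard_name, result.details)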


InputGuardProtocol

Protocol for input content guards.

Input guards inspect user prompts and messages before they are sent to an LLM. They can block, warn, or redact content.

property name() -> str

Guard identifier.

async def check(
    content: str,
    *,
    messages: list[Any] | None = None,
    metadata: dict[str, Any] | None = None
) -> Result[GuardResultProtocol, GuardError]

Evaluate input content against this guard’s rules.

Parameters

- `content` (str): The raw text content to check.
- `messages` (list[Any] | None): Optional structured message list for context.
- `metadata` (dict[str, Any] | None): Optional metadata (user_id, model, etc.).

Returns

- Result[GuardResultProtocol, GuardError]: A GuardCheckResult indicating pass/block/warn/redact.

OutputGuardProtocol

Protocol for output content guards.

Output guards inspect LLM responses before they are returned to the caller. They can block, warn, or redact content.

property name() -> str

Guard identifier.

async def check(
    content: str,
    *,
    original_input: str | None = None,
    metadata: dict[str, Any] | None = None
) -> Result[GuardResultProtocol, GuardError]

Evaluate output content against this guard’s rules.

Parameters

- `content` (str): The LLM response text to check.
- `original_input` (str | None): The original user input for context.
- `metadata` (dict[str, Any] | None): Optional metadata (model, provider, etc.).

Returns

- Result[GuardResultProtocol, GuardError]: A GuardCheckResult indicating pass/block/warn/redact.

Base class for all input content guards.

Subclasses implement check to evaluate a piece of input content and return a GuardCheckResult.

Parameters

- `action` (str): Default action to take when this guard triggers (`"block"`, `"warn"`, or `"redact"`). Not all guards support all actions.
def __init__(action: str = 'block') -> None

Initialise the guard with a default action.

Parameters

- `action` (str): Action taken when the guard triggers.
property name() -> str

Guard identifier derived from the class name.

property action() -> str

Configured action for this guard.

async def check(
    content: str,
    *,
    messages: list[Any] | None = None,
    metadata: dict[str, Any] | None = None
) -> Result[GuardCheckResult, GuardError]

Evaluate the content and return a result.

Parameters

- `content` (str): Raw text content to evaluate.
- `messages` (list[Any] | None): Optional structured message list for context.
- `metadata` (dict[str, Any] | None): Optional metadata (user_id, model, etc.).

Returns

- Result[GuardCheckResult, GuardError]: Result indicating the outcome.
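
As a sketch of this subclassing contract, consider a hypothetical keyword guard. The base-class name InputGuard is an assumption (this page names only the protocols), Ok is assumed as the counterpart of the Err constructor used in the examples further down, and import lines are omitted because module paths are not shown here:

class ForbiddenPhraseGuard(InputGuard):  # base-class name assumed
    """Hypothetical guard that triggers on a fixed phrase."""

    def __init__(self, phrase: str, action: str = "block") -> None:
        super().__init__(action=action)
        self._phrase = phrase.lower()

    async def check(
        self,
        content: str,
        *,
        messages: list[Any] | None = None,
        metadata: dict[str, Any] | None = None,
    ) -> Result[GuardCheckResult, GuardError]:
        # self.name and self.action come from the base class documented above.
        if self._phrase in content.lower():
            if self.action == "warn":
                return Ok(GuardCheckResult.warn(self.name, reason=f"found {self._phrase!r}"))
            return Ok(GuardCheckResult.block(self.name, reason=f"found {self._phrase!r}"))
        return Ok(GuardCheckResult.allow(self.name))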

Base class for all output content guards.

Subclasses implement check to evaluate LLM response content and return a GuardCheckResult.

Parameters

- `action` (str): Default action to take when this guard triggers (`"block"`, `"warn"`, or `"redact"`).
def __init__(action: str = 'block') -> None

Initialise the guard with a default action.

Parameters

- `action` (str): Action taken when the guard triggers.
property name() -> str

Guard identifier derived from the class name.

property action() -> str

Configured action for this guard.

async def check(
    content: str,
    *,
    original_input: str | None = None,
    metadata: dict[str, Any] | None = None
) -> Result[GuardCheckResult, GuardError]

Evaluate the LLM response and return a result.

Parameters

- `content` (str): LLM response text to evaluate.
- `original_input` (str | None): The original user input for context.
- `metadata` (dict[str, Any] | None): Optional metadata (model, provider, etc.).

Returns

- Result[GuardCheckResult, GuardError]: Result indicating the outcome.
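
The output-side contract mirrors the input sketch above; a hypothetical echo check under the same assumptions (base-class name OutputGuard assumed, Ok mirroring the Err constructor used in this page's examples, imports omitted):

class EchoGuard(OutputGuard):  # base-class name assumed
    """Hypothetical guard that warns when the response parrots the input."""

    async def check(
        self,
        content: str,
        *,
        original_input: str | None = None,
        metadata: dict[str, Any] | None = None,
    ) -> Result[GuardCheckResult, GuardError]:
        if original_input is not None and original_input.strip() == content.strip():
            return Ok(GuardCheckResult.warn(self.name, reason="response echoes the input"))
        return Ok(GuardCheckResult.allow(self.name))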

AggregateGuardResult

Aggregate result from running multiple guards in a pipeline.

Combines results from all guards into a single verdict. The aggregate action is the most severe action from any guard (BLOCK > REDACT > WARN > PASS).

property blocked() -> bool

Whether any guard triggered a block action.

property redacted() -> bool

Whether any guard redacted content.

property warned() -> bool

Whether any guard emitted a warning.

property blocking_result() -> GuardCheckResult | None

Return the first blocking result, or None if none blocked.

def from_results(
    cls,
    results: list[GuardCheckResult],
    original_content: str
) -> AggregateGuardResult

Build an aggregate result from a list of individual results.

The aggregate action uses the most severe outcome. If multiple guards redacted content, only the last redaction is applied (guards should be ordered most-restrictive-first in the pipeline).

Parameters

- `results` (list[GuardCheckResult]): Individual guard check results.
- `original_content` (str): Original input content before any redaction.

Returns

- AggregateGuardResult: Aggregated result.
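
A short sketch of these semantics; the guard names and content are illustrative, and the factory methods are the GuardCheckResult constructors documented below:

results = [
    GuardCheckResult.warn("tone_check", reason="borderline phrasing"),
    GuardCheckResult.redact("pii_check", redacted_content="my SSN is [SSN]", reason="SSN found"),
]
aggregate = AggregateGuardResult.from_results(results, original_content="my SSN is 123-45-6789")
assert aggregate.warned and aggregate.redacted
assert not aggregate.blocked              # REDACT is the most severe action here
assert aggregate.blocking_result is None  # nothing blocked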

Action to take on a guard result.

GuardCheckResult

Result of a single guard evaluation.

Immutable result produced by one InputGuardProtocol or OutputGuardProtocol check.

def allow(
    cls,
    guard_name: str,
    **details: Any
) -> GuardCheckResult

Create a passing guard result.

Parameters

- `guard_name` (str): Guard identifier.
- `**details` (Any): Optional extra diagnostics.

Returns

- GuardCheckResult: Result with action=PASS.
def block(
    cls,
    guard_name: str,
    reason: str,
    **details: Any
) -> GuardCheckResult

Create a blocking guard result.

Parameters

- `guard_name` (str): Guard identifier.
- `reason` (str): Human-readable explanation of why the content was blocked.
- `**details` (Any): Optional extra diagnostics.

Returns

- GuardCheckResult: Result with action=BLOCK.
def warn(
    cls,
    guard_name: str,
    reason: str,
    **details: Any
) -> GuardCheckResult

Create a warning guard result.

Parameters

- `guard_name` (str): Guard identifier.
- `reason` (str): Human-readable explanation of why a warning was emitted.
- `**details` (Any): Optional extra diagnostics.

Returns

- GuardCheckResult: Result with action=WARN (passed=True).
def redact(
    cls,
    guard_name: str,
    redacted_content: str,
    reason: str,
    **details: Any
) -> GuardCheckResult

Create a redacting guard result.

Parameters

- `guard_name` (str): Guard identifier.
- `redacted_content` (str): The sanitised version of the original content.
- `reason` (str): Human-readable explanation of what was redacted.
- `**details` (Any): Optional extra diagnostics.

Returns

- GuardCheckResult: Result with action=REDACT (passed=True).
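
Taken together, the factories map onto the four actions; a quick illustration (the guard name and the score detail are hypothetical):

ok = GuardCheckResult.allow("my_guard", score=0.02)
warned = GuardCheckResult.warn("my_guard", reason="borderline score", score=0.61)
redacted = GuardCheckResult.redact("my_guard", redacted_content="email: [EMAIL]", reason="email found")
blocked = GuardCheckResult.block("my_guard", reason="matched deny-list")

# WARN and REDACT still pass; only BLOCK fails the check.
assert ok.passed and warned.passed and redacted.passed
assert not blocked.passed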

GuardConfig

Configuration for the content safety guard pipeline.

Controls which guards are enabled by default and their behaviour. Guards can be further customised programmatically via GuardPipeline.

Attributes

- `enabled`: Master switch — when False the guard pipeline is bypassed entirely.
- `injection_detection`: Enable the prompt injection detector.
- `injection_action`: Action on detected injection ("block"/"warn").
- `pii_detection`: Enable PII detection on inputs.
- `pii_action`: Action on detected PII ("redact"/"block"/"warn").
- `pii_entities`: PII entity types to detect. Empty list means all types.
- `pii_redaction_output`: Enable PII redaction on LLM outputs.
- `max_input_chars`: Maximum allowed input length in characters. 0 disables the length guard.
- `max_output_chars`: Maximum allowed output length in characters. 0 disables the output length guard.
- `length_action`: Action when length limit exceeded ("block"/"warn").
- `restricted_topics`: Topics to block in user inputs.
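
A construction sketch using only the attributes listed above; the values are illustrative, and omitted fields keep whatever defaults the library ships:

config = GuardConfig(
    enabled=True,
    injection_detection=True,
    injection_action="block",
    pii_detection=True,
    pii_action="redact",
    pii_entities=[],          # empty list: detect all supported entity types
    max_input_chars=8000,     # 0 would disable the input length guard
    max_output_chars=0,       # output length guard disabled
    length_action="warn",
    restricted_topics=["weapons", "explosives"],
)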


Payload fired after an input guard evaluates content.

GuardModule

Module for the content safety guard pipeline.

Registers the GuardProvider which builds and wires the configured guard pipeline into the container.

Usage

app = LexigramApplication(
    modules=[..., GuardModule()],
)

Or with custom config:

app = LexigramApplication(
    modules=[
        ...,
        GuardModule(
            config=GuardConfig(
                injection_detection=True,
                pii_action="redact",
                max_input_chars=8000,
            )
        ),
    ]
)
def configure(
    cls,
    config: GuardConfig | None = None,
    enable_audit_logging: bool = True,
    **kwargs: Any
) -> DynamicModule

Create a GuardModule with explicit configuration.

Parameters

- `config` (GuardConfig | None): GuardConfig or `None` for defaults.
- `enable_audit_logging` (bool): Emit structured audit log entries for every guard decision (allow, block, redact). Defaults to `True`; set to `False` to reduce log volume in high-throughput environments.
- `**kwargs` (Any): Additional keyword arguments forwarded to GuardProvider.

Returns

- DynamicModule: A DynamicModule descriptor.
def stub(
    cls,
    config: GuardConfig | None = None
) -> DynamicModule

Create a GuardModule suitable for unit and integration testing.

Uses a pass-through (allow-all) guard pipeline with no external API calls. Audit logging is disabled by default to keep test output clean.

Parameters

- `config` (GuardConfig | None): Optional GuardConfig override. Uses safe test defaults when `None`.

Returns

- DynamicModule: A DynamicModule descriptor.
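
For example, as a test fixture (pytest here is an assumption; the LexigramApplication wiring mirrors the usage examples above):

import pytest

@pytest.fixture
def app() -> LexigramApplication:
    # Pass-through pipeline: every check allows, no external API calls, audit logging off.
    return LexigramApplication(modules=[GuardModule.stub()])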

Payload fired after an output guard evaluates content.

GuardPipeline

Ordered chain of input and output guards.

Guards are evaluated in order. For input, content flows through each guard; if a guard redacts content, the redacted version is passed to subsequent guards. A BLOCK stops evaluation immediately.

Parameters

- `input_guards` (list[InputGuardProtocol] | None): Guards to run on user inputs before sending to the LLM.
- `output_guards` (list[OutputGuardProtocol] | None): Guards to run on LLM outputs before returning to the caller.

Example

pipeline = GuardPipeline(
    input_guards=[
        PromptInjectionDetector(action="block"),
        PIIDetector(action="redact"),
    ],
    output_guards=[
        PIIRedactor(entities=["SSN", "CREDIT_CARD"]),
        OutputLengthGuard(max_chars=10000, action="block"),
    ],
)
result = await pipeline.check_input(user_message)
if result.blocked:
    return Err(GuardViolationError(result))
safe_input = result.final_content or user_message
def __init__(
    input_guards: list[InputGuardProtocol] | None = None,
    output_guards: list[OutputGuardProtocol] | None = None
) -> None

Initialise the guard pipeline.

Parameters

- `input_guards` (list[InputGuardProtocol] | None): Ordered list of input guards to apply.
- `output_guards` (list[OutputGuardProtocol] | None): Ordered list of output guards to apply.
def add_input_guard(guard: InputGuardProtocol) -> None

Append an input guard to the pipeline.

Parameters

- `guard` (InputGuardProtocol): Guard to add.
def add_output_guard(guard: OutputGuardProtocol) -> None

Append an output guard to the pipeline.

Parameters

- `guard` (OutputGuardProtocol): Guard to add.
async def check_input(
    content: str,
    *,
    messages: list[Any] | None = None,
    metadata: dict[str, Any] | None = None,
    parallel: bool = False
) -> Result[AggregateGuardResult, GuardError]

Run all input guards against the content.

Guards are applied in order. Redacted content is forwarded to subsequent guards. A BLOCK terminates evaluation immediately. If parallel is True, all guards are run concurrently and redaction chaining is disabled (each operates on the original content).

Parameters

- `content` (str): Raw input text to evaluate.
- `messages` (list[Any] | None): Optional structured message list for context.
- `metadata` (dict[str, Any] | None): Optional request metadata (user_id, model, etc.).
- `parallel` (bool): Whether to run guards concurrently with asyncio.gather.

Returns

- Result[AggregateGuardResult, GuardError]: Ok(AggregateGuardResult) combining all individual guard outcomes, or Err(GuardError) if a guard fails unexpectedly.
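
Continuing the pipeline example above: when every input guard is read-only (no redaction) and ordering does not matter, the concurrent mode can reduce latency. The metadata keys here are illustrative:

result = await pipeline.check_input(
    user_message,
    metadata={"user_id": "u-123"},
    parallel=True,  # asyncio.gather; redaction chaining is disabled
)
if result.blocked:
    ...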
async def check_output(
    content: str,
    *,
    original_input: str | None = None,
    metadata: dict[str, Any] | None = None,
    parallel: bool = False
) -> Result[AggregateGuardResult, GuardError]

Run all output guards against the LLM response.

Parameters

- `content` (str): LLM response text to evaluate.
- `original_input` (str | None): The original user input for context.
- `metadata` (dict[str, Any] | None): Optional request metadata (model, provider, etc.).
- `parallel` (bool): Whether to run guards concurrently.

Returns

- Result[AggregateGuardResult, GuardError]: Ok(AggregateGuardResult) combining all individual guard outcomes.

Payload fired after the guard pipeline finishes a full pass.

GuardProvider

Provider for the content safety guard pipeline.

Reads GuardConfig, builds a GuardPipeline with the configured guards, and registers it as a singleton.

When GuardConfig.enabled is False, a no-op pipeline with no guards is registered (all checks pass through).

When GuardConfig.enable_llm_guards is True and an LLMClientProtocol is available in the container (resolved optionally during boot), LLM-based injection and jailbreak detectors are appended to the pipeline.

def __init__(
    config: GuardConfig | None = None,
    enable_audit_logging: bool = True,
    **kwargs: Any
) -> None
async def register(container: ContainerRegistrarProtocol) -> None

Register the guard pipeline with the DI container.

async def boot(container: ContainerResolverProtocol) -> None

Boot phase — optionally attach LLM-based guards if configured.

Resolves LLMClientProtocol from the container when GuardConfig.enable_llm_guards is True. If the client is not registered, LLM guards are silently skipped.
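
A sketch of opting in; enable_llm_guards is the GuardConfig flag referenced above, and the module wiring mirrors the GuardModule usage examples:

app = LexigramApplication(
    modules=[
        ...,
        GuardModule(config=GuardConfig(enable_llm_guards=True)),
    ]
)
# Without an LLMClientProtocol in the container, the LLM-based guards are skipped silently.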

async def shutdown() -> None

Shutdown phase — no cleanup required for guard pipeline.

async def health_check(timeout: float = 5.0) -> HealthCheckResult

Health check — always healthy (in-process domain provider).

No external backend to ping.

Parameters

- `timeout` (float): Ignored for in-process providers.

Returns

- HealthCheckResult: Always HEALTHY — no external backend to ping.

Emitted when an input guard check is triggered (blocked or flagged).

Consumed by: safety monitoring, audit, security review.


InputLengthGuard

Guard that enforces a maximum input length.

Length is measured in characters. For a rough token estimate, divide by 4 (the GPT tokenizer averages about four characters per token); max_chars=8000 corresponds to roughly 2,000 tokens, for example.

Parameters

- `max_chars` (int): Maximum allowed character count.
- `action` (str): Action when the limit is exceeded — `"block"` (default) or `"warn"`.

Example

guard = InputLengthGuard(max_chars=8000, action="block")
result = await guard.check(user_input)
if not result.passed:
    return Err(InputTooLongError(len(user_input)))
def __init__(
    max_chars: int,
    action: str = 'block'
) -> None

Initialise the length guard.

Parameters

- `max_chars` (int): Maximum allowed character count.
- `action` (str): `"block"` or `"warn"`.
async def check(
    content: str,
    *,
    messages: list[Any] | None = None,
    metadata: dict[str, Any] | None = None
) -> Result[GuardCheckResult, GuardError]

Check whether content exceeds the configured length limit.

Parameters

- `content` (str): Input text to measure.
- `messages` (list[Any] | None): Unused — present for protocol compatibility.
- `metadata` (dict[str, Any] | None): Optional metadata.

Returns

- Result[GuardCheckResult, GuardError]: Ok(GuardCheckResult) with PASS if within the limit; BLOCK or WARN if exceeded.

Emitted when an output guard check is triggered (blocked or flagged).

Consumed by: safety monitoring, audit, security review.


OutputLengthGuard

Guard that enforces a maximum LLM response length.

Parameters

- `max_chars` (int): Maximum allowed character count in the response.
- `action` (str): Action when the limit is exceeded — `"block"` (default) or `"warn"`.

Example

guard = OutputLengthGuard(max_chars=50000, action="warn")
result = await guard.check(llm_response)
def __init__(
    max_chars: int,
    action: str = 'block'
) -> None

Initialise the output length guard.

Parameters

- `max_chars` (int): Maximum allowed character count.
- `action` (str): `"block"` or `"warn"`.
async def check(
    content: str,
    *,
    original_input: str | None = None,
    metadata: dict[str, Any] | None = None
) -> Result[GuardCheckResult, GuardError]

Check whether the response exceeds the configured length.

Parameters

- `content` (str): LLM response text to measure.
- `original_input` (str | None): Unused — present for protocol compatibility.
- `metadata` (dict[str, Any] | None): Optional metadata.

Returns

- Result[GuardCheckResult, GuardError]: Ok(GuardCheckResult) with PASS if within the limit; BLOCK or WARN if exceeded.

PIIDetector

Input guard that detects and optionally redacts PII.

Parameters

- `action` (str): `"redact"` (default), `"block"`, or `"warn"`.
- `entities` (list[str] | None): List of PII entity types to detect. Defaults to all supported types.

Example

guard = PIIDetector(action="redact", entities=["EMAIL", "SSN"])
result = await guard.check(user_message)
safe_content = result.redacted_content or user_message
def __init__(
    action: str = 'redact',
    entities: list[str] | None = None
) -> None

Initialise the PII detector.

Parameters

- `action` (str): Action when PII is found — `"redact"`, `"block"`, or `"warn"`.
- `entities` (list[str] | None): PII entity types to scan for. Defaults to all types.
async def check(
    content: str,
    *,
    messages: list[Any] | None = None,
    metadata: dict[str, Any] | None = None
) -> Result[GuardCheckResult, GuardError]

Scan content for PII and apply the configured action.

Parameters

- `content` (str): Input text to scan.
- `messages` (list[Any] | None): Unused — present for protocol compatibility.
- `metadata` (dict[str, Any] | None): Optional metadata.

Returns

- Result[GuardCheckResult, GuardError]: Ok(GuardCheckResult) with PASS if no PII detected; otherwise the configured action result.

PIIRedactor

Output guard that redacts PII from LLM responses.

Parameters

- `entities` (list[str] | None): List of PII entity types to redact. Defaults to all supported types.
- `action` (str): `"redact"` (default) or `"block"` (rejects any response containing PII rather than sanitising it).

Example

guard = PIIRedactor(entities=["SSN", "CREDIT_CARD", "EMAIL"])
result = await guard.check(llm_response)
safe_response = result.final_content or llm_response
def __init__(
    entities: list[str] | None = None,
    action: str = 'redact'
) -> None

Initialise the PII redactor.

Parameters

- `entities` (list[str] | None): PII types to redact. Defaults to all types.
- `action` (str): `"redact"` or `"block"`.
async def check(
    content: str,
    *,
    original_input: str | None = None,
    metadata: dict[str, Any] | None = None
) -> Result[GuardCheckResult, GuardError]

Scan LLM response for PII and apply the configured action.

Parameters

- `content` (str): LLM response text to scan.
- `original_input` (str | None): Unused — present for protocol compatibility.
- `metadata` (dict[str, Any] | None): Optional metadata.

Returns

- Result[GuardCheckResult, GuardError]: Ok(GuardCheckResult) with PASS if no PII; REDACT with sanitised content or BLOCK otherwise.

PromptInjectionDetector

Heuristic detector for prompt injection and jailbreak attempts.

Blocks or warns on content that attempts to override system instructions, break out of a persona, or exfiltrate the system prompt.

Parameters

- `action` (str): Action when injection is detected — `"block"` (default) or `"warn"`.

Example

guard = PromptInjectionDetector(action="block")
result = await guard.check(user_message)
if not result.passed:
    return Err(InjectionAttemptError())
def __init__(action: str = 'block') -> None

Initialise the injection detector.

Parameters

- `action` (str): `"block"` or `"warn"`.
async def check(
    content: str,
    *,
    messages: list[Any] | None = None,
    metadata: dict[str, Any] | None = None
) -> Result[GuardCheckResult, GuardError]

Evaluate content for prompt injection attempts.

Parameters

- `content` (str): User-supplied text to check.
- `messages` (list[Any] | None): Optional structured messages for context.
- `metadata` (dict[str, Any] | None): Optional request metadata.

Returns

- Result[GuardCheckResult, GuardError]: Ok(GuardCheckResult) — BLOCK or WARN if injection detected, PASS otherwise.

TopicRestrictor

Input guard that blocks prohibited topics.

Parameters

- `restricted_topics` (list[str]): List of topic keywords/phrases to prohibit.
- `action` (str): Action when a restricted topic is found — `"block"` (default) or `"warn"`.
- `whole_word` (bool): If `True` (default), only match whole words / phrase boundaries to reduce false positives.

Example

guard = TopicRestrictor(
    restricted_topics=["weapons", "explosives", "self-harm"],
    action="block",
)
result = await guard.check(user_message)
def __init__(
    restricted_topics: list[str],
    action: str = 'block',
    whole_word: bool = True
) -> None

Initialise the topic restrictor.

Parameters

- `restricted_topics` (list[str]): List of prohibited topic keywords/phrases.
- `action` (str): `"block"` or `"warn"`.
- `whole_word` (bool): Enforce word boundaries on matches.
async def check(
    content: str,
    *,
    messages: list[Any] | None = None,
    metadata: dict[str, Any] | None = None
) -> Result[GuardCheckResult, GuardError]

Check content against restricted topics.

Parameters

- `content` (str): Input text to evaluate.
- `messages` (list[Any] | None): Unused — present for protocol compatibility.
- `metadata` (dict[str, Any] | None): Optional metadata.

Returns

- Result[GuardCheckResult, GuardError]: Ok(GuardCheckResult) with PASS if no restricted topics found; BLOCK or WARN otherwise.

def guarded(
    input_guards: list[InputGuardProtocol] | None = None,
    output_guards: list[OutputGuardProtocol] | None = None
) -> Callable[[F], F]

Apply input and/or output safety guards to the decorated async function.

Guards are attached as metadata on the wrapper. The actual guard pipeline execution is handled by the GuardPipelineProtocol implementation resolved from the container — this decorator is a marker/configuration mechanism.

Parameters

- `input_guards` (list[InputGuardProtocol] | None): Guards to evaluate against string inputs before calling.
- `output_guards` (list[OutputGuardProtocol] | None): Guards to evaluate against string output after calling.

Returns

- Callable[[F], F]: A decorator that attaches guard metadata to the wrapped coroutine.

Example

@guarded(
    input_guards=[PromptInjectionGuard()],
    output_guards=[PIIFilterGuard()],
)
async def chat(prompt: str) -> str:
    ...


Raised when a guard is misconfigured at boot time.

GuardError

Base exception for all guard-related errors.

Raised when the guard pipeline encounters an unrecoverable error.