API Reference
Protocols
GuardPipelineProtocol
Protocol for guard pipeline execution.
Orchestrates a chain of input and/or output guards, collecting results and determining the aggregate action.
async def check_input( content: str, *, messages: list[Any] | None = None, metadata: dict[str, Any] | None = None ) -> Result[GuardResultProtocol, GuardError]
Run all input guards against the content.
| Parameter | Type | Description |
|---|---|---|
| `content` | str | Input text to guard. |
| `messages` | list[Any] | None | Optional structured messages. |
| `metadata` | dict[str, Any] | None | Optional request metadata. |
| Type | Description |
|---|---|
| Result[GuardResultProtocol, GuardError] | Aggregate guard result. |
async def check_output( content: str, *, original_input: str | None = None, metadata: dict[str, Any] | None = None ) -> Result[GuardResultProtocol, GuardError]
Run all output guards against the content.
| Parameter | Type | Description |
|---|---|---|
| `content` | str | Output text to guard. |
| `original_input` | str | None | The original user input. |
| `metadata` | dict[str, Any] | None | Optional request metadata. |
| Type | Description |
|---|---|
| Result[GuardResultProtocol, GuardError] | Aggregate guard result. |
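As a rough usage sketch, a pipeline satisfying this protocol can gate a prompt before it reaches the model. The function below is illustrative: the metadata keys are made up, and the direct `blocked`/`final_content` attribute access on the returned result mirrors the GuardPipeline example later on this page rather than a confirmed API.

```python
# Hypothetical sketch — not verbatim library code.
async def guard_prompt(pipeline: "GuardPipelineProtocol", user_message: str) -> str:
    result = await pipeline.check_input(
        user_message,
        metadata={"user_id": "u-123"},  # illustrative metadata keys
    )
    if result.blocked:
        # At least one input guard returned a BLOCK action.
        raise ValueError("input rejected by guard pipeline")
    # Prefer the redacted text if any guard sanitised the input.
    return result.final_content or user_message
```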
GuardResultProtocol
Protocol for guard evaluation results.
Every guard returns a result indicating whether the content passed, was blocked, or triggered a warning.
Whether the guard check passed.
Action taken: ‘pass’, ‘block’, ‘warn’, or ‘redact’.
Name of the guard that produced this result.
Additional details about the guard evaluation.
InputGuardProtocol
Protocol for input content guards.
Input guards inspect user prompts and messages before they are sent to an LLM. They can block, warn, or redact content.
Guard identifier.
async def check( content: str, *, messages: list[Any] | None = None, metadata: dict[str, Any] | None = None ) -> Result[GuardResultProtocol, GuardError]
Evaluate input content against this guard’s rules.
| Parameter | Type | Description |
|---|---|---|
| `content` | str | The raw text content to check. |
| `messages` | list[Any] | None | Optional structured message list for context. |
| `metadata` | dict[str, Any] | None | Optional metadata (user_id, model, etc.). |
| Type | Description |
|---|---|
| Result[GuardResultProtocol, GuardError] | A GuardCheckResult indicating pass/block/warn/redact. |
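As a sketch, any object with a matching async `check` method and a guard identifier can act as an input guard. The class below is hypothetical: the `Ok(...)` wrapper, the import paths, and the `name` attribute are assumptions, while the `GuardCheckResult.allow`/`block` factories are documented further down this page.

```python
from typing import Any

# Hypothetical custom guard — Ok and GuardCheckResult imports are assumed,
# adjust to your installation.
class KeywordGuard:
    name = "keyword_guard"  # guard identifier (exact attribute name is assumed)

    def __init__(self, banned: list[str]) -> None:
        self._banned = [word.lower() for word in banned]

    async def check(
        self,
        content: str,
        *,
        messages: list[Any] | None = None,
        metadata: dict[str, Any] | None = None,
    ):
        lowered = content.lower()
        for word in self._banned:
            if word in lowered:
                # Block as soon as a banned keyword is found.
                return Ok(GuardCheckResult.block(self.name, reason=f"banned keyword: {word}"))
        return Ok(GuardCheckResult.allow(self.name))
```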
OutputGuardProtocol
Protocol for output content guards.
Output guards inspect LLM responses before they are returned to the caller. They can block, warn, or redact content.
Guard identifier.
async def check( content: str, *, original_input: str | None = None, metadata: dict[str, Any] | None = None ) -> Result[GuardResultProtocol, GuardError]
Evaluate output content against this guard’s rules.
| Parameter | Type | Description |
|---|---|---|
| `content` | str | The LLM response text to check. |
| `original_input` | str | None | The original user input for context. |
| `metadata` | dict[str, Any] | None | Optional metadata (model, provider, etc.). |
| Type | Description |
|---|---|
| Result[GuardResultProtocol, GuardError] | A GuardCheckResult indicating pass/block/warn/redact. |
Classes
AbstractInputGuard
Base class for all input content guards.
Subclasses implement check to evaluate a piece of input content and return a GuardCheckResult.
| Parameter | Type | Description |
|---|---|---|
| `action` | str | Default action to take when this guard triggers (``"block"``, ``"warn"``, or ``"redact"``). Not all guards support all actions. |
Initialise the guard with a default action.
| Parameter | Type | Description |
|---|---|---|
| `action` | str | Action taken when the guard triggers. |
Guard identifier derived from the class name.
Configured action for this guard.
async def check( content: str, *, messages: list[Any] | None = None, metadata: dict[str, Any] | None = None ) -> Result[GuardCheckResult, GuardError]
Evaluate the content and return a result.
| Parameter | Type | Description |
|---|---|---|
| `content` | str | Raw text content to evaluate. |
| `messages` | list[Any] | None | Optional structured message list for context. |
| `metadata` | dict[str, Any] | None | Optional metadata (user_id, model, etc.). |
| Type | Description |
|---|---|
| Result[GuardCheckResult, GuardError] | Result[GuardCheckResult, GuardError] indicating the outcome. |
AbstractOutputGuard
Base class for all output content guards.
Subclasses implement check to evaluate LLM response content and return a GuardCheckResult.
| Parameter | Type | Description |
|---|---|---|
| `action` | str | Default action to take when this guard triggers (``"block"``, ``"warn"``, or ``"redact"``). |
Initialise the guard with a default action.
| Parameter | Type | Description |
|---|---|---|
| `action` | str | Action taken when the guard triggers. |
Guard identifier derived from the class name.
Configured action for this guard.
async def check( content: str, *, original_input: str | None = None, metadata: dict[str, Any] | None = None ) -> Result[GuardCheckResult, GuardError]
Evaluate the LLM response and return a result.
| Parameter | Type | Description |
|---|---|---|
| `content` | str | LLM response text to evaluate. |
| `original_input` | str | None | The original user input for context. |
| `metadata` | dict[str, Any] | None | Optional metadata (model, provider, etc.). |
| Type | Description |
|---|---|
| Result[GuardCheckResult, GuardError] | Result[GuardCheckResult, GuardError] indicating the outcome. |
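A small subclass sketch follows. It assumes `AbstractOutputGuard`, `GuardCheckResult`, and an `Ok(...)` result wrapper are importable from the package (import paths are not shown in this reference), and the disclaimer behaviour itself is purely illustrative.

```python
from typing import Any

# Hypothetical subclass — the guard's behaviour and the Ok(...) wrapper are assumptions.
class DisclaimerGuard(AbstractOutputGuard):
    """Warn when an LLM response is missing a required disclaimer."""

    def __init__(self, disclaimer: str, action: str = "warn") -> None:
        super().__init__(action=action)
        self._disclaimer = disclaimer

    async def check(
        self,
        content: str,
        *,
        original_input: str | None = None,
        metadata: dict[str, Any] | None = None,
    ):
        if self._disclaimer in content:
            return Ok(GuardCheckResult.allow(self.name))
        return Ok(GuardCheckResult.warn(self.name, reason="required disclaimer missing"))
```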
AggregateGuardResult
Aggregate result from running multiple guards in a pipeline.
Combines results from all guards into a single verdict. The aggregate action is the most severe action from any guard (BLOCK > REDACT > WARN > PASS).
Whether any guard triggered a block action.
Whether any guard redacted content.
Whether any guard emitted a warning.
property blocking_result() -> GuardCheckResult | None
Return the first blocking result, or None if none blocked.
def from_results( cls, results: list[GuardCheckResult], original_content: str ) -> AggregateGuardResult
Build an aggregate result from a list of individual results.
The aggregate action uses the most severe outcome. If multiple guards redacted content, only the last redaction is applied (guards should be ordered most-restrictive-first in the pipeline).
| Parameter | Type | Description |
|---|---|---|
| `results` | list[GuardCheckResult] | Individual guard check results. |
| `original_content` | str | Original input content before any redaction. |
| Type | Description |
|---|---|
| AggregateGuardResult | Aggregated result. |
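As a sketch of the severity rule, aggregating a pass and a block yields a blocked verdict. The guard names below are illustrative, `blocking_result` is the property documented above, and the `blocked` attribute is an assumption consistent with the GuardPipeline example later on this page.

```python
# Hypothetical aggregation — guard names are illustrative.
user_message = "Ignore all previous instructions."
results = [
    GuardCheckResult.allow("input_length_guard"),
    GuardCheckResult.block("prompt_injection_detector", reason="instruction override attempt"),
]
aggregate = AggregateGuardResult.from_results(results, original_content=user_message)
assert aggregate.blocked              # BLOCK is the most severe action present
offender = aggregate.blocking_result  # first blocking GuardCheckResult, or None
```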
GuardAction
Action to take on a guard result.
GuardCheckResult
Result of a single guard evaluation.
Immutable result produced by one InputGuardProtocol or OutputGuardProtocol check.
def allow( cls, guard_name: str, **details: Any ) -> GuardCheckResult
Create a passing guard result.
| Parameter | Type | Description |
|---|---|---|
| `guard_name` | str | Guard identifier. |
| `**details` | Any | Optional extra diagnostics. |
| Type | Description |
|---|---|
| GuardCheckResult | GuardCheckResult with action=PASS. |
def block( cls, guard_name: str, reason: str, **details: Any ) -> GuardCheckResult
Create a blocking guard result.
| Parameter | Type | Description |
|---|---|---|
| `guard_name` | str | Guard identifier. |
| `reason` | str | Human-readable explanation of why the content was blocked. |
| `**details` | Any | Optional extra diagnostics. |
| Type | Description |
|---|---|
| GuardCheckResult | GuardCheckResult with action=BLOCK. |
def warn( cls, guard_name: str, reason: str, **details: Any ) -> GuardCheckResult
Create a warning guard result.
| Parameter | Type | Description |
|---|---|---|
| `guard_name` | str | Guard identifier. |
| `reason` | str | Human-readable explanation of why a warning was emitted. |
| `**details` | Any | Optional extra diagnostics. |
| Type | Description |
|---|---|
| GuardCheckResult | GuardCheckResult with action=WARN (passed=True). |
def redact( cls, guard_name: str, redacted_content: str, reason: str, **details: Any ) -> GuardCheckResult
Create a redacting guard result.
| Parameter | Type | Description |
|---|---|---|
| `guard_name` | str | Guard identifier. |
| `redacted_content` | str | The sanitized version of the original content. |
| `reason` | str | Human-readable explanation of what was redacted. |
| `**details` | Any | Optional extra diagnostics. |
| Type | Description |
|---|---|
| GuardCheckResult | GuardCheckResult with action=REDACT (passed=True). |
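For instance, a guard that sanitises a social security number could report it through the `redact` factory documented above. The guard name and the `entity_type` detail key below are illustrative; the extra keyword is simply forwarded via `**details`.

```python
# Illustrative redaction report using the factory documented above.
content = "My SSN is 123-45-6789."
masked = content.replace("123-45-6789", "[REDACTED_SSN]")
result = GuardCheckResult.redact(
    "pii_detector",            # guard_name
    redacted_content=masked,
    reason="SSN redacted",
    entity_type="SSN",         # arbitrary diagnostic forwarded via **details
)
```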
GuardConfig
Configuration for the content safety guard pipeline.
Controls which guards are enabled by default and their behaviour. Guards can be further customised programmatically via GuardPipeline.
| Attribute | Description |
|---|---|
| `enabled` | Master switch — when ``False`` the guard pipeline is bypassed entirely. |
| `injection_detection` | Enable the prompt injection detector. |
| `injection_action` | Action on detected injection (``"block"``/``"warn"``). |
| `pii_detection` | Enable PII detection on inputs. |
| `pii_action` | Action on detected PII (``"redact"``/``"block"``/``"warn"``). |
| `pii_entities` | PII entity types to detect. An empty list means all types. |
| `pii_redaction_output` | Enable PII redaction on LLM outputs. |
| `max_input_chars` | Maximum allowed input length in characters. ``0`` disables the length guard. |
| `max_output_chars` | Maximum allowed output length in characters. ``0`` disables the output length guard. |
| `length_action` | Action when a length limit is exceeded (``"block"``/``"warn"``). |
| `restricted_topics` | Topics to block in user inputs. |
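A configuration sketch using the attributes above; keyword construction and any defaults not listed in this reference are assumptions.

```python
# Hypothetical GuardConfig — field values are illustrative.
config = GuardConfig(
    enabled=True,
    injection_detection=True,
    injection_action="block",
    pii_detection=True,
    pii_action="redact",
    pii_entities=["EMAIL", "SSN"],   # empty list would mean all entity types
    max_input_chars=8000,            # 0 disables the input length guard
    max_output_chars=0,              # 0 disables the output length guard
    length_action="block",
    restricted_topics=["weapons"],
)
```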
GuardInputCheckedHook
Payload fired after an input guard evaluates content.
GuardModule
Module for content safety guard pipeline.
Registers the GuardProvider which builds and wires the configured guard pipeline into the container.
Usage
    app = LexigramApplication(
        modules=[..., GuardModule()],
    )

Or with custom config:

    app = LexigramApplication(
        modules=[
            ...,
            GuardModule(
                config=GuardConfig(
                    injection_detection=True,
                    pii_action="redact",
                    max_input_chars=8000,
                )
            )
        ]
    )

def configure( cls, config: GuardConfig | None = None, enable_audit_logging: bool = True, **kwargs: Any ) -> DynamicModule
Create a GuardModule with explicit configuration.
| Parameter | Type | Description |
|---|---|---|
| `config` | GuardConfig | None | GuardConfig or ``None`` for defaults. |
| `enable_audit_logging` | bool | Emit structured audit log entries for every guard decision (allow, block, redact). Defaults to ``True``; set to ``False`` to reduce log volume in high-throughput environments. |
| `**kwargs` | Any | Additional keyword arguments forwarded to GuardProvider. |
| Type | Description |
|---|---|
| DynamicModule | A DynamicModule descriptor. |
def stub( cls, config: GuardConfig | None = None ) -> DynamicModule
Create a GuardModule suitable for unit and integration testing.
Uses a pass-through (allow-all) guard pipeline with no external API calls. Audit logging is disabled by default to keep test output clean.
| Parameter | Type | Description |
|---|---|---|
| `config` | GuardConfig | None | Optional GuardConfig override. Uses safe test defaults when ``None``. |
| Type | Description |
|---|---|
| DynamicModule | A DynamicModule descriptor. |
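A test-setup sketch mirroring the Usage example above; the surrounding application wiring and other modules are assumptions.

```python
# Hypothetical test wiring — other modules elided.
app = LexigramApplication(
    modules=[
        ...,                   # the rest of the application's modules
        GuardModule.stub(),    # allow-all pipeline, no external calls, quiet logs
    ],
)
```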
GuardOutputCheckedHook
Payload fired after an output guard evaluates content.
GuardPipeline
Ordered chain of input and output guards.
Guards are evaluated in order. For input, content flows through each guard; if a guard redacts content, the redacted version is passed to subsequent guards. A BLOCK stops evaluation immediately.
| Parameter | Type | Description |
|---|---|---|
| `input_guards` | list[InputGuardProtocol] | None | Guards to run on user inputs before sending to LLM. |
| `output_guards` | list[OutputGuardProtocol] | None | Guards to run on LLM outputs before returning to caller. |
Example
    pipeline = GuardPipeline(
        input_guards=[
            PromptInjectionDetector(action="block"),
            PIIDetector(action="redact"),
        ],
        output_guards=[
            PIIRedactor(entities=["SSN", "CREDIT_CARD"]),
            LengthGuard(max_chars=10000, action="block"),
        ],
    )

    result = await pipeline.check_input(user_message)
    if result.blocked:
        return Err(GuardViolationError(result))
    safe_input = result.final_content or user_message

def __init__( input_guards: list[InputGuardProtocol] | None = None, output_guards: list[OutputGuardProtocol] | None = None ) -> None
Initialise the guard pipeline.
| Parameter | Type | Description |
|---|---|---|
| `input_guards` | list[InputGuardProtocol] | None | Ordered list of input guards to apply. |
| `output_guards` | list[OutputGuardProtocol] | None | Ordered list of output guards to apply. |
def add_input_guard(guard: InputGuardProtocol) -> None
Append an input guard to the pipeline.
| Parameter | Type | Description |
|---|---|---|
| `guard` | InputGuardProtocol | Guard to add. |
def add_output_guard(guard: OutputGuardProtocol) -> None
Append an output guard to the pipeline.
| Parameter | Type | Description |
|---|---|---|
| `guard` | OutputGuardProtocol | Guard to add. |
async def check_input( content: str, *, messages: list[Any] | None = None, metadata: dict[str, Any] | None = None, parallel: bool = False ) -> Result[AggregateGuardResult, GuardError]
Run all input guards against the content.
Guards are applied in order. Redacted content is forwarded to subsequent guards. A BLOCK terminates evaluation immediately.
If parallel is True, all guards are run concurrently and redaction chaining is disabled (each operates on the original content).
| Parameter | Type | Description |
|---|---|---|
| `content` | str | Raw input text to evaluate. |
| `messages` | list[Any] | None | Optional structured message list for context. |
| `metadata` | dict[str, Any] | None | Optional request metadata (user_id, model, etc.). |
| `parallel` | bool | Whether to run guards concurrently with asyncio.gather. |
| Type | Description |
|---|---|
| Result[AggregateGuardResult, GuardError] | Ok(AggregateGuardResult) combining all individual guard outcomes, or Err(GuardError) if a guard fails unexpectedly. |
async def check_output( content: str, *, original_input: str | None = None, metadata: dict[str, Any] | None = None, parallel: bool = False ) -> Result[AggregateGuardResult, GuardError]
Run all output guards against the LLM response.
| Parameter | Type | Description |
|---|---|---|
| `content` | str | LLM response text to evaluate. |
| `original_input` | str | None | The original user input for context. |
| `metadata` | dict[str, Any] | None | Optional request metadata (model, provider, etc.). |
| `parallel` | bool | Whether to run guards concurrently. |
| Type | Description |
|---|---|
| Result[AggregateGuardResult, GuardError] | Ok(AggregateGuardResult) combining all individual guard outcomes. |
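Continuing the pipeline example above, an output check might look like the following sketch. The metadata keys are illustrative, and the direct attribute access on the result is an assumption consistent with the other examples on this page.

```python
# Hypothetical output check — llm_response and user_message come from the caller.
result = await pipeline.check_output(
    llm_response,
    original_input=user_message,
    metadata={"model": "gpt-4o"},   # illustrative metadata keys
)
safe_response = result.final_content or llm_response
```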
GuardPipelineCompletedHook
Payload fired after the guard pipeline finishes a full pass.
GuardProvider
Provider for the content safety guard pipeline.
Reads GuardConfig, builds a GuardPipeline with the configured guards, and registers it as a singleton.
When GuardConfig.enabled is False, a no-op pipeline with no guards is registered (all checks pass through).
When GuardConfig.enable_llm_guards is True and an LLMClientProtocol is available in the container (resolved optionally during boot), LLM-based injection and jailbreak detectors are appended to the pipeline.
def __init__( config: GuardConfig | None = None, enable_audit_logging: bool = True, **kwargs: Any ) -> None
async def register(container: ContainerRegistrarProtocol) -> None
Register the guard pipeline with the DI container.
async def boot(container: ContainerResolverProtocol) -> None
Boot phase — optionally attach LLM-based guards if configured.
Resolves LLMClientProtocol from the container when GuardConfig.enable_llm_guards is True. If the client is not registered, LLM guards are silently skipped.
Shutdown phase — no cleanup required for guard pipeline.
Health check — always healthy (in-process domain provider).
No external backend to ping.
| Parameter | Type | Description |
|---|---|---|
| `timeout` | float | Ignored for in-process providers. |
| Type | Description |
|---|---|
| HealthCheckResult | Always HEALTHY — no external backend to ping. |
InputGuardTriggeredEvent
Emitted when an input guard check is triggered (blocked or flagged).
Consumed by: safety monitoring, audit, security review.
InputLengthGuard
Guard that enforces a maximum input length.
Length is measured in characters. For a rough token estimate, divide by 4 (the GPT tokenizer average).
| Parameter | Type | Description |
|---|---|---|
| `max_chars` | int | Maximum allowed character count. |
| `action` | str | Action when the limit is exceeded — ``"block"`` (default) or ``"warn"``. |
Example
    guard = InputLengthGuard(max_chars=8000, action="block")
    result = await guard.check(user_input)
    if not result.passed:
        return Err(InputTooLongError(len(user_input)))

Initialise the length guard.
| Parameter | Type | Description |
|---|---|---|
| `max_chars` | int | Maximum allowed character count. |
| `action` | str | ``"block"`` or ``"warn"``. |
async def check( content: str, *, messages: list[Any] | None = None, metadata: dict[str, Any] | None = None ) -> Result[GuardCheckResult, GuardError]
Check whether content exceeds the configured length limit.
| Parameter | Type | Description |
|---|---|---|
| `content` | str | Input text to measure. |
| `messages` | list[Any] | None | Unused — present for protocol compatibility. |
| `metadata` | dict[str, Any] | None | Optional metadata. |
| Type | Description |
|---|---|
| Result[GuardCheckResult, GuardError] | Ok(GuardCheckResult) PASS if within limit; BLOCK or WARN if exceeded. |
OutputGuardTriggeredEvent
Emitted when an output guard check is triggered (blocked or flagged).
Consumed by: safety monitoring, audit, security review.
OutputLengthGuard
Guard that enforces a maximum LLM response length.
| Parameter | Type | Description |
|---|---|---|
| `max_chars` | int | Maximum allowed character count in the response. |
| `action` | str | Action when the limit is exceeded — ``"block"`` (default) or ``"warn"``. |
Example
    guard = OutputLengthGuard(max_chars=50000, action="warn")
    result = await guard.check(llm_response)

Initialise the output length guard.
| Parameter | Type | Description |
|---|---|---|
| `max_chars` | int | Maximum allowed character count. |
| `action` | str | ``"block"`` or ``"warn"``. |
async def check( content: str, *, original_input: str | None = None, metadata: dict[str, Any] | None = None ) -> Result[GuardCheckResult, GuardError]
Check whether the response exceeds the configured length.
| Parameter | Type | Description |
|---|---|---|
| `content` | str | LLM response text to measure. |
| `original_input` | str | None | Unused — present for protocol compatibility. |
| `metadata` | dict[str, Any] | None | Optional metadata. |
| Type | Description |
|---|---|
| Result[GuardCheckResult, GuardError] | Ok(GuardCheckResult) PASS if within limit; BLOCK or WARN if exceeded. |
PIIDetector
Input guard that detects and optionally redacts PII.
| Parameter | Type | Description |
|---|---|---|
| `action` | ``"redact"`` (default), ``"block"``, or ``"warn"``. | |
| `entities` | List of PII entity types to detect. Defaults to all supported types. |
Example
    guard = PIIDetector(action="redact", entities=["EMAIL", "SSN"])
    result = await guard.check(user_message)
    safe_content = result.redacted_content or user_message

Initialise the PII detector.
| Parameter | Type | Description |
|---|---|---|
| `action` | str | Action when PII is found — ``"redact"``, ``"block"``, or ``"warn"``. |
| `entities` | list[str] | None | PII entity types to scan for. Defaults to all types. |
async def check( content: str, *, messages: list[Any] | None = None, metadata: dict[str, Any] | None = None ) -> Result[GuardCheckResult, GuardError]
Scan content for PII and apply the configured action.
| Parameter | Type | Description |
|---|---|---|
| `content` | str | Input text to scan. |
| `messages` | list[Any] | None | Unused — present for protocol compatibility. |
| `metadata` | dict[str, Any] | None | Optional metadata. |
| Type | Description |
|---|---|
| Result[GuardCheckResult, GuardError] | Ok(GuardCheckResult) PASS if no PII detected; otherwise the configured action result. |
PIIRedactor
Output guard that redacts PII from LLM responses.
| Parameter | Type | Description |
|---|---|---|
| `entities` | list[str] | None | List of PII entity types to redact. Defaults to all supported types. |
| `action` | str | ``"redact"`` (default) or ``"block"`` (rejects any response containing PII rather than sanitising it). |
Example
    guard = PIIRedactor(entities=["SSN", "CREDIT_CARD", "EMAIL"])
    result = await guard.check(llm_response)
    safe_response = result.final_content or llm_response

Initialise the PII redactor.
| Parameter | Type | Description |
|---|---|---|
| `entities` | list[str] | None | PII types to redact. Defaults to all types. |
| `action` | str | ``"redact"`` or ``"block"``. |
async def check( content: str, *, original_input: str | None = None, metadata: dict[str, Any] | None = None ) -> Result[GuardCheckResult, GuardError]
Scan LLM response for PII and apply the configured action.
| Parameter | Type | Description |
|---|---|---|
| `content` | str | LLM response text to scan. |
| `original_input` | str | None | Unused — present for protocol compatibility. |
| `metadata` | dict[str, Any] | None | Optional metadata. |
| Type | Description |
|---|---|
| Result[GuardCheckResult, GuardError] | Ok(GuardCheckResult) PASS if no PII; REDACT with sanitised content or BLOCK otherwise. |
PromptInjectionDetector
Heuristic detector for prompt injection and jailbreak attempts.
Blocks or warns on content that attempts to override system instructions, break out of a persona, or exfiltrate the system prompt.
| Parameter | Type | Description |
|---|---|---|
| `action` | str | Action when injection is detected — ``"block"`` (default) or ``"warn"``. |
Example
    guard = PromptInjectionDetector(action="block")
    result = await guard.check(user_message)
    if not result.passed:
        return Err(InjectionAttemptError())

Initialise the injection detector.
| Parameter | Type | Description |
|---|---|---|
| `action` | str | ``"block"`` or ``"warn"``. |
async def check( content: str, *, messages: list[Any] | None = None, metadata: dict[str, Any] | None = None ) -> Result[GuardCheckResult, GuardError]
Evaluate content for prompt injection attempts.
| Parameter | Type | Description |
|---|---|---|
| `content` | str | User-supplied text to check. |
| `messages` | list[Any] | None | Optional structured messages for context. |
| `metadata` | dict[str, Any] | None | Optional request metadata. |
| Type | Description |
|---|---|
| Result[GuardCheckResult, GuardError] | Ok(GuardCheckResult) — BLOCK or WARN if injection detected, PASS otherwise. |
TopicRestrictor
Input guard that blocks prohibited topics.
| Parameter | Type | Description |
|---|---|---|
| `restricted_topics` | list[str] | List of topic keywords/phrases to prohibit. |
| `action` | str | Action when a restricted topic is found — ``"block"`` (default) or ``"warn"``. |
| `whole_word` | bool | If ``True`` (default), only match whole words / phrase boundaries to reduce false positives. |
Example
    guard = TopicRestrictor(
        restricted_topics=["weapons", "explosives", "self-harm"],
        action="block",
    )
    result = await guard.check(user_message)

def __init__( restricted_topics: list[str], action: str = 'block', whole_word: bool = True ) -> None
Initialise the topic restrictor.
| Parameter | Type | Description |
|---|---|---|
| `restricted_topics` | list[str] | List of prohibited topic keywords/phrases. |
| `action` | str | ``"block"`` or ``"warn"``. |
| `whole_word` | bool | Enforce word boundaries on matches. |
async def check( content: str, *, messages: list[Any] | None = None, metadata: dict[str, Any] | None = None ) -> Result[GuardCheckResult, GuardError]
Check content against restricted topics.
| Parameter | Type | Description |
|---|---|---|
| `content` | str | Input text to evaluate. |
| `messages` | list[Any] | None | Unused — present for protocol compatibility. |
| `metadata` | dict[str, Any] | None | Optional metadata. |
| Type | Description |
|---|---|
| Result[GuardCheckResult, GuardError] | Ok(GuardCheckResult) PASS if no restricted topics found; BLOCK or WARN otherwise. |
Functions
guarded
def guarded( input_guards: list[InputGuardProtocol] | None = None, output_guards: list[OutputGuardProtocol] | None = None ) -> Callable[[F], F]
Apply input and/or output safety guards to the decorated async function.
Guards are attached as metadata on the wrapper. The actual guard pipeline execution is handled by the GuardPipelineProtocol implementation resolved from the container — this decorator is a marker/configuration mechanism.
| Parameter | Type | Description |
|---|---|---|
| `input_guards` | list[InputGuardProtocol] | None | Guards to evaluate against string inputs before calling. |
| `output_guards` | list[OutputGuardProtocol] | None | Guards to evaluate against string output after calling. |
| Type | Description |
|---|---|
| Callable[[F], F] | A decorator that attaches guard metadata to the wrapped coroutine. |
Example
    @guarded(
        input_guards=[PromptInjectionGuard()],
        output_guards=[PIIFilterGuard()],
    )
    async def chat(prompt: str) -> str:
        …
Exceptions
GuardConfigurationError
Raised when a guard is misconfigured at boot time.
GuardError
Base exception for all guard-related errors.
GuardPipelineError
Raised when the guard pipeline encounters an unrecoverable error.