Skip to content
GitHubDiscord

AI Guard (lexigram-ai-guard)

AI input/output guard pipeline for the Lexigram Framework — LLM safety and content filtering


Content safety and guardrails for the Lexigram Framework. Runs an ordered pipeline of input and output guards against every LLM call — blocking prompt injections, redacting PII, enforcing length limits, restricting topics, and optionally using an LLM classifier for advanced jailbreak detection. Zero-config usage starts with sensible defaults.

Terminal window
uv add lexigram-ai-guard
from lexigram import Application
from lexigram.di.module import Module, module
from lexigram.ai.guard import GuardModule
from lexigram.ai.guard.config import GuardConfig
@module(imports=[
GuardModule.configure(
GuardConfig(
injection_detection=True,
pii_detection=True,
pii_action="redact",
max_input_chars=8000,
)
)
])
class AppModule(Module):
pass
app = Application(modules=[AppModule])
if __name__ == "__main__":
app.run()

Zero-config usage: Call GuardModule.configure() with no arguments to use defaults.

application.yaml
ai_guard:
injection_detection: true
pii_detection: true
pii_action: "redact"
max_input_chars: 8000
restricted_topics: []
Section titled “Option 2 — Profiles + Environment Variables (recommended)”
Terminal window
export LEX_AI_GUARD__PII_ACTION=redact
# Environment variables for each field
from lexigram.ai.guard.config import GuardConfig
from lexigram.ai.guard import GuardModule
config = GuardConfig(
injection_detection=True,
pii_action="redact",
max_input_chars=8000,
restricted_topics=["violence", "adult_content"],
)
GuardModule.configure(config)
FieldDefaultEnv varDescription
enabledTrueLEX_AI_GUARD__ENABLEDMaster on/off switch
injection_detectionTrueLEX_AI_GUARD__INJECTION_DETECTIONEnable heuristic prompt injection detector
injection_action"block"LEX_AI_GUARD__INJECTION_ACTIONAction on injection: "block" or "warn"
pii_detectionTrueLEX_AI_GUARD__PII_DETECTIONEnable PII detection on user inputs
pii_action"redact"LEX_AI_GUARD__PII_ACTIONAction on PII: "redact", "block", or "warn"
pii_entities[]LEX_AI_GUARD__PII_ENTITIESPII entity types to scan
pii_redaction_outputTrueLEX_AI_GUARD__PII_REDACTION_OUTPUTApply PII redaction to LLM outputs
max_input_chars0LEX_AI_GUARD__MAX_INPUT_CHARSMaximum input character count
max_output_chars0LEX_AI_GUARD__MAX_OUTPUT_CHARSMaximum output character count
length_action"block"LEX_AI_GUARD__LENGTH_ACTIONAction when a length limit is exceeded
restricted_topics[]LEX_AI_GUARD__RESTRICTED_TOPICSTopic keywords to block
enable_llm_guardsFalseLEX_AI_GUARD__ENABLE_LLM_GUARDSUse LLM for advanced injection/jailbreak detection
guard_model"gpt-4o-mini"LEX_AI_GUARD__GUARD_MODELModel used for LLM-based guards
llm_guard_threshold0.7LEX_AI_GUARD__LLM_GUARD_THRESHOLDConfidence threshold for LLM guard action
sensitivity_level"medium"LEX_AI_GUARD__SENSITIVITY_LEVELGuard aggressiveness: "low", "medium", "high"
MethodDescription
GuardModule.configure(config)Configure with explicit config
GuardModule.stub(config)Minimal config for testing
  • Input guards: PromptInjectionDetector, PIIDetector, TopicRestrictor, InputLengthGuard
  • Output guards: PIIRedactor, OutputLengthGuard
  • LLM-based guards: Optional LLMInjectionDetector and LLMJailbreakDetector
  • Guard results: AggregateGuardResult with passed, blocked, redacted, warned states
  • @guarded decorator: Function-level guard attachment
  • Lifecycle hooks: GuardInputCheckedHook, GuardOutputCheckedHook, GuardPipelineCompletedHook
async with Application.boot(modules=[GuardModule.stub(
GuardConfig(
injection_detection=True,
injection_action="block",
)
)]) as app:
# your test code
...
FileWhat it contains
src/lexigram/ai/guard/module.pyGuardModule.configure(), .stub()
src/lexigram/ai/guard/config.pyGuardConfig
src/lexigram/ai/guard/pipeline/guard_pipeline.pyGuardPipeline orchestrator
src/lexigram/ai/guard/pipeline/result.pyGuardCheckResult, AggregateGuardResult, GuardAction
src/lexigram/ai/guard/input/injection.pyPromptInjectionDetector heuristics
src/lexigram/ai/guard/input/pii.pyPIIDetector regex patterns
src/lexigram/ai/guard/output/pii_redactor.pyPIIRedactor
src/lexigram/ai/guard/di/provider.pyGuardProvider boot and registration