API Reference
Protocols
CacheStatusHandler
Protocol for cache status handlers.
Implement this protocol to create custom handlers for cache status events. Handlers are registered with the cache status registry and called when corresponding cache operations occur.
def can_handle(status: CacheStatus) -> bool
Check if this handler can handle the status.
| Parameter | Type | Description |
|---|---|---|
| `status` | CacheStatus | The cache status to check. |
| Type | Description |
|---|---|
| bool | True if this handler can handle the status. |
async def record( metrics: CacheMetrics, count: int ) -> None
Record metrics for the status.
| Parameter | Type | Description |
|---|---|---|
| `metrics` | CacheMetrics | The metrics object to update. |
| `count` | int | The number of operations to record. |
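The sketch below shows one way a custom handler might satisfy this protocol. The import path, the class name, and the `CacheStatus.HIT` member are illustrative assumptions; only the `can_handle`/`record` signatures above come from the protocol.

```python
from lexigram.cache import CacheMetrics, CacheStatus  # assumed import path

class HitOnlyHandler:
    """Hypothetical handler that records only cache hits."""

    def can_handle(self, status: CacheStatus) -> bool:
        # CacheStatus.HIT is an assumed member name.
        return status == CacheStatus.HIT

    async def record(self, metrics: CacheMetrics, count: int) -> None:
        # Delegate to CacheMetrics.record (documented under CacheMetrics below).
        await metrics.record(CacheStatus.HIT, count=count)
```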
Classes
AutoRenewingLock
Distributed lock with automatic TTL renewal.
Prevents lock expiration during long operations by automatically extending the TTL in the background.
def __init__( redis: Any, key: str, lock_id: str, ttl: int, auto_renew: bool = True, renew_interval: int | None = None ) -> None
Initialize auto-renewing lock.
| Parameter | Type | Description |
|---|---|---|
| `redis` | Any | Redis client |
| `key` | str | Lock key |
| `lock_id` | str | Unique lock identifier (owner ID) |
| `ttl` | int | Lock TTL in seconds |
| `auto_renew` | bool | Enable automatic renewal |
| `renew_interval` | int | None | Renewal interval (default: ttl / 3) |
Start background task to auto-renew lock.
Stop background renewal task.
Release lock and stop auto-renewal.
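A minimal usage sketch. The constructor arguments come from the signature above; the `start()` and `release()` coroutine names are assumptions based on the method descriptions, and `redis.asyncio` stands in for whichever client you pass.

```python
import redis.asyncio as redis

async def run_nightly_job() -> None:
    lock = AutoRenewingLock(
        redis=redis.Redis(),
        key="jobs:nightly-report",
        lock_id="worker-1",
        ttl=30,               # renewed roughly every ttl / 3 seconds by default
    )
    await lock.start()        # assumed name: start background renewal
    try:
        ...                   # long-running work protected by the lock
    finally:
        await lock.release()  # assumed name: release and stop auto-renewal
```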
BackendType
Supported cache backend types.
CacheBackendConfig
Unified (flattened-union) configuration for any single cache backend.
This class uses the flattened-union pattern: all backend-specific
fields live in one dataclass and are partitioned by the type field.
Fields that do not belong to the active backend type are silently
ignored by the respective backend implementation.
When to use
Prefer this class when loading backend configuration from environment variables or a flat config file (e.g. TOML/YAML with no nested sections), where per-backend types cannot be selected at parse time.
For strongly-typed, up-front configuration prefer the dedicated classes: MemoryBackendConfig, RedisBackendConfig, or MemcachedBackendConfig.
Field partitioning
| Field | memory | redis | memcached |
|---|---|---|---|
| name, type, default, enabled, default_ttl, key_prefix, enable_metrics, health_check_interval | ✓ | ✓ | ✓ |
| max_size, cleanup_interval | ✓ | | |
| redis_host/port/db/password/url/ssl/pool_size | | ✓ | |
| memcached_host/port/servers | | | ✓ |
Attributes:
- name: Unique name for this backend.
- type: Backend type (memory, redis, memcached).
- default: Whether this is the default backend.
- enabled: Whether this backend is enabled.
- default_ttl: Default TTL in seconds.
- key_prefix: Key prefix for namespace isolation.
- enable_metrics: Enable metrics collection.
- health_check_interval: Health check interval in seconds.
def build_redis_url() -> CacheBackendConfig
Build Redis URL from components if not provided.
def build_memcached_servers() -> CacheBackendConfig
Build Memcached server list if not provided.
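A sketch of the flattened-union pattern, assuming the fields from the partitioning table above are plain constructor keyword arguments and that the `type` field accepts a string; the library may expect a BackendType member instead.

```python
backend = CacheBackendConfig(
    name="primary",
    type="redis",          # assumption: string value selecting the backend type
    default=True,
    default_ttl=300,
    key_prefix="app:",
    redis_host="localhost",
    redis_port=6379,
    redis_db=0,
    # memory-/memcached-specific fields would simply be ignored here
)
backend = backend.build_redis_url()  # fills redis_url from host/port/db if unset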
CacheConfig
Top-level configuration for Lexigram Cache.
Attributes:
- name: Configuration name (default: "cache").
- version: Config version.
- enabled: Whether cache is enabled.
- backends: List of cache backend configurations.
- service: Cache service settings.
- environment: Environment name.
- debug: Debug mode.
Block insecure cache configurations in production.
def validate_default_backend( cls, v: list[CacheBackendConfig] ) -> list[CacheBackendConfig]
Ensure exactly one default backend is specified.
def validate_unique_names( cls, v: list[CacheBackendConfig] ) -> list[CacheBackendConfig]
Ensure backend names are unique.
def get_default_backend() -> CacheBackendConfig | None
Get the default backend configuration.
| Type | Description |
|---|---|
| CacheBackendConfig | None | The first backend marked as default, or ``None`` if none exists. |
def get_backend(name: str) -> CacheBackendConfig | None
Get a backend configuration by name.
| Parameter | Type | Description |
|---|---|---|
| `name` | str | The backend name to look up. |
| Type | Description |
|---|---|
| CacheBackendConfig | None | Matching CacheBackendConfig, or ``None`` if not found. |
Return the provider class for this config.
| Type | Description |
|---|---|
| type | The CacheProvider class. |
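A short sketch tying the pieces together, assuming keyword-argument construction; the lookup helpers are the ones documented above.

```python
config = CacheConfig(
    backends=[
        CacheBackendConfig(name="primary", type="redis", default=True),
        CacheBackendConfig(name="local", type="memory"),
    ],
)

default_backend = config.get_default_backend()   # CacheBackendConfig | None
local_backend = config.get_backend("local")      # lookup by name, None if missing
```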
CacheConnectedEvent
Cache backend connection established.
CacheConnectedHook
Payload fired when the cache backend establishes its connection.
Attributes:
- backend: Identifier of the backend that connected (e.g. "redis").
CacheDecorator
Advanced caching decorator with multiple strategies.
Provides a class-based decorator with more configuration options and caching strategies.
def __init__( service: CacheService, strategy: str = 'remember', key_prefix: str = '', ttl: int | None = None, backend: str | None = None, protect: bool = True, condition: Callable[[Any], bool] | None = None )
Initialize the cache decorator.
| Parameter | Type | Description |
|---|---|---|
| `service` | CacheService | CacheService instance |
| `strategy` | str | Caching strategy ("remember", "cache", "conditional") |
| `key_prefix` | str | Prefix for cache keys |
| `ttl` | int | None | Time-to-live in seconds |
| `backend` | str | None | Backend name |
| `protect` | bool | Enable stampede protection |
| `condition` | Callable[[Any], bool] | None | Condition function for conditional caching |
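A usage sketch under the assumption that a CacheDecorator instance is applied directly as a decorator (the class is described as a class-based decorator, but the call protocol is not spelled out above); `cache_service` and `build_report` are placeholders.

```python
cached_report = CacheDecorator(
    service=cache_service,
    strategy="remember",
    key_prefix="reports",
    ttl=600,
)

@cached_report   # assumption: the instance is callable as a decorator
async def monthly_report(month: str) -> dict:
    return await build_report(month)
```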
CacheDisconnectedHook
Payload fired when the cache backend disconnects or is shut down.
Attributes:
- backend: Identifier of the backend that disconnected.
CacheEntry
Cache entry with metadata for stampede protection.
Attributes:
- value: Cached value.
- cached_at: When value was cached.
- expires_at: When value expires.
CacheEntryEvictedHook
Payload fired when a cache entry is evicted (TTL expiry or explicit removal).
Attributes:
- key: The cache key that was evicted.
- backend: Identifier of the backend that performed the eviction.
CacheEvictedEvent
Cache entry was evicted.
CacheHealthResult
CacheHitEvent
Cache lookup resulted in a hit.
CacheItem
Represents a cached item with metadata.
CacheMetrics
Cache performance metrics with atomic updates.
async def record( status: CacheStatus, latency: float = 0.0, count: int = 1 ) -> None
Atomic record helper.
Convenience method to record hits.
Convenience method to record misses.
Convenience method to record sets.
Convenience method to record deletes.
Convenience method to record errors.
CacheMissEvent
Cache lookup resulted in a miss.
CacheModule
Redis and in-memory cache backends with stampede protection.
Call configure to register a configured CacheProvider and expose CacheBackendProtocol for injection.
Usage
```python
from lexigram.cache.config import CacheConfig

@module(imports=[CacheModule.configure(CacheConfig(...))])
class AppModule(Module):
    pass
```

def configure( cls, config: Any | None = None ) -> DynamicModule
Create a CacheModule with explicit configuration.
| Parameter | Type | Description |
|---|---|---|
| `config` | Any | None | CacheConfig or ``None`` for framework defaults. |
| Type | Description |
|---|---|
| DynamicModule | A DynamicModule descriptor. |
def stub(cls) -> DynamicModule
Return an in-memory CacheModule for unit testing.
Uses an in-memory cache backend with no external Redis connection.
SemanticCacheProtocol is not registered (not available without
embedding client and vector index).
| Type | Description |
|---|---|
| DynamicModule | A DynamicModule backed by in-memory cache storage. |
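A sketch mirroring the configure() usage above, assuming stub() is dropped into a test module's imports the same way.

```python
@module(imports=[CacheModule.stub()])   # in-memory backend, no Redis needed
class TestAppModule(Module):
    pass
```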
CacheOperationConfig
Internal configuration for cache operations. Shared across backends.
Attributes:
- default_ttl: Default time-to-live in seconds.
- max_memory: Maximum memory usage in bytes.
- key_prefix: Prefix for all cache keys.
- enable_metrics: Whether to enable metrics collection.
- serializer_type: Type of serializer to use (json, pickle, msgpack).
Create a prefixed cache key.
| Parameter | Type | Description |
|---|---|---|
| `key` | str | The raw cache key. |
| Type | Description |
|---|---|
| str | Key with the configured prefix applied, or the original key if no prefix is set. |
Remove prefix from a cache key.
| Parameter | Type | Description |
|---|---|---|
| `prefixed_key` | str | A key that may carry the configured prefix. |
| Type | Description |
|---|---|
| str | The key with the prefix removed, or the original key unchanged. |
CacheProvider
Lexigram Cache provider for lexigram integration.
Integrates cache services with lexigram’s provider system, managing lifecycle, configuration, and service registration.
Initialize the cache provider.
| Parameter | Type | Description |
|---|---|---|
| `config` | CacheConfig | None | Optional cache configuration. When provided the provider is configured immediately (equivalent to calling configure). |
def from_config( cls, config: CacheConfig, **context: Any ) -> CacheProvider
Create a CacheProvider from a CacheConfig.
The provider is created and immediately configured with the config dict.
Configure the provider with cache settings.
async def register(container: ContainerRegistrarProtocol) -> None
Register cache services with the container.
Only binds factories — no I/O. All real initialization happens in boot().
async def boot(container: ContainerResolverProtocol) -> None
Start the cache provider.
All real I/O initialization (backend connections, protection setup, service wiring) happens here, after the container is frozen.
| Parameter | Type | Description |
|---|---|---|
| `container` | ContainerResolverProtocol | The DI container provided by the framework. |
Queue cache repository observation wiring for provider boot.
| Parameter | Type | Description |
|---|---|---|
| `cache_repository` | Any | Cache repository that exposes ``observe()``. |
| `repository` | Any | Source repository whose mutations should invalidate cache. |
Shutdown the cache provider and cleanup resources.
property cache() -> CacheService
Get the default cache service.
Check provider health.
def get_service(backend_name: str | None = None) -> CacheService
Get a cache service by backend name.
| Parameter | Type | Description |
|---|---|---|
| `backend_name` | str | None | Name of the backend, uses default if None |
| Type | Description |
|---|---|
| CacheService | CacheService instance |
| Exception | Description |
|---|---|
| ValueError | If backend not found |
Get a backend instance by name.
| Parameter | Type | Description |
|---|---|---|
| `backend_name` | str | None | Name of the backend, uses default if None |
| Type | Description |
|---|---|
| Any | Backend instance |
| Exception | Description |
|---|---|
| ValueError | If backend not found |
def get_default_service() -> CacheService
Get the default cache service.
| Type | Description |
|---|---|
| CacheService | Default CacheService instance |
Return comprehensive health status of all cache services and backends.
| Type | Description |
|---|---|
| HealthCheckResult | Structured HealthCheckResult. |
Return comprehensive metrics from all registered cache services.
| Type | Description |
|---|---|
| dict[str, Any] | Metrics dictionary with per-service statistics. |
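A sketch of the provider lifecycle described above; `container` is a placeholder for the framework-provided DI container, and `CacheConfig(...)` is left unfilled.

```python
provider = CacheProvider.from_config(CacheConfig(...))

await provider.register(container)   # binds factories only, no I/O
await provider.boot(container)       # backend connections happen here
service = provider.get_service()     # default CacheService
```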
CacheResult
Result of a cache operation.
def hit( cls, value: T, key: str | None = None, latency: float | None = None, backend: str | None = None ) -> CacheResult[T]
def miss( cls, key: str | None = None, latency: float | None = None ) -> CacheResult[T]
CacheService
High-level cache service with advanced features.
Provides a unified interface for caching operations across backend implementations (memory, Redis, Memcached). Stampede protection, metrics tracking, tag-based invalidation, and batch operations are composed in via mixin inheritance.
Attributes:
- provider: The cache backend provider.
- config: Configuration for cache operations.
def __init__( provider: CacheProvider | None = None, config: CacheOperationConfig | None = None, protection: StampedeProtectedCache | None = None, backend: CacheBackendProtocol | None = None, ctx: Context | None = None, max_tags: int = 10000 )
Initialize the cache service.
| Parameter | Type | Description |
|---|---|---|
| `provider` | CacheProvider | None | Cache provider for backend management (optional) |
| `config` | CacheOperationConfig | None | Service-level configuration |
| `protection` | StampedeProtectedCache | None | Cache stampede protection instance |
| `backend` | CacheBackendProtocol | None | Direct backend instance (optional, for standalone usage) |
| `ctx` | Context | None | Optional context for request-scoped cache keys |
| `max_tags` | int | Maximum number of tags tracked in the in-memory index. When the cap is reached, the oldest tag (by insertion/access order) is evicted. Defaults to 10 000. |
Close the cache service, releasing any background resources.
async def get( key: str, backend: str | None = None, default: Any = None, request_scoped: bool = False ) -> Any
Get a value from cache with error handling.
| Parameter | Type | Description |
|---|---|---|
| `key` | str | Cache key |
| `backend` | str | None | Backend name (uses default if None) |
| `default` | Any | Default value if key not found |
| Type | Description |
|---|---|
| Any | Cached value or default |
Set a value in cache with error handling.
| Parameter | Type | Description |
|---|---|---|
| `key` | str | Cache key |
| `value` | Any | Value to cache |
| `ttl` | int | None | Time-to-live in seconds |
| `backend` | str | None | Backend name (uses default if None) |
| Type | Description |
|---|---|
| bool | True if successful, False otherwise |
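A short sketch combining the get and set calls documented above; `cache_service` stands in for an injected CacheService.

```python
async def cache_profile(cache_service: CacheService) -> dict:
    await cache_service.set("user:42:profile", {"name": "Ada"}, ttl=300)
    # Returns the default ({}) if the key is missing, expired, or errors out.
    return await cache_service.get("user:42:profile", default={})
```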
Delete a value from cache.
| Parameter | Type | Description |
|---|---|---|
| `key` | str | Cache key |
| `backend` | str | None | Backend name (uses default if None) |
| Type | Description |
|---|---|
| bool | True if successful, False otherwise |
Delete all keys matching a glob-style pattern.
Delegates to the backend’s delete_pattern implementation.
Redis uses SCAN + DEL; Memory iterates its in-process store.
The namespace prefix (if any) is applied by the backend itself.
| Parameter | Type | Description |
|---|---|---|
| `pattern` | str | Glob pattern, e.g. ``"pet:list:*"``. |
| `backend` | str | None | Named backend to use (uses default if ``None``). |
| Type | Description |
|---|---|
| int | Number of keys deleted, or 0 on error. |
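For example, invalidating every cached listing after a write (pattern taken from the table above):

```python
async def invalidate_listings(cache_service: CacheService) -> int:
    # Removes every key matching the glob on the default backend.
    return await cache_service.delete_pattern("pet:list:*")
```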
Check if a key exists in cache.
| Parameter | Type | Description |
|---|---|---|
| `key` | str | Cache key |
| `backend` | str | None | Backend name (uses default if None) |
| Type | Description |
|---|---|
| bool | True if key exists, False otherwise |
Clear all values from cache.
| Parameter | Type | Description |
|---|---|---|
| `backend` | str | None | Backend name (uses default if None) |
| Type | Description |
|---|---|
| bool | True if successful, False otherwise |
Get comprehensive health status of the cache service.
| Type | Description |
|---|---|
| HealthCheckResult | Structured health check result. |
Get service performance metrics.
| Type | Description |
|---|---|
| dict[str, int] | Dictionary of metrics |
Reset performance metrics.
async def get_typed( key: str, type_: type[_T], backend: str | None = None, default: _T | None = None ) -> _T | None
Get a value from cache and cast it to the requested type.
Unlike get, which returns Any, this method provides
compile-time type safety. The stored value is returned as-is if it
is already an instance of type_; otherwise the raw value is passed
to type_(**raw) (for dict-like values) or type_(raw) as a
fallback.
| Parameter | Type | Description |
|---|---|---|
| `key` | str | Cache key. |
| `type_` | type[_T] | The expected Python type of the cached value. |
| `backend` | str | None | Backend name (uses default if ``None``). |
| `default` | _T | None | Value returned when the key is absent or deserialisation fails. |
| Type | Description |
|---|---|
| _T | None | The cached value cast to *type_*, or *default*. |
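A sketch with a small dataclass; the reconstruction behaviour (Profile(**raw) for dict-like values) follows the description above.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    name: str
    age: int

async def load_profile(cache_service: CacheService) -> Profile | None:
    # A stored dict is rebuilt via Profile(**raw); a Profile instance is
    # returned as-is; on a miss the default (None) is returned.
    return await cache_service.get_typed("user:42:profile", Profile, default=None)
```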
CacheServiceConfig
Configuration for the cache service layer.
Controls service-level behavior and features like stampede protection, metrics, and circuit breaker.
Attributes:
- enable_protection: Enable cache stampede protection.
- enable_metrics: Enable service-level metrics.
- enable_health_checks: Enable health checks.
- protection_lock_ttl: Lock TTL for stampede protection.
- protection_max_wait: Max wait time for locks.
- protection_retry_interval: Retry interval for locks.
- default_backend: Name of the default backend.
- circuit_breaker_enabled: Enable circuit breaker pattern.
- circuit_breaker_threshold: Failure threshold for circuit breaker.
- default_serializer: Default serializer type.
CacheStats
CacheStatus
Status values for cache operations.
CompressingSerializer
An AsyncStringSerializerProtocol implementation with automatic compression for large values.
Compresses values above size threshold to save memory.
def __init__( compression_threshold: int = const.DEFAULT_COMPRESSION_THRESHOLD, compression_level: int = const.DEFAULT_COMPRESSION_LEVEL ) -> None
Initialize serializer.
| Parameter | Type | Description |
|---|---|---|
| `compression_threshold` | int | Compress values larger than this (bytes) |
| `compression_level` | int | Compression level (1-9, higher = more compression) |
Serialize and optionally compress value.
| Parameter | Type | Description |
|---|---|---|
| `value` | Any | Value to serialize |
| Type | Description |
|---|---|
| str | Serialized string (possibly compressed) |
Deserialize and decompress if needed.
| Parameter | Type | Description |
|---|---|---|
| `value` | str | Serialized string |
| Type | Description |
|---|---|
| Any | Original value |
| Exception | Description |
|---|---|
| SerializationError | If deserialization fails |
Get serializer statistics.
| Type | Description |
|---|---|
| dict[str, Any] | Stats dict |
CostAwareCacheDecision
Cost-aware cache hit/miss decision function.
Implements a simple heuristic that balances accuracy (similarity score) against the economic cost of invoking the API. When API costs are high, lower similarity thresholds become acceptable to save money. When costs are low, higher similarity thresholds are enforced.
The decision is based on:
- similarity: Cosine similarity [0, 1]. Closeness to 1 = high accuracy.
- api_cost_per_1k_tokens: USD cost per 1000 tokens.
- expected_tokens: Expected token count for a fresh API call.
Initialize the cost-aware decision function.
| Parameter | Type | Description |
|---|---|---|
| `accuracy_weight` | float | Weight (0 to 1) assigned to accuracy mismatch penalty. Higher values prioritize accuracy over cost. Defaults to 0.7. |
def should_use_cache( similarity: float, api_cost_per_1k_tokens: float, expected_tokens: int ) -> bool
Decide whether to use a cached response.
Decision logic:
- Compute estimated_cost = api_cost_per_1k_tokens * expected_tokens / 1000
- Compute mismatch_penalty = (1 - similarity) * accuracy_weight
- Compute cost_incentive = min(estimated_cost * (1 - accuracy_weight), 1.0)
- Return mismatch_penalty < cost_incentive
When accuracy_weight=0.7 and similarity=0.95:
- mismatch_penalty = 0.05 * 0.7 = 0.035
- If estimated_cost >= 0.035/0.3 = 0.117 USD, cache is used.
- For gpt-4 (~$0.03 per 1k tokens), expected_tokens=100 → ~$0.003, cache is skipped (cost too low to accept small mismatch).
- For gpt-4 (~$0.03 per 1k tokens), expected_tokens=5000 → ~$0.15, cache is used (cost high enough to accept 5% mismatch).
| Parameter | Type | Description |
|---|---|---|
| `similarity` | float | Cosine similarity in [0, 1]. |
| `api_cost_per_1k_tokens` | float | Estimated USD cost per 1000 tokens. |
| `expected_tokens` | int | Expected token count for fresh call. |
| Type | Description |
|---|---|
| bool | True if the cached response should be used, False if a fresh API call should be made. |
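The heuristic above, restated as a standalone sketch (not the class's actual implementation) so the worked numbers can be checked directly.

```python
def should_use_cache(similarity: float,
                     api_cost_per_1k_tokens: float,
                     expected_tokens: int,
                     accuracy_weight: float = 0.7) -> bool:
    estimated_cost = api_cost_per_1k_tokens * expected_tokens / 1000
    mismatch_penalty = (1 - similarity) * accuracy_weight
    cost_incentive = min(estimated_cost * (1 - accuracy_weight), 1.0)
    return mismatch_penalty < cost_incentive

# gpt-4-style pricing: cheap calls skip the cache, expensive ones use it.
assert should_use_cache(0.95, 0.03, 100) is False    # fresh call ~$0.003
assert should_use_cache(0.95, 0.03, 5000) is True    # fresh call ~$0.15
```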
DistributedLockInfo
Lock ownership information for distributed locks.
EnvironmentConfigLoader
Loader for cache configuration from various sources.
Load CacheConfig from environment variables.
| Parameter | Type | Description |
|---|---|---|
| `prefix` | str | Environment variable prefix. |
| Type | Description |
|---|---|
| CacheConfig | Populated CacheConfig instance. |
Load CacheConfig from a dictionary.
| Parameter | Type | Description |
|---|---|---|
| `config_dict` | dict[str, Any] | Configuration dictionary. |
| Type | Description |
|---|---|
| CacheConfig | Populated CacheConfig instance. |
Load CacheConfig from a YAML file.
| Parameter | Type | Description |
|---|---|---|
| `file_path` | str | Path to the YAML configuration file. |
| Type | Description |
|---|---|
| CacheConfig | Populated CacheConfig instance. |
Load CacheConfig from a JSON file.
| Parameter | Type | Description |
|---|---|---|
| `file_path` | str | Path to the JSON configuration file. |
| Type | Description |
|---|---|
| CacheConfig | Populated CacheConfig instance. |
FaissVectorIndex
FAISS-backed in-memory vector index for semantic cache.
Uses IndexFlatIP (inner product on L2-normalized vectors = cosine similarity). Suitable for single-server deployments with up to ~1M entries. Implements VectorIndexProtocol.
All vectors must be L2-normalized (unit length) before storage.
Initialize the FAISS vector index.
| Parameter | Type | Description |
|---|---|---|
| `embedding_dim` | int | Dimension of embedding vectors. Defaults to 384 (common for sentence-transformers like all-MiniLM-L6-v2). |
| `max_entries` | int | Maximum number of entries before warning. Defaults to 100,000. |
| Exception | Description |
|---|---|
| ImportError | If faiss package is not installed. |
| ValueError | If embedding_dim <= 0 or max_entries <= 0. |
Search for the top-k most similar embeddings.
| Parameter | Type | Description |
|---|---|---|
| `embedding` | list[float] | Query embedding vector (must be L2-normalized). |
| `k` | int | Number of results to return. |
| Type | Description |
|---|---|
| list[tuple[str, float]] | List of (cache_key, similarity_score) tuples, sorted by similarity_score descending. Empty list if index is empty. |
Add an embedding to the index.
If the key already exists, it is not duplicated (early return). If the index reaches max_entries, a warning is logged but the entry is still added (LRU eviction can be a future enhancement).
| Parameter | Type | Description |
|---|---|---|
| `key` | str | Cache key identifier for the embedding. |
| `embedding` | list[float] | Embedding vector (must be L2-normalized). |
Remove an entry from the index by cache key.
FAISS IndexFlatIP does not support deletion. We use a soft-delete approach by marking the slot with None. Removed entries are skipped in search results.
| Parameter | Type | Description |
|---|---|---|
| `key` | str | Cache key identifier to remove. |
| Type | Description |
|---|---|
| bool | True if the key was found and removed, False if not found. |
Number of entries currently indexed (excluding deleted entries).
| Type | Description |
|---|---|
| int | Non-negative integer. |
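A sketch of indexing and querying, assuming the methods are named add/search as the descriptions above suggest; the vector itself is synthetic.

```python
import numpy as np

index = FaissVectorIndex(embedding_dim=384)

vec = np.random.default_rng(0).standard_normal(384).astype("float32")
vec = (vec / np.linalg.norm(vec)).tolist()      # vectors must be L2-normalized

index.add("llm:answer:weather-paris", vec)      # assumed method name
matches = index.search(vec, k=3)                # assumed; [(cache_key, score), ...]
```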
HealthStatus
Unified health status.
JSONSerializer
High-performance JSON-based serializer with enhanced type support.
Uses orjson (5-10x faster than stdlib json) for serialization with automatic fallback to stdlib json if orjson is unavailable.
Supports serialization of:
- Basic Python types (str, int, float, bool, None)
- Collections (list, dict, tuple, set)
- datetime and date objects (via orjson native support)
- UUID objects (via orjson native support)
- Custom objects with a `__dict__` attribute
Performance:
- With orjson: ~5-10x faster than stdlib json
- Native support for datetime, UUID, dataclasses
- Optimized for Redis/Memcached serialization workloads
This class implements AsyncStringSerializerProtocol.
Serialize a Python object to JSON string using orjson (5-10x faster).
Deserialize a JSON string back to a Python object.
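A round-trip sketch. The serialize/deserialize method names and their awaitability are assumptions drawn from the protocol name above, not confirmed signatures.

```python
from datetime import datetime, timezone
from uuid import uuid4

async def round_trip(serializer: JSONSerializer) -> None:
    payload = {"id": uuid4(), "created": datetime.now(timezone.utc), "tags": ["a", "b"]}
    raw = await serializer.serialize(payload)        # assumed method name
    restored = await serializer.deserialize(raw)     # assumed method name
```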
MemcachedBackendConfig
Configuration for Memcached cache backend.
Attributes:
- name: Backend name.
- default: Whether this is the default backend.
- enabled: Whether this backend is enabled.
- host: Memcached host.
- port: Memcached port.
- servers: List of server addresses (host:port).
- default_ttl: Default TTL in seconds.
- key_prefix: Key prefix for namespace isolation.
- connect_timeout: Connection timeout in seconds.
- timeout: Operation timeout in seconds.
- max_pool_size: Maximum connection pool size.
Get the backend type.
Get the server list.
MemcachedCacheBackend
Memcached cache backend implementation.
This backend provides Memcached-based distributed caching with TTL support. Unlike memory and Redis backends, this is implemented directly since the core infrastructure layer doesn’t provide a Memcached driver.
def __init__( servers: list[str], config: CacheOperationConfig | None = None )
Initialize the Memcached cache backend.
| Parameter | Type | Description |
|---|---|---|
| `servers` | list[str] | List of Memcached server addresses (e.g., ["localhost:11211"]). |
| `config` | CacheOperationConfig | None | Cache configuration. If None, uses default configuration. |
| Exception | Description |
|---|---|
| ImportError | If pymemcache is not installed. |
Get a value from the cache.
| Parameter | Type | Description |
|---|---|---|
| `key` | str | The cache key to retrieve. |
| Type | Description |
|---|---|
| Any | None | The cached value if found and not expired, None otherwise. |
Set a value in the cache with optional TTL.
| Parameter | Type | Description |
|---|---|---|
| `key` | str | The cache key to set. |
| `value` | Any | The value to cache. |
| `ttl` | int | None | Time to live in seconds. If None, uses default TTL. |
| Type | Description |
|---|---|
| bool | True if successful, False otherwise. |
Delete a value from the cache.
| Parameter | Type | Description |
|---|---|---|
| `key` | str | The cache key to delete. |
| Type | Description |
|---|---|
| bool | True if the key was deleted, False if it didn't exist or error occurred. |
Check if a key exists in the cache.
| Parameter | Type | Description |
|---|---|---|
| `key` | str | The cache key to check. |
| Type | Description |
|---|---|
| bool | True if the key exists and is not expired, False otherwise. |
Clear all values from the cache.
Note: This operation is not supported by Memcached protocol. Memcached doesn’t provide a way to clear all keys.
| Type | Description |
|---|---|
| bool | False (operation not supported). |
Get multiple values from the cache.
| Parameter | Type | Description |
|---|---|---|
| `keys` | list[str] | List of cache keys to retrieve. |
| Type | Description |
|---|---|
| dict[str, Any] | Dictionary mapping keys to their values (missing keys omitted). |
Set multiple values in the cache.
| Parameter | Type | Description |
|---|---|---|
| `items` | dict[str, Any] | Dictionary of key-value pairs to cache. |
| `ttl` | int | None | Time to live in seconds for all items. |
| Type | Description |
|---|---|
| bool | True if all items were set successfully, False otherwise. |
Delete multiple values from the cache.
| Parameter | Type | Description |
|---|---|---|
| `keys` | list[str] | List of cache keys to delete. |
| Type | Description |
|---|---|
| bool | True if all keys were deleted successfully, False otherwise. |
Delete keys matching a pattern.
Memcached does not natively support key enumeration or pattern matching. This method logs a warning and returns 0. If pattern-based invalidation is required, use Redis or the in-memory backend instead.
| Parameter | Type | Description |
|---|---|---|
| `pattern` | str | Glob pattern (not supported by Memcached). |
| Type | Description |
|---|---|
| int | Always 0 — Memcached does not support key enumeration. |
Perform a health check on the cache backend.
| Type | Description |
|---|---|
| HealthCheckResult | Structured health check result. |
MemoryBackendConfig
Configuration for in-memory cache backend.
Attributes:
- name: Backend name.
- default: Whether this is the default backend.
- enabled: Whether this backend is enabled.
- default_ttl: Default TTL in seconds.
- max_size: Maximum number of entries.
- cleanup_interval: Cleanup interval in seconds.
- key_prefix: Key prefix for namespace isolation.
MemoryCacheBackend
Memory cache backend using lexigram-components MemoryStateStore.
This backend provides in-memory caching with TTL support by wrapping the standardized MemoryStateStore from lexigram-components.
def __init__( config: CacheOperationConfig | None = None, max_size: int | None = None, hooks: HookRegistryProtocol | None = None ) -> None
Initialize the memory cache backend.
| Parameter | Type | Description |
|---|---|---|
| `config` | CacheOperationConfig | None | Cache configuration. If None, uses default configuration. |
| `max_size` | int | None | Maximum number of cache entries. When the store is full the least-recently-used entry is evicted. ``None`` disables eviction (use only for tests / development). |
| `hooks` | HookRegistryProtocol | None | Optional hook registry for lifecycle emission. |
async def get(key: str) -> Result[Any | None, CacheError]
Get a value from the cache.
| Parameter | Type | Description |
|---|---|---|
| `key` | str | The cache key to retrieve. |
| Type | Description |
|---|---|
| Result[Any | None, CacheError] | Ok(value) if found, Ok(None) if not found, Err(CacheError) on failure. |
async def set( key: str, value: Any, ttl: int | None = None ) -> Result[None, CacheError]
Set a value in the cache with optional TTL.
| Parameter | Type | Description |
|---|---|---|
| `key` | str | The cache key to set. |
| `value` | Any | The value to cache. |
| `ttl` | int | None | Time to live in seconds. If None, uses default TTL. |
| Type | Description |
|---|---|
| Result[None, CacheError] | Ok(None) if successful, Err(CacheError) on failure. |
async def delete(key: str) -> Result[bool, CacheError]
Delete a value from the cache.
| Parameter | Type | Description |
|---|---|---|
| `key` | str | The cache key to delete. |
| Type | Description |
|---|---|
| Result[bool, CacheError] | Ok(True) if deleted, Ok(False) if not found, Err(CacheError) on failure. |
async def exists(key: str) -> Result[bool, CacheError]
Check if a key exists in the cache.
| Parameter | Type | Description |
|---|---|---|
| `key` | str | The cache key to check. |
| Type | Description |
|---|---|
| Result[bool, CacheError] | Ok(True) if exists, Ok(False) otherwise, Err(CacheError) on failure. |
async def clear() -> Result[None, CacheError]
Clear all values from the cache.
| Type | Description |
|---|---|
| Result[None, CacheError] | Ok(None) if successful, Err(CacheError) on failure. |
async def get_many(keys: list[str]) -> Result[dict[str, Any], CacheError]
Get multiple values from the cache.
| Parameter | Type | Description |
|---|---|---|
| `keys` | list[str] | List of cache keys to retrieve. |
| Type | Description |
|---|---|
| Result[dict[str, Any], CacheError] | Ok(dict) mapping found keys to values, Err(CacheError) on failure. |
async def set_many( items: dict[str, Any], ttl: int | None = None ) -> Result[None, CacheError]
Set multiple values in the cache.
| Parameter | Type | Description |
|---|---|---|
| `items` | dict[str, Any] | Dictionary of key-value pairs to cache. |
| `ttl` | int | None | Time to live in seconds for all items. |
| Type | Description |
|---|---|
| Result[None, CacheError] | Ok(None) if all items set successfully, Err(CacheError) on failure. |
async def delete_many(keys: list[str]) -> Result[int, CacheError]
Delete multiple values from the cache.
| Parameter | Type | Description |
|---|---|---|
| `keys` | list[str] | List of cache keys to delete. |
| Type | Description |
|---|---|
| Result[int, CacheError] | Ok(count) of deleted keys, Err(CacheError) on failure. |
async def delete_pattern(pattern: str) -> Result[int, CacheError]
Delete all keys matching a glob-style pattern.
Iterates over the in-memory store’s known keys (including the configured prefix) and deletes those that match pattern.
| Parameter | Type | Description |
|---|---|---|
| `pattern` | str | Glob pattern, e.g. ``"pet:list:*"``. |
| Type | Description |
|---|---|
| Result[int, CacheError] | Ok(count) of deleted keys, Err(CacheError) on failure. |
Perform a health check on the cache backend.
| Type | Description |
|---|---|
| HealthCheckResult | Structured health check result. |
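A read helper sketch. The Result accessor names (is_ok/unwrap) are assumptions; only the Ok/Err semantics come from the tables above.

```python
from typing import Any

async def read_or_raise(backend: MemoryCacheBackend, key: str) -> Any:
    result = await backend.get(key)
    if result.is_ok():            # assumed Result API
        return result.unwrap()    # Ok(value), or Ok(None) on a miss
    raise RuntimeError(f"cache read failed for {key!r}")
```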
PickleSerializer
HMAC-SHA256-signed pickle serializer for Python object caching.
Every value is signed before storage and the signature is verified before
deserialization. An invalid or missing signature raises
CacheSerializationError — the payload
is never passed to pickle.loads.
Supports key rotation: pass a list of keys so that data signed with an older key can still be verified during a rotation window. The first key in the list is always used for signing; all keys are tried for verification. Use rotate_key to prepend a new active key.
Advantages: - Preserves all Python object types and relationships. - HMAC authentication prevents RCE via cache poisoning.
Disadvantages: - Python-specific (not compatible with other languages). - Breaking format change from unsigned pickle storage. - Version compatibility issues across Python versions.
Only use in Python-only environments where the HMAC key is kept secret.
| Parameter | Type | Description |
|---|---|---|
| `hmac_key` | bytes | str | list[bytes] | Secret key(s) used for HMAC-SHA256 signing, minimum 32 bytes each. Pass a ``str`` or ``bytes`` value for a single key, or a ``list[bytes]`` for multi-key rotation support. The first key in the list is the current signing key. Generate one with ``import secrets; secrets.token_bytes(32)``. |
| `protocol` | int | Pickle protocol version (default: highest available). |
| Exception | Description |
|---|---|
| ValueError | If any key is shorter than 32 bytes, or if the key list is empty. |
def __init__( hmac_key: bytes | str | list[bytes], protocol: int = pickle.HIGHEST_PROTOCOL ) -> None
Prepend new_key as the active signing key.
After rotation the serializer signs new entries with new_key while still accepting entries signed with any previously registered key. Remove old keys from the list once all cache entries have expired or been re-signed.
| Parameter | Type | Description |
|---|---|---|
| `new_key` | bytes | New signing key, minimum 32 bytes. |
| Exception | Description |
|---|---|
| ValueError | If *new_key* is shorter than 32 bytes. |
Pickle value, sign it, and return a <hmac>:<hex> string.
Verify the HMAC signature and unpickle the payload.
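A key-rotation sketch using only the constructor and rotate_key described above.

```python
import secrets

old_key = secrets.token_bytes(32)
serializer = PickleSerializer(hmac_key=old_key)

# Start a rotation window: new entries are signed with new_key, while values
# signed with old_key still verify until they expire or are re-signed.
new_key = secrets.token_bytes(32)
serializer.rotate_key(new_key)
```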
RedisBackendConfig
Configuration for Redis cache backend.
Attributes:
- name: Backend name.
- default: Whether this is the default backend.
- enabled: Whether this backend is enabled.
- host: Redis host.
- port: Redis port.
- db: Redis database number.
- password: Redis password (optional).
- url: Full Redis URL (overrides host/port/db).
- default_ttl: Default TTL in seconds.
- key_prefix: Key prefix for namespace isolation.
- ssl: Whether to use SSL/TLS.
- connection_pool_size: Connection pool size.
- socket_timeout: Socket timeout in seconds.
- retry_on_timeout: Whether to retry on timeout.
Get the backend type.
Get the full Redis URL.
RedisCacheBackend
Redis cache backend built on StateStoreProtocol.
This backend provides Redis-based distributed caching with TTL support by wrapping a standardized StateStoreProtocol implementation.
def __init__( store: StateStoreProtocol, config: CacheOperationConfig | None = None, hooks: HookRegistryProtocol | None = None )
Initialize the Redis cache backend.
| Parameter | Type | Description |
|---|---|---|
| `store` | StateStoreProtocol | StateStoreProtocol implementation for key-value operations. |
| `config` | CacheOperationConfig | None | Cache configuration. If None, uses default configuration. |
| `hooks` | HookRegistryProtocol | None | Optional hook registry for lifecycle emission. |
async def get(key: str) -> Result[Any | None, CacheError]
Get a value from the cache.
| Parameter | Type | Description |
|---|---|---|
| `key` | str | The cache key to retrieve. |
| Type | Description |
|---|---|
| Result[Any | None, CacheError] | Ok(value) if found, Ok(None) if not found, Err(CacheError) on failure. |
async def set( key: str, value: Any, ttl: int | None = None ) -> Result[None, CacheError]
Set a value in the cache with optional TTL.
| Parameter | Type | Description |
|---|---|---|
| `key` | str | The cache key to set. |
| `value` | Any | The value to cache. |
| `ttl` | int | None | Time to live in seconds. If None, uses default TTL. |
| Type | Description |
|---|---|
| Result[None, CacheError] | Ok(None) if successful, Err(CacheError) on failure. |
async def delete(key: str) -> Result[bool, CacheError]
Delete a value from the cache.
| Parameter | Type | Description |
|---|---|---|
| `key` | str | The cache key to delete. |
| Type | Description |
|---|---|
| Result[bool, CacheError] | Ok(True) if deleted, Ok(False) if not found, Err(CacheError) on failure. |
async def exists(key: str) -> Result[bool, CacheError]
Check if a key exists in the cache.
| Parameter | Type | Description |
|---|---|---|
| `key` | str | The cache key to check. |
| Type | Description |
|---|---|
| Result[bool, CacheError] | Ok(True) if exists, Ok(False) otherwise, Err(CacheError) on failure. |
async def clear() -> Result[None, CacheError]
Clear all values from the cache.
Uses pattern-based deletion to clear only prefixed keys. This is safer than FLUSHDB as it only clears cache keys.
| Type | Description |
|---|---|
| Result[None, CacheError] | Ok(None) if successful, Err(CacheError) on failure. |
async def get_many(keys: list[str]) -> Result[dict[str, Any], CacheError]
Get multiple values from the cache.
| Parameter | Type | Description |
|---|---|---|
| `keys` | list[str] | List of cache keys to retrieve. |
| Type | Description |
|---|---|
| Result[dict[str, Any], CacheError] | Ok(dict) mapping found keys to values, Err(CacheError) on failure. |
async def set_many( items: dict[str, Any], ttl: int | None = None ) -> Result[None, CacheError]
Set multiple values in the cache using pipeline.
| Parameter | Type | Description |
|---|---|---|
| `items` | dict[str, Any] | Dictionary of key-value pairs to cache. |
| `ttl` | int | None | Time to live in seconds for all items. |
| Type | Description |
|---|---|
| Result[None, CacheError] | Ok(None) if all items set successfully, Err(CacheError) on failure. |
async def delete_many(keys: list[str]) -> Result[int, CacheError]
Delete multiple values from the cache using pipeline.
| Parameter | Type | Description |
|---|---|---|
| `keys` | list[str] | List of cache keys to delete. |
| Type | Description |
|---|---|
| Result[int, CacheError] | Ok(count) of deleted keys, Err(CacheError) on failure. |
async def delete_pattern(pattern: str) -> Result[int, CacheError]
Delete all Redis keys matching a glob-style pattern.
Uses SCAN (non-blocking) to find matching keys then DEL or pipeline to remove them. The configured key prefix is prepended to pattern automatically.
| Parameter | Type | Description |
|---|---|---|
| `pattern` | str | Glob pattern, e.g. ``"pet:list:*"``. |
| Type | Description |
|---|---|
| Result[int, CacheError] | Ok(count) of deleted keys, Err(CacheError) on failure. |
Perform a health check on the cache backend.
| Type | Description |
|---|---|
| HealthCheckResult | Structured health check result. |
RedisDriver
Shared Redis connection logic.
RedisLockStore
Redis implementation of LockStore with owner validation.
Locks are stored as the value of the key and contain a lock_id of the
form <owner>:<uuid>. Release and extend operations are executed via
Lua scripts that compare-and-delete or compare-and-expire atomically.
Acquire a lock.
| Parameter | Type | Description |
|---|---|---|
| `resource` | str | The resource to lock. |
| `ttl` | int | Lock TTL in seconds. |
| `owner` | str | Owner identifier (required). |
| Type | Description |
|---|---|
| str | None | lock_id (str) on success, None on failure. |
Release a lock only if lock_id matches the current owner.
| Exception | Description |
|---|---|
| LockNotHeldError | if the lock does not exist |
| LockOwnershipError | if the lock is held by someone else |
Extend TTL of a lock only if lock_id matches the current owner.
| Exception | Description |
|---|---|
| LockNotHeldError | if the lock does not exist |
| LockOwnershipError | if the lock is held by someone else |
Check if a key is locked.
Async context manager that acquires a lock and releases it on exit.
Yields the lock_id string.
Check the health of the lock store.
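A sketch of the context-manager flow described above. The method name lock() and its parameters are assumptions; only acquire's resource/ttl/owner parameters are documented.

```python
async def update_order(store: RedisLockStore) -> None:
    # Assumed context-manager name; it yields the lock_id string.
    async with store.lock("orders:123", ttl=10, owner="worker-1") as lock_id:
        ...   # critical section; the lock is released on exit
```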
RedisSecretStore
Redis implementation of SecretStore with optional at-rest encryption.
| Parameter | Type | Description |
|---|---|---|
| `url` | str | Redis connection URL (e.g. ``redis://localhost:6379/0``). |
| `prefix` | str | Optional string prefix applied to every key stored in Redis. |
| `encryption_key` | bytes | str | None | Optional Fernet symmetric encryption key (32-byte URL-safe base64-encoded bytes or string). When provided all secret values are encrypted before writing and decrypted on read. Generate one with ``base64.urlsafe_b64encode(secrets.token_bytes(32))``. |
def __init__( url: str = '', prefix: str = '', encryption_key: bytes | str | None = None, client: Any | None = None )
Get a secret by name, decrypting if encryption is configured.
Set a secret, encrypting if encryption is configured.
Delete a secret.
List secrets with optional prefix.
Check the health of the secret store.
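A sketch assuming the secret accessors are named get/set (the descriptions above name the operations but not the signatures).

```python
import base64
import secrets

key = base64.urlsafe_b64encode(secrets.token_bytes(32))
store = RedisSecretStore(url="redis://localhost:6379/0", prefix="app:", encryption_key=key)

async def rotate_token(store: RedisSecretStore) -> str | None:
    await store.set("billing_api_token", "tok_...")   # assumed method name
    return await store.get("billing_api_token")       # assumed method name
```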
RedisStateStore
Redis implementation of StateStoreProtocol.
Get a value by key.
Set a value with optional TTL.
Delete a value by key.
Get multiple values by keys.
Check the health of the state store.
SemanticCacheStore
Three-tier semantic cache: exact hash → vector similarity → miss.
Implements SemanticCacheProtocol. Caches LLM responses using a three-tier lookup strategy:
Tier 1 (Exact Hash): Normalizes and hashes the query to look for exact matches in the cache backend. Fast, deterministic.
Tier 2 (Vector Similarity): If Tier 1 misses, embeds the query and searches the vector index for similar cached queries. Returns the cached response if similarity >= similarity_threshold.
Tier 3 (Cache Miss): If both tiers miss, returns None for the caller to invoke the LLM.
The store() method populates both Tier 1 and Tier 2. The invalidate() method removes from both.
def __init__( cache_backend: CacheBackendProtocol, embedding_client: EmbeddingClientProtocol, vector_index: VectorIndexProtocol, similarity_threshold: float = 0.95, cache_ttl: int | None = None ) -> None
Initialize the semantic cache store.
| Parameter | Type | Description |
|---|---|---|
| `cache_backend` | CacheBackendProtocol | Backend for exact hash storage (e.g., Redis, in-memory cache). Implements CacheBackendProtocol. |
| `embedding_client` | EmbeddingClientProtocol | LLM embedding service. Implements EmbeddingClientProtocol. |
| `vector_index` | VectorIndexProtocol | Vector similarity search index. Implements VectorIndexProtocol. |
| `similarity_threshold` | float | Minimum cosine similarity to accept a Tier 2 cache hit. Must be in [0, 1]. Defaults to 0.95. |
| `cache_ttl` | int | None | Optional TTL in seconds for cache entries. Passed to cache_backend.set(). Defaults to None (no expiry). |
| Exception | Description |
|---|---|
| ValueError | If similarity_threshold is not in [0, 1]. |
Look up a query in the cache.
Checks Tier 1 (exact hash) then Tier 2 (vector similarity).
| Parameter | Type | Description |
|---|---|---|
| `query` | str | The user query string. |
| Type | Description |
|---|---|
| str | None | Cached response string, or None on cache miss. |
Store a query-response pair in both tiers.
Stores in Tier 1 (hash key) and Tier 2 (vector index).
| Parameter | Type | Description |
|---|---|---|
| `query` | str | The user query string. |
| `response` | str | The LLM response to cache. |
| `model` | str | The model that produced the response (for logging/tracking). |
Invalidate a cached entry by query.
Removes from both Tier 1 (cache backend) and Tier 2 (vector index).
| Parameter | Type | Description |
|---|---|---|
| `query` | str | The user query string to invalidate. |
| Type | Description |
|---|---|
| bool | True if the entry was found and removed, False otherwise. |
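An end-to-end sketch of the three-tier flow. store() is documented above; the lookup method's name and the llm client are assumptions.

```python
async def answer(query: str, cache: SemanticCacheStore, llm) -> str:
    cached = await cache.lookup(query)        # assumed name; Tier 1 then Tier 2
    if cached is not None:
        return cached

    response = await llm.complete(query)      # placeholder LLM client
    await cache.store(query, response, model="placeholder-model")
    return response
```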
TaggedCacheKey
Functions
def cache( service: CacheService, key_prefix: str = 'cache', ttl: int | None = None, backend: str | None = None, protect: bool = True ) -> Callable[[Callable[Ellipsis, Any]], Callable[Ellipsis, Any]]
Decorator to cache function results with custom key generation.
| Parameter | Type | Description |
|---|---|---|
| `service` | CacheService | CacheService instance |
| `key_prefix` | str | Prefix for cache keys |
| `ttl` | int | None | Time-to-live in seconds |
| `backend` | str | None | Backend name |
| `protect` | bool | Enable stampede protection |
| Type | Description |
|---|---|
| Callable[[Callable[Ellipsis, Any]], Callable[Ellipsis, Any]] | Decorator function |
Example
```python
@cache(cache_service, key_prefix="api", ttl=300)
async def get_user_data(user_id: int):
    return await fetch_from_database(user_id)
```
cache_in_request
Cache the result of an async function for the duration of the current request.
The cache key is derived from the function’s qualified name and all
positional and keyword arguments. Arguments must support a stable
repr() so that logically equal calls produce the same key.
- Same request, same args → cached value returned; wrapped coroutine is not awaited again.
- Same request, different args → each distinct arg combination gets its own cache entry.
- Different requests → each request has an isolated contextvars.ContextVar context; they never share entries.
| Parameter | Type | Description |
|---|---|---|
| `func` | _F | The async callable to wrap. Must be a coroutine function. |
| Type | Description |
|---|---|
| _F | A coroutine function with identical signature that transparently caches results within the current request context. |
Example
```python
@cache_in_request
async def get_user_permissions(user_id: str) -> list[str]:
    return await db.fetch_permissions(user_id)
```

cacheable
def cacheable( ttl: int | None = 300, key_prefix: str = '', skip_on_error: bool = True ) -> Callable[[F], F]
Cache the return value of an async method using the injected CacheBackend.
The decorator resolves the cache backend from self._cache on the target
instance. The cache key is derived from the method name, prefix, and a
SHA-256 hash of the serialized arguments.
Supports methods that return Result[T, E]:
- `Ok(value)` results are cached; the inner value is unwrapped, serialised, and re-wrapped in `Ok` on retrieval.
- `Err(...)` results are never cached — errors propagate on every call.
Supports domain model return values (or Result-wrapped domain models):
- Domain models are serialised to a type-tagged JSON envelope and
reconstructed via
model_validateon retrieval, preserving type identity across the JSON round-trip.
| Parameter | Type | Description |
|---|---|---|
| `ttl` | int | None | Time-to-live in seconds. ``None`` disables expiry. |
| `key_prefix` | str | Optional prefix prepended to the generated cache key. |
| `skip_on_error` | bool | When ``True``, cache misses on backend errors gracefully fall through to the original method. When ``False``, errors propagate. |
Example
```python
class UserService:
    def __init__(self, cache: CacheBackendProtocol) -> None:
        self._cache = cache

    @cacheable(ttl=60, key_prefix="users")
    async def get_user(self, user_id: str) -> Result[User, DomainError]:
        return await self._repo.find(user_id)
```

clear_request_cache
Reset the current request’s cache to an empty state.
Call this at the end of a request (e.g. in ASGI middleware teardown) if you need an explicit eviction guarantee rather than relying on the context being discarded by the ASGI server.
conditional_cache
def conditional_cache( service: CacheService, condition_func: Callable[[Any], bool], key_prefix: str = 'cond', ttl: int | None = None, backend: str | None = None, protect: bool = True ) -> Callable[[Callable[Ellipsis, Any]], Callable[Ellipsis, Any]]
Decorator to cache function results only when condition is met.
| Parameter | Type | Description |
|---|---|---|
| `service` | CacheService | CacheService instance |
| `condition_func` | Callable[[Any], bool] | Function that returns True if result should be cached |
| `key_prefix` | str | Prefix for cache keys |
| `ttl` | int | None | Time-to-live in seconds |
| `backend` | str | None | Backend name |
| `protect` | bool | Enable stampede protection |
| Type | Description |
|---|---|
| Callable[[Callable[Ellipsis, Any]], Callable[Ellipsis, Any]] | Decorator function |
Example
```python
@conditional_cache(
    cache_service,
    lambda result: result is not None,
    ttl=300,
)
async def search_database(query: str):
    return await db.search(query)
```
get_request_cache
Return the current request’s cache dict, creating it if needed.
The first call within a new request context allocates a fresh dict
and stores it via ContextVar.set so that all subsequent calls in
the same context share it.
| Type | Description |
|---|---|
| dict[str, Any] | The mutable cache dict for this request context. |
invalidate_cache
def invalidate_cache( service: CacheService, key_pattern: str, backend: str | None = None ) -> Callable[[Callable[Ellipsis, Any]], Callable[Ellipsis, Any]]
Decorator to invalidate cache keys after function execution.
| Parameter | Type | Description |
|---|---|---|
| `service` | CacheService | CacheService instance |
| `key_pattern` | str | Pattern for keys to invalidate (supports wildcards) |
| `backend` | str | None | Backend name |
| Type | Description |
|---|---|
| Callable[[Callable[Ellipsis, Any]], Callable[Ellipsis, Any]] | Decorator function |
Example
```python
@invalidate_cache(cache_service, "user:*")
async def update_user(user_id: int, data: dict):
    return await db.update_user(user_id, data)
```
remember
def remember( service: CacheService, ttl: int | None = None, backend: str | None = None, protect: bool = True ) -> Callable[[Callable[Ellipsis, Any]], Callable[Ellipsis, Any]]
Decorator to remember function results (memoization).
Uses automatic key generation based on function signature.
| Parameter | Type | Description |
|---|---|---|
| `service` | CacheService | CacheService instance |
| `ttl` | int | None | Time-to-live in seconds |
| `backend` | str | None | Backend name |
| `protect` | bool | Enable stampede protection |
| Type | Description |
|---|---|
| Callable[[Callable[Ellipsis, Any]], Callable[Ellipsis, Any]] | Decorator function |
Example
```python
@remember(cache_service, ttl=600)
async def expensive_computation(x: int, y: int):
    return await slow_calculation(x, y)
```
Exceptions
CacheBackendError
Raised when the cache backend (Redis/Memcached) fails.
CacheCapacityError
Raised when the cache is at capacity.
CacheConfigurationError
Raised when cache configuration is invalid.
CacheConnectionError
Raised when the cache connection fails.
CacheError
Base exception for cache errors.
CacheInvalidationError
Raised when cache invalidation fails.
CacheKeyError
Raised when a cache key is invalid or for key-specific errors.
CacheSerializationError
Raised when serialization/deserialization fails.
CacheStampedeError
Raised when cache stampede protection fails.
CacheTimeoutError
Raised when a cache operation times out.