Governance API Reference¶
This page documents the complete API for Clearstone's governance system.
Core Module¶
clearstone.core.policy¶
PolicyEngine¶
The central engine that evaluates registered policies against a given context.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| policies | Optional[List[Callable]] | An optional list of decorated policy functions to use. If provided, this exact list is used and auto-discovery of other policies is skipped. If None (default), the engine auto-discovers all imported @Policy-decorated functions. | None |
| audit_trail | Optional[AuditTrail] | Optional AuditTrail instance. If None, a new one is created. | None |
| metrics | Optional[PolicyMetrics] | Optional PolicyMetrics instance. If None, a new one is created. | None |
evaluate(context=None)¶
Evaluates an action against all registered policies using a composable veto model.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| context | Optional[PolicyContext] | The PolicyContext for the evaluation. If None, it is retrieved from the active context variable scope. | None |
get_audit_trail(limit=100)¶
Returns the most recent audit trail entries.
PolicyInfo dataclass¶
Metadata about a registered policy function.
Policy(name, priority=0)¶
Decorator to register a function as a Clearstone policy.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | str | A unique, human-readable identifier for the policy. | required |
| priority | int | An integer determining execution order. Higher numbers run first. | 0 |
Example
```python
@Policy(name="block_admin_tools_for_guests", priority=100)
def my_policy(context: PolicyContext) -> Decision:
    if context.metadata.get("role") == "guest":
        return BLOCK("Guests cannot access admin tools.")
    return ALLOW
```
get_policies()¶
Returns all registered policies, sorted by priority (descending).
reset_policies()¶
Clears the global policy registry. Primarily for testing.
clearstone.core.actions¶
ActionType¶
Bases: Enum
Defines the set of possible outcomes from a policy evaluation. These are simple, stateless actions represented as singleton instances.
Decision dataclass¶
BLOCK(reason, **metadata)¶
Factory function to create a BLOCK decision. A reason is mandatory.
PAUSE(reason, intervention_id=None, **metadata)¶
Creates a PAUSE decision, signaling a need for human intervention. An intervention_id is automatically generated if not provided.
REDACT(reason, fields, **metadata)¶
Factory function to create a REDACT decision. A list of fields is mandatory.
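As a self-contained sketch of the factory semantics described above (stand-in types only; the real implementations in clearstone.core.actions may use different field names):

```python
# Illustrative stand-ins mirroring the documented factory semantics.
# NOT the real clearstone implementation; field names are assumptions.
import uuid
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional

@dataclass(frozen=True)
class Decision:
    action: str
    reason: Optional[str] = None
    metadata: Dict[str, Any] = field(default_factory=dict)

def BLOCK(reason: str, **metadata: Any) -> Decision:
    if not reason:
        raise ValueError("BLOCK requires a reason.")  # reason is mandatory
    return Decision("BLOCK", reason, metadata)

def PAUSE(reason: str, intervention_id: Optional[str] = None, **metadata: Any) -> Decision:
    # An intervention_id is generated automatically when not provided.
    metadata["intervention_id"] = intervention_id or uuid.uuid4().hex
    return Decision("PAUSE", reason, metadata)

def REDACT(reason: str, fields: List[str], **metadata: Any) -> Decision:
    if not fields:
        raise ValueError("REDACT requires a non-empty list of fields.")
    metadata["fields"] = list(fields)
    return Decision("REDACT", reason, metadata)
```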
clearstone.core.context¶
PolicyContext dataclass¶
Immutable execution context for a policy evaluation. It is propagated via contextvars, making it safe for threaded and asynchronous environments.
current() classmethod¶
Retrieves the current context from the context variable.
context_scope(context)¶
A context manager for safely setting and automatically resetting the policy context.
Usage
```python
ctx = create_context(...)
with context_scope(ctx):
    decision = policy_engine.evaluate()
```
create_context(user_id, agent_id, session_id=None, **metadata)¶
Factory function for conveniently creating a new PolicyContext instance.
get_current_context()¶
Retrieves the current policy context from the active scope. Returns None if no context has been set.
set_current_context(context)¶
Manually sets the current policy context for the active scope.
It is often safer to use the context_scope context manager.
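The propagation mechanism described above can be sketched directly with Python's contextvars. A simplified, self-contained illustration of the pattern (not the actual clearstone code):

```python
import contextvars
from contextlib import contextmanager

# Module-level context variable holding the active policy context.
_current_context = contextvars.ContextVar("policy_context", default=None)

@contextmanager
def context_scope(ctx):
    """Set ctx for the duration of the block, then restore the previous value."""
    token = _current_context.set(ctx)
    try:
        yield ctx
    finally:
        _current_context.reset(token)  # automatic reset, even on exceptions

def get_current_context():
    """Return the context for the active scope, or None if none is set."""
    return _current_context.get()
```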
clearstone.core.exceptions¶
Policy Engine¶
The PolicyEngine discovers, evaluates, and enforces policies at runtime.
Initialization¶
```python
from clearstone import PolicyEngine

# Auto-discovery mode (default)
engine = PolicyEngine()

# Explicit configuration mode
engine = PolicyEngine(policies=[policy1, policy2])

# With audit trail and metrics
from clearstone import AuditTrail, PolicyMetrics

audit = AuditTrail()
metrics = PolicyMetrics()
engine = PolicyEngine(
    policies=[policy1, policy2],  # Optional
    audit_trail=audit,
    metrics=metrics,
)
```
Parameters:
- policies (Optional[List[Callable]]): List of policy functions to use. If provided, only these policies are evaluated (no auto-discovery). If None (default), all imported @Policy-decorated functions are discovered automatically.
- audit_trail (Optional[AuditTrail]): Custom audit trail instance for logging decisions. If None, a new instance is created.
- metrics (Optional[PolicyMetrics]): Custom metrics instance for tracking performance. If None, a new instance is created.
Raises:
ValueError: If no valid policies are found (either through auto-discovery or the explicit list).
Pre-Built Policies¶
clearstone.policies.common¶
Pre-built policy library for common governance scenarios.
This module provides battle-tested policies for:
- Token & Cost Control
- Role-Based Access Control (RBAC)
- PII & Sensitive Data Protection
- Dangerous Operation Prevention
- Security Alerts
- Time-Based Restrictions
- Local System & Performance
admin_only_action_policy(context)¶
Block execution unless the user is an admin.
Example
```python
metadata = {
    "user_role": "user",
    "tool_name": "delete_all_users",
    "require_admin_for": ["delete_all_users", "export_database"],
}
```
alert_on_failed_auth_policy(context)¶
Alert on suspicious authentication failures.
Example
```python
metadata = {
    "auth_failed": True,
    "attempt_count": 5,
}
```
alert_on_privileged_access_policy(context)¶
Alert the security team when privileged tools are accessed.
Example
```python
metadata = {
    "tool_name": "export_all_data",
    "privileged_tools": ["export_all_data", "admin_console", "grant_permissions"],
}
```
block_dangerous_tools_policy(context)¶
Block inherently dangerous tools (delete, drop, truncate, etc.).
Example
```python
metadata = {
    "tool_name": "drop_table",
}
```
block_external_apis_policy(context)¶
Block calls to external APIs that are not whitelisted.
Example
```python
metadata = {
    "tool_name": "call_third_party_api",
    "external_api_tools": ["call_third_party_api", "fetch_weather"],
    "whitelisted_apis": ["fetch_weather"],
}
```
block_pii_tools_policy(context)¶
Block access to PII-sensitive tools for non-privileged users.
Example
```python
metadata = {
    "user_role": "guest",
    "tool_name": "fetch_ssn",
    "pii_tools": ["fetch_ssn", "get_credit_card", "view_medical_records"],
}
```
business_hours_only_policy(context)¶
Block execution outside of business hours.
Example
```python
metadata = {
    "current_hour": 22,
    "business_hours": (9, 17),
}
```
create_audit_mode_policies()¶
Create policies for full audit logging.
Combines:
- Alert on all privileged access
- Alert on failed auth
- RBAC enforcement
- Rate limiting
Returns:
| Type | Description |
|---|---|
| List[Callable] | List of policy functions |
create_cost_control_policies()¶
Create policies for strict cost control.
Combines:
- Token limits
- Session cost limits
- Daily cost limits
- High cost approval
Returns:
| Type | Description |
|---|---|
| List[Callable] | List of policy functions |
create_data_protection_policies()¶
Create policies for sensitive data protection.
Combines:
- PII redaction
- Block PII tools for non-privileged users
- Admin-only access to sensitive data
Returns:
| Type | Description |
|---|---|
| List[Callable] | List of policy functions |
create_safe_mode_policies()¶
Create a set of policies for 'safe mode' (conservative execution).
Combines:
- Block dangerous tools
- Pause before writes
- Token limits
- Alert on privileged access
Returns:
| Type | Description |
|---|---|
| List[Callable] | List of policy functions ready to be used |
create_security_policies()¶
Create comprehensive security policies.
Combines:
- RBAC
- Admin-only actions
- Block PII tools
- Alert on failed auth
- Alert on privileged access
- Block dangerous tools
Returns:
| Type | Description |
|---|---|
| List[Callable] | List of policy functions |
daily_cost_limit_policy(context)¶
Block execution if daily cost exceeds a threshold.
Example
```python
metadata = {
    "daily_cost_limit": 1000.0,
    "daily_cost": 1250.0,
}
```
model_health_check_policy(context)¶
Performs a quick health check on a local model server endpoint before allowing an LLM call to proceed.
This prevents wasted time and retry loops when a local model server (Ollama, LM Studio, etc.) is down or unhealthy.
Example
```python
metadata = {
    "local_model_health_url": "http://localhost:11434/api/tags",
    "health_check_timeout": 1.0,
}
```
pause_before_write_policy(context)¶
Pause (for manual review) before any write/delete operation.
Example
```python
metadata = {
    "tool_name": "delete_user",
    "require_pause_for": ["create", "update", "delete", "modify"],
}
```
rate_limit_policy(context)¶
Block execution if the rate limit is exceeded.
Example
```python
metadata = {
    "rate_limit": 100,
    "rate_count": 105,
}
```
rbac_tool_access_policy(context)¶
Block tool access based on user role.
Example
```python
metadata = {
    "user_role": "guest",
    "tool_name": "delete_database",
    "restricted_tools": {
        "guest": ["delete_database", "admin_panel"],
        "user": ["admin_panel"],
    },
}
```
redact_pii_policy(context)¶
Redact PII fields from outputs automatically.
Example
```python
metadata = {
    "tool_name": "fetch_user_data",
    "pii_fields": {
        "fetch_user_data": ["ssn", "credit_card", "email"],
        "get_medical_records": ["diagnosis", "prescription"],
    },
}
```
require_approval_for_high_cost_policy(context)¶
Pause for approval if the operation cost exceeds a threshold.
Example
```python
metadata = {
    "operation_cost": 25.0,
    "high_cost_threshold": 10.0,
}
```
session_cost_limit_policy(context)¶
Alert if the session cost exceeds a threshold.
Example
```python
metadata = {
    "session_cost_limit": 50.0,
    "session_cost": 55.0,
}
```
system_load_policy(context)¶
Blocks new, intensive actions if the local system's CPU or memory is overloaded. This is a critical guardrail for users running local LLMs to prevent system freezes.
token_limit_policy(context)¶
Block execution if token usage exceeds a threshold.
Example
```python
metadata = {
    "token_limit": 5000,
    "tokens_used": 6000,
}
```
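The pre-built policies above share a common shape: read configuration and current state from context.metadata with safe defaults, then return a decision. A hand-rolled sketch of the token-limit check (stand-in context object and string decisions for illustration; the real policy returns clearstone Decision objects):

```python
from types import SimpleNamespace

def token_limit_sketch(context):
    """Illustrative threshold check mirroring token_limit_policy's metadata keys."""
    limit = context.metadata.get("token_limit")    # policy configuration
    used = context.metadata.get("tokens_used", 0)  # current usage state
    if limit is not None and used > limit:
        return f"BLOCK: token usage {used} exceeds limit {limit}"
    return "ALLOW"

# A minimal stand-in context; the real PolicyContext is an immutable dataclass.
ctx = SimpleNamespace(metadata={"token_limit": 5000, "tokens_used": 6000})
print(token_limit_sketch(ctx))  # BLOCK: token usage 6000 exceeds limit 5000
```

Note the `.get()` calls with defaults: policies written this way pass the validator's exception-safety check even when metadata keys are missing.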
Utility Functions¶
compose_and¶
Compose multiple policies with AND logic.
```python
from clearstone import compose_and
from clearstone.policies.common import token_limit_policy, daily_cost_limit_policy

combined = compose_and(token_limit_policy, daily_cost_limit_policy)
```
clearstone.utils.composition.compose_and(*policies)¶
Creates a new composite policy where ALL underlying policies must ALLOW an action.
This is a fail-safe composition. The moment any policy returns a BLOCK, the entire composition immediately returns that BLOCK decision and stops further evaluation.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| *policies | Callable[[PolicyContext], Decision] | A sequence of policy functions to compose. | () |
Returns:
| Type | Description |
|---|---|
| Callable[[PolicyContext], Decision] | A new policy function that can be used by the PolicyEngine. |
Example
```python
combined = compose_and(token_limit_policy, rbac_policy, business_hours_policy)

@Policy(name="combined_policy", priority=100)
def my_policy(context):
    return combined(context)
```
compose_or¶
Compose multiple policies with OR logic.
clearstone.utils.composition.compose_or(*policies)¶
Creates a new composite policy where ANY of the underlying policies can ALLOW an action.
This composition returns the decision of the first policy that does not BLOCK. If all policies return BLOCK, it returns the decision of the first policy.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| *policies | Callable[[PolicyContext], Decision] | A sequence of policy functions to compose. | () |
Returns:
| Type | Description |
|---|---|
| Callable[[PolicyContext], Decision] | A new policy function. |
Example
```python
either = compose_or(admin_access_policy, emergency_override_policy)

@Policy(name="flexible_access", priority=90)
def my_policy(context):
    return either(context)
```
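The two composition semantics documented above can be illustrated self-contained. This sketch uses plain dicts in place of the real Decision objects and reflects one reading of the documented behavior:

```python
def compose_and_sketch(*policies):
    """ALL policies must allow; the first BLOCK short-circuits (fail-safe)."""
    def combined(context):
        decision = {"action": "ALLOW"}
        for policy in policies:
            decision = policy(context)
            if decision["action"] == "BLOCK":
                return decision  # stop evaluating at the first BLOCK
        return decision
    return combined

def compose_or_sketch(*policies):
    """Return the first non-BLOCK decision; if every policy blocks, the first one."""
    def combined(context):
        first = None
        for policy in policies:
            decision = policy(context)
            if first is None:
                first = decision
            if decision["action"] != "BLOCK":
                return decision
        return first
    return combined
```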
Developer Tools¶
PolicyValidator¶
Validate policies before deployment.
```python
from clearstone import PolicyValidator

validator = PolicyValidator()
failures = validator.run_all_checks(my_policy)
```
clearstone.utils.validator.PolicyValidator¶
A tool for running pre-deployment checks on policy functions to ensure they are safe, performant, and deterministic.
Example
```python
validator = PolicyValidator()
failures = validator.run_all_checks(my_policy)
if failures:
    print("Policy validation failed:")
    for failure in failures:
        print(f"  - {failure}")
```
__init__(default_context=None)¶
Initializes the validator.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| default_context | PolicyContext | A sample PolicyContext to use for tests. If None, a generic one will be created. | None |
run_all_checks(policy)¶
Runs all validation checks on a single policy and returns a list of failures.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| policy | Callable[[PolicyContext], Decision] | The policy function to validate. | required |
Returns:
| Type | Description |
|---|---|
| List[str] | A list of strings, where each string is an error message. An empty list means all checks passed. |
Example
```python
validator = PolicyValidator()
failures = validator.run_all_checks(my_policy)
if not failures:
    print("All checks passed!")
```
validate_determinism(policy, num_runs=5)¶
Checks if a policy returns the same output for the same input. This catches policies that rely on non-deterministic functions (e.g., random, datetime.now()).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| policy | Callable[[PolicyContext], Decision] | The policy function to validate. | required |
| num_runs | int | Number of times to run the policy to check consistency. | 5 |
Raises:
| Type | Description |
|---|---|
| PolicyValidationError | If the policy produces different decisions for the same context. |
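The core of such a determinism check can be sketched in a few lines (a simplified stand-in, not the validator's actual implementation):

```python
import itertools

def check_determinism(policy, context, num_runs=5):
    """Run the policy repeatedly against the same context and flag any drift."""
    results = [policy(context) for _ in range(num_runs)]
    if any(result != results[0] for result in results[1:]):
        raise ValueError(f"policy returned inconsistent decisions: {results}")
    return results[0]

# A policy that reads external mutable state is non-deterministic
# and would be caught by the check above.
_counter = itertools.count()
def drifting_policy(context):
    return f"decision-{next(_counter)}"
```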
validate_exception_safety(policy)¶
Checks if a policy crashes when given a context with missing metadata. A safe policy should handle missing keys gracefully (e.g., using .get() with defaults).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| policy | Callable[[PolicyContext], Decision] | The policy function to validate. | required |
Raises:
| Type | Description |
|---|---|
| PolicyValidationError | If the policy raises an unexpected exception. |
validate_performance(policy, max_latency_ms=1.0, num_runs=1000)¶
Checks if a policy executes within a given latency budget. This catches slow policies that might perform network requests or heavy computation.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| policy | Callable[[PolicyContext], Decision] | The policy function to validate. | required |
| max_latency_ms | float | Maximum acceptable average latency in milliseconds. | 1.0 |
| num_runs | int | Number of runs to average over. | 1000 |
Raises:
| Type | Description |
|---|---|
| PolicyValidationError | If the policy's average execution time exceeds the threshold. |
PolicyDebugger¶
Debug policy decision-making.
```python
from clearstone import PolicyDebugger

debugger = PolicyDebugger()
decision, trace = debugger.trace_evaluation(my_policy, context)
```
clearstone.utils.debugging.PolicyDebugger¶
Provides tools to trace the execution of a single policy function, offering insight into its decision-making process.
Example
```python
debugger = PolicyDebugger()
decision, trace = debugger.trace_evaluation(my_policy, context)
print(debugger.format_trace(my_policy, decision, trace))
```
__init__()¶
Initializes the PolicyDebugger.
format_trace(policy, decision, trace)¶
Formats the output of a trace_evaluation into a human-readable string.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| policy | Callable | The policy function that was traced. | required |
| decision | Decision | The final decision from the policy. | required |
| trace | List[Dict[str, Any]] | The trace events from trace_evaluation. | required |
Returns:
| Type | Description |
|---|---|
| str | A formatted string showing the execution path. |
Example
```python
formatted = debugger.format_trace(my_policy, decision, trace)
print(formatted)
```
trace_evaluation(policy, context)¶
Executes a policy and records each line of code that runs, along with the state of local variables at that line.
This uses Python's sys.settrace for a robust, line-by-line trace.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| policy | Callable[[PolicyContext], Decision] | The policy function to debug. | required |
| context | PolicyContext | The PolicyContext to run the policy against. | required |
Returns:
| Type | Description |
|---|---|
| Tuple[Decision, List[Dict[str, Any]]] | A tuple containing the final Decision and the list of trace events. |
Example
```python
decision, trace = debugger.trace_evaluation(my_policy, ctx)
for event in trace:
    print(f"Line {event['line_no']}: {event['line_text']}")
```
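The sys.settrace mechanism mentioned above can be illustrated with a minimal, self-contained line tracer (greatly simplified; the real debugger also captures local-variable state per line):

```python
import sys

def trace_lines(fn, *args):
    """Run fn and record the line numbers executed inside its body."""
    events = []

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is fn.__code__:
            events.append(frame.f_lineno)
        return tracer  # keep receiving per-line events

    sys.settrace(tracer)
    try:
        result = fn(*args)
    finally:
        sys.settrace(None)  # always detach the tracer
    return result, events
```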
PolicyMetrics¶
Track policy performance metrics.
```python
from clearstone import PolicyMetrics

metrics = PolicyMetrics()
engine = PolicyEngine(metrics=metrics)
```
clearstone.utils.metrics.PolicyMetrics¶
A simple, in-memory collector for policy performance and decision metrics. This class is zero-dependency and designed for local-first analysis.
get_slowest_policies(top_n=5)¶
Returns the top N policies sorted by average latency.
get_top_blocking_policies(top_n=5)¶
Returns the top N policies that blocked most often.
record(policy_name, decision, latency_ms)¶
Records a single policy evaluation event.
summary()¶
Returns a summary of all collected metrics, calculating averages.
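A minimal stand-in showing the kind of bookkeeping such a collector performs (illustrative only; the real PolicyMetrics may track additional statistics):

```python
from collections import defaultdict

class MiniMetrics:
    """Toy in-memory collector mirroring the documented method names."""

    def __init__(self):
        self._latencies = defaultdict(list)  # policy_name -> [latency_ms, ...]
        self._blocks = defaultdict(int)      # policy_name -> BLOCK count

    def record(self, policy_name, decision, latency_ms):
        self._latencies[policy_name].append(latency_ms)
        if decision == "BLOCK":
            self._blocks[policy_name] += 1

    def get_slowest_policies(self, top_n=5):
        averages = {name: sum(ls) / len(ls) for name, ls in self._latencies.items()}
        return sorted(averages.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

    def get_top_blocking_policies(self, top_n=5):
        return sorted(self._blocks.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

    def summary(self):
        total = sum(len(ls) for ls in self._latencies.values())
        return {"total_evaluations": total, "total_blocks": sum(self._blocks.values())}
```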
AuditTrail¶
Generate exportable audit logs.
clearstone.utils.audit.AuditTrail¶
Captures and provides utilities for analyzing a sequence of policy decisions.
Example
```python
audit = AuditTrail()
engine = PolicyEngine(audit_trail=audit)
# ... run policies ...
print(audit.summary())
audit.to_json("audit_log.json")
```
get_entries(limit=0)¶
Returns the recorded audit entries.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| limit | int | If > 0, returns only the last N entries. If 0, returns all. | 0 |
Returns:
| Type | Description |
|---|---|
| List[Dict[str, Any]] | List of audit entry dictionaries. |
record_decision(policy_name, context, decision, error=None)¶
Records a single policy evaluation event.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| policy_name | str | Name of the policy that made the decision. | required |
| context | PolicyContext | The PolicyContext for this evaluation. | required |
| decision | Decision | The Decision returned by the policy. | required |
| error | str | Optional error message if the policy raised an exception. | None |
summary()¶
Calculates and returns a summary of the decisions in the trail.
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | Dictionary with summary statistics. |
to_csv(filepath, **kwargs)¶
Exports the audit trail to a CSV file.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| filepath | str | Path to the output CSV file. | required |
| **kwargs | | Additional arguments passed to csv.DictWriter. | {} |
Example
```python
audit.to_csv("audit_log.csv")
```
to_json(filepath, **kwargs)¶
Exports the audit trail to a JSON file.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| filepath | str | Path to the output JSON file. | required |
| **kwargs | | Additional arguments passed to json.dump(). | {} |
Example
```python
audit.to_json("audit_log.json", indent=2)
```