Utilities API Reference
This page documents utility modules and helper functions.
Policy Composition
Utilities for composing multiple policies into complex logic.
clearstone.utils.composition
Policy composition utilities for combining multiple policies with logical operators.
compose_and(*policies)
Creates a new composite policy where ALL underlying policies must ALLOW an action.
This is a fail-safe composition. The moment any policy returns a BLOCK, the entire composition immediately returns that BLOCK decision and stops further evaluation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `*policies` | `Callable[[PolicyContext], Decision]` | A sequence of policy functions to compose. | `()` |
Returns:

| Type | Description |
|---|---|
| `Callable[[PolicyContext], Decision]` | A new policy function that can be used by the PolicyEngine. |
Example

```python
combined = compose_and(token_limit_policy, rbac_policy, business_hours_policy)

@Policy(name="combined_policy", priority=100)
def my_policy(context):
    return combined(context)
```
compose_or(*policies)
Creates a new composite policy where ANY of the underlying policies can ALLOW an action.
This composition returns the decision of the first policy that does not BLOCK. If every policy returns BLOCK, it returns the first policy's BLOCK decision.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `*policies` | `Callable[[PolicyContext], Decision]` | A sequence of policy functions to compose. | `()` |
Returns:

| Type | Description |
|---|---|
| `Callable[[PolicyContext], Decision]` | A new policy function. |
Example

```python
either = compose_or(admin_access_policy, emergency_override_policy)

@Policy(name="flexible_access", priority=90)
def my_policy(context):
    return either(context)
```
Policy Validation
Tools for validating policies before deployment.
clearstone.utils.validator
Policy validation tools for pre-deployment checks.
This module provides utilities to validate that policies are safe, performant, and deterministic before deploying them to production.
PolicyValidationError
Bases: AssertionError
Custom exception for policy validation failures.
PolicyValidator
A tool for running pre-deployment checks on policy functions to ensure they are safe, performant, and deterministic.
Example

```python
validator = PolicyValidator()
failures = validator.run_all_checks(my_policy)
if failures:
    print("Policy validation failed:")
    for failure in failures:
        print(f" - {failure}")
```
__init__(default_context=None)
Initializes the validator.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `default_context` | `PolicyContext` | A sample PolicyContext to use for tests. If None, a generic one will be created. | `None` |
run_all_checks(policy)
Runs all validation checks on a single policy and returns a list of failures.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `policy` | `Callable[[PolicyContext], Decision]` | The policy function to validate. | required |
Returns:

| Type | Description |
|---|---|
| `List[str]` | A list of strings, where each string is an error message. An empty list means all checks passed. |
Example

```python
validator = PolicyValidator()
failures = validator.run_all_checks(my_policy)
if not failures:
    print("All checks passed!")
```
validate_determinism(policy, num_runs=5)
Checks if a policy returns the same output for the same input. This catches policies that rely on non-deterministic functions (e.g., random, datetime.now()).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `policy` | `Callable[[PolicyContext], Decision]` | The policy function to validate. | required |
| `num_runs` | `int` | Number of times to run the policy to check consistency. | `5` |
Raises:

| Type | Description |
|---|---|
| `PolicyValidationError` | If the policy produces different decisions for the same context. |
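For illustration, a policy that branches on wall-clock time will fail this check. A minimal sketch, assuming `ALLOW` and `BLOCK` decision constructors are importable from the package (the policy below is hypothetical):

```python
from datetime import datetime

def time_based_policy(context):
    # Non-deterministic: the decision flips with the current second.
    if datetime.now().second % 2 == 0:
        return ALLOW  # assumed decision constructor
    return BLOCK

validator = PolicyValidator()
try:
    validator.validate_determinism(time_based_policy, num_runs=10)
except PolicyValidationError as e:
    print(f"Non-determinism detected: {e}")
```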
validate_exception_safety(policy)
Checks if a policy crashes when given a context with missing metadata. A safe policy should handle missing keys gracefully (e.g., using .get() with defaults).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `policy` | `Callable[[PolicyContext], Decision]` | The policy function to validate. | required |
Raises:

| Type | Description |
|---|---|
| `PolicyValidationError` | If the policy raises an unexpected exception. |
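The contrast below shows the failure mode this check targets. A minimal sketch, assuming the context exposes a metadata mapping (as the description above implies) and `ALLOW`/`BLOCK` decision constructors:

```python
def unsafe_policy(context):
    # Raises KeyError when "user_role" is absent from the metadata.
    if context.metadata["user_role"] == "admin":
        return ALLOW  # assumed decision constructor
    return BLOCK

def safe_policy(context):
    # Handles the missing key gracefully with a restrictive default.
    if context.metadata.get("user_role") == "admin":
        return ALLOW
    return BLOCK
```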
validate_performance(policy, max_latency_ms=1.0, num_runs=1000)
Checks if a policy executes within a given latency budget. This catches slow policies that might perform network requests or heavy computation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `policy` | `Callable[[PolicyContext], Decision]` | The policy function to validate. | required |
| `max_latency_ms` | `float` | Maximum acceptable average latency in milliseconds. | `1.0` |
| `num_runs` | `int` | Number of runs to average over. | `1000` |
Raises:

| Type | Description |
|---|---|
| `PolicyValidationError` | If the policy's average execution time exceeds the threshold. |
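A usage sketch with a tightened budget; `my_policy` stands in for any policy function:

```python
validator = PolicyValidator()
try:
    # Enforce a 0.5 ms average budget, averaged over 2000 runs.
    validator.validate_performance(my_policy, max_latency_ms=0.5, num_runs=2000)
except PolicyValidationError as e:
    print(f"Policy exceeded its latency budget: {e}")
```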
Policy Debugging
Tools for debugging policy decision-making.
clearstone.utils.debugging
Policy debugging tools for tracing execution and understanding policy decisions.
PolicyDebugger
Provides tools to trace the execution of a single policy function, offering insight into its decision-making process.
Example

```python
debugger = PolicyDebugger()
decision, trace = debugger.trace_evaluation(my_policy, context)
print(debugger.format_trace(my_policy, decision, trace))
```
__init__()
Initializes the PolicyDebugger.
format_trace(policy, decision, trace)
Formats the output of a trace_evaluation into a human-readable string.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `policy` | `Callable` | The policy function that was traced. | required |
| `decision` | `Decision` | The final decision from the policy. | required |
| `trace` | `List[Dict[str, Any]]` | The trace events from trace_evaluation. | required |
Returns:

| Type | Description |
|---|---|
| `str` | A formatted string showing the execution path. |
Example

```python
formatted = debugger.format_trace(my_policy, decision, trace)
print(formatted)
```
trace_evaluation(policy, context)
Executes a policy and records each line of code that runs, along with the state of local variables at that line.
This uses Python's sys.settrace for a robust, line-by-line trace.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `policy` | `Callable[[PolicyContext], Decision]` | The policy function to debug. | required |
| `context` | `PolicyContext` | The PolicyContext to run the policy against. | required |
Returns:

| Type | Description |
|---|---|
| `Tuple[Decision, List[Dict[str, Any]]]` | A tuple containing the final Decision and the list of trace events. |
Example

```python
decision, trace = debugger.trace_evaluation(my_policy, ctx)
for event in trace:
    print(f"Line {event['line_no']}: {event['line_text']}")
```
Policy Metrics
Performance tracking for policies.
clearstone.utils.metrics
Policy performance and decision metrics collector.
PolicyMetrics
A simple, in-memory collector for policy performance and decision metrics. This class is zero-dependency and designed for local-first analysis.
get_slowest_policies(top_n=5)
Returns the top N policies sorted by average latency.
get_top_blocking_policies(top_n=5)
Returns the top N policies that blocked most often.
record(policy_name, decision, latency_ms)
Records a single policy evaluation event.
summary()
Returns a summary of all collected metrics, calculating averages.
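A usage sketch tying the four methods together; the policy names and latencies are illustrative, and `decision` stands for whatever Decision each policy returned:

```python
metrics = PolicyMetrics()

# Record one event per policy evaluation.
metrics.record("rbac_policy", decision, latency_ms=0.42)
metrics.record("token_limit_policy", decision, latency_ms=1.87)

print(metrics.summary())                       # aggregate stats with averages
print(metrics.get_slowest_policies(top_n=3))   # ranked by average latency
print(metrics.get_top_blocking_policies(top_n=3))
```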
Audit Trail
Audit logging for compliance and analysis.
clearstone.utils.audit
Audit trail utilities for capturing and analyzing policy decisions.
AuditTrail
Captures and provides utilities for analyzing a sequence of policy decisions.
Example

```python
audit = AuditTrail()
engine = PolicyEngine(audit_trail=audit)
# ... run policies ...
print(audit.summary())
audit.to_json("audit_log.json")
```
get_entries(limit=0)
Returns the recorded audit entries.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `limit` | `int` | If > 0, returns only the last N entries. If 0, returns all. | `0` |
Returns:

| Type | Description |
|---|---|
| `List[Dict[str, Any]]` | List of audit entry dictionaries. |
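For example, to inspect only the most recent activity:

```python
# Fetch the last 10 entries; get_entries() with no arguments returns all.
for entry in audit.get_entries(limit=10):
    print(entry)
```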
record_decision(policy_name, context, decision, error=None)
Records a single policy evaluation event.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `policy_name` | `str` | Name of the policy that made the decision. | required |
| `context` | `PolicyContext` | The PolicyContext for this evaluation. | required |
| `decision` | `Decision` | The Decision returned by the policy. | required |
| `error` | `str` | Optional error message if the policy raised an exception. | `None` |
summary()
Calculates and returns a summary of the decisions in the trail.
Returns:

| Type | Description |
|---|---|
| `Dict[str, Any]` | Dictionary with summary statistics. |
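A usage sketch; the exact keys of the summary dictionary are not documented here, so the example simply iterates over them:

```python
stats = audit.summary()
for key, value in stats.items():
    print(f"{key}: {value}")
```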
to_csv(filepath, **kwargs)
Exports the audit trail to a CSV file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `filepath` | `str` | Path to the output CSV file. | required |
| `**kwargs` |  | Additional arguments passed to csv.DictWriter. | `{}` |
Example

```python
audit.to_csv("audit_log.csv")
```
to_json(filepath, **kwargs)
Exports the audit trail to a JSON file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `filepath` | `str` | Path to the output JSON file. | required |
| `**kwargs` |  | Additional arguments passed to json.dump(). | `{}` |
Example

```python
audit.to_json("audit_log.json", indent=2)
```
Serialization
Hybrid JSON/pickle serialization utilities.
clearstone.serialization.hybrid
HybridSerializer
Bases: SerializationStrategy
Hybrid serialization strategy: Attempts JSON, falls back to pickle. This provides a balance of safety, portability, and fidelity.
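The idea is simple to picture. A standalone sketch of the strategy (not the class's actual method signatures, which follow the SerializationStrategy interface):

```python
import json
import pickle

def hybrid_serialize(obj):
    """Prefer portable, human-readable JSON; fall back to pickle for fidelity."""
    try:
        return ("json", json.dumps(obj))
    except (TypeError, ValueError):
        return ("pickle", pickle.dumps(obj))
```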
SelectiveSnapshotCapture
A utility for safely capturing snapshots of data for traces, with a configurable size limit to prevent storing excessively large objects.
capture(obj, max_size_bytes=None)
staticmethod
Captures a snapshot of an object, respecting size limits.
Returns:

| Type | Description |
|---|---|
| `Dict[str, Any]` | A dictionary indicating if the capture was successful, and either the serialized data or the reason for failure. |
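A usage sketch; the exact keys of the returned dictionary are not documented here, so the example only prints it:

```python
# Refuse snapshots larger than ~4 KB (illustrative limit).
snapshot = SelectiveSnapshotCapture.capture(large_object, max_size_bytes=4096)
print(snapshot)
```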
Human Intervention
Tools for human-in-the-loop workflows.
clearstone.utils.intervention
InterventionClient
A simple client for handling Human-in-the-Loop (HITL) interventions. This default implementation uses command-line input/output.
request_intervention(decision)
Logs a PAUSE decision as a pending intervention request.
wait_for_approval(intervention_id, prompt=None)
Waits for a human to approve or reject a pending intervention via the CLI.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `intervention_id` | `str` | The unique ID of the intervention to wait for. | required |
| `prompt` | `str` | The message to display to the user. | `None` |
Returns:

| Type | Description |
|---|---|
| `bool` | True if the action was approved, False otherwise. |
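A usage sketch, assuming `request_intervention` returns the new intervention's ID and that `pause_decision` is a PAUSE decision produced by the engine:

```python
client = InterventionClient()

# Log the engine's PAUSE decision as a pending request...
intervention_id = client.request_intervention(pause_decision)  # return value assumed

# ...then block until a human responds at the command line.
if client.wait_for_approval(intervention_id, prompt="Approve this action?"):
    print("Approved; resuming.")
else:
    print("Rejected; aborting.")
```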
Telemetry
Anonymous usage statistics (opt-out available).
clearstone.utils.telemetry
TelemetryManager
Manages the collection and sending of anonymous, opt-out usage data. Designed to be transparent and respectful of privacy, with zero performance impact.
record_event(event_name, payload)
Records a telemetry event to be sent in the background.
get_telemetry_manager()
Gets the global singleton TelemetryManager instance.
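A usage sketch; the event name and payload fields are illustrative:

```python
tm = get_telemetry_manager()
# Events are queued and sent in the background.
tm.record_event("policy_evaluated", {"policy": "rbac_policy", "decision": "ALLOW"})
```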
Path Utilities
File system path utilities.
clearstone.utils.paths
Logging
Logging configuration and utilities.