Governance Policies
Rules that define what AI agents can and cannot do within your organization.
What is a Policy?
A policy is a governance rule that Fulcrum evaluates against every AI agent action. Policies define:
- What to check - Tool names, models, input content, cost
- When to trigger - Conditions that must match
- What to do - Allow, deny, warn, or require approval
Policies enable you to set organizational boundaries without modifying agent code.
Why Policies Matter
AI agents are powerful but unpredictable. Policies provide guardrails:
| Risk | Policy Solution |
|---|---|
| Agents executing dangerous commands | Deny specific tools (bash, rm, etc.) |
| Runaway costs | Set budget limits per request |
| Data exfiltration | Restrict bulk data operations |
| Unauthorized access | Require approval for sensitive actions |
| Malicious prompts | AI-powered intent detection |
Policy Types
Cost Limit
Cap spending per request or time period:
Use cases:
- Prevent expensive model calls
- Enforce team budgets
- Alert on cost anomalies
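As a sketch, a cost-limit policy could be written in the same YAML form used elsewhere on this page. The `max_cost_per_request` field name is an assumption for illustration, not a documented Fulcrum schema:

```yaml
type: cost_limit
rules:
  max_cost_per_request: 0.10   # USD; field name is an assumption
  action: DENY
```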
Rate Limit
Control request frequency:
Use cases:
- Prevent API abuse
- Protect downstream services
- Enforce fair usage
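A rate-limit policy might look like the following sketch; the `max_requests` and `window_seconds` field names are assumptions, not a documented schema:

```yaml
type: rate_limit
rules:
  max_requests: 60      # field names are assumptions
  window_seconds: 60
  action: DENY
```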
Tool Restriction
Block or allow specific tools:
Use cases:
- Prevent shell access
- Block destructive operations
- Restrict to approved tools
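The API example later on this page configures this type with a `blocked_tools` list; expressed as YAML, a tool-restriction policy could look like:

```yaml
type: tool_restriction
rules:
  blocked_tools:
    - bash
    - sh
  action: DENY
```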
Model Restriction
Control which AI models can be used:
Use cases:
- Enforce an approved model list
- Control costs via model selection
- Meet compliance requirements
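A model-restriction policy might be sketched as an allowlist; both the `allowed_models` field name and the model ID shown are illustrative assumptions:

```yaml
type: model_restriction
rules:
  allowed_models:        # field name is an assumption
    - gpt-4
  action: DENY           # deny any model not on the list
```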
Content Filter
Detect and block sensitive content:
Use cases:
- Prevent credential leakage
- Block PII in prompts
- Filter inappropriate content
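A content-filter policy could be sketched with a list of regex patterns to block; the `blocked_patterns` field name and the patterns shown are assumptions for illustration:

```yaml
type: content_filter
rules:
  blocked_patterns:                    # field name is an assumption
    - "(?i)aws_secret_access_key"      # credential marker
    - "-----BEGIN PRIVATE KEY-----"    # private key header
  action: DENY
```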
Human-in-the-Loop
Require approval for sensitive actions:
```yaml
type: approval_required
rules:
  trigger_actions:
    - delete_*
    - modify_production_*
  action: REQUIRE_APPROVAL
```
Use cases:
- High-risk operations
- Compliance requirements
- Learning/training period
Policy Structure
Every policy has these components:
```json
{
  "policy_id": "pol_abc123",
  "name": "Production Safety Policy",
  "status": "ACTIVE",
  "priority": 100,
  "rules": [
    {
      "rule_id": "rule-1",
      "conditions": [...],
      "action": "DENY"
    }
  ]
}
```
Fields
| Field | Description |
|---|---|
| policy_id | Unique identifier |
| name | Human-readable name |
| status | ACTIVE, DISABLED, or DRAFT |
| priority | Higher = evaluated first (1-1000) |
| rules | Array of rule definitions |
Conditions
Conditions define when a rule triggers:
| Operator | Description | Example |
|---|---|---|
| EQUALS | Exact match | tool_name = "bash" |
| NOT_EQUALS | Not equal | model_id != "gpt-3.5" |
| CONTAINS | Substring match | input_text contains "delete" |
| REGEX | Pattern match | tool_name matches "^shell.*" |
| GREATER_THAN | Numeric comparison | estimated_cost > 0.10 |
| LESS_THAN | Numeric comparison | token_count < 10000 |
| IN | List membership | tool_name in ["bash", "sh"] |
| SEMANTIC | AI-powered intent | input_text semantic "destructive" |
Actions
| Action | Behavior |
|---|---|
| ALLOW | Explicitly permit (overrides lower-priority denials) |
| DENY | Block execution |
| WARN | Allow but log a warning |
| REQUIRE_APPROVAL | Pause for human approval |
Policy Evaluation
When an envelope is created, Fulcrum evaluates policies in priority order:
1. Sort policies by priority (highest first)
2. For each policy:
   a. Evaluate all conditions
   b. If all conditions match:
      - Apply the action
      - If DENY: stop evaluation
      - If REQUIRE_APPROVAL: queue for review
3. Default: ALLOW if no rules match
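The priority-ordered evaluation above can be sketched in Python. This is a simplified model of the described behavior, not Fulcrum's actual implementation; conditions are represented as callables on the evaluation context:

```python
def evaluate(policies, context):
    """Return the effective action for a request context.

    policies: dicts with "priority", "status", and "rules"; each rule
    has "conditions" (callables on context) and an "action".
    """
    decision = "ALLOW"  # default when no rule matches
    for policy in sorted(policies, key=lambda p: p["priority"], reverse=True):
        if policy.get("status") != "ACTIVE":
            continue  # skip DISABLED and DRAFT policies
        for rule in policy["rules"]:
            if all(cond(context) for cond in rule["conditions"]):
                action = rule["action"]
                if action == "WARN":
                    decision = "WARN"  # allow but record a warning
                else:
                    # ALLOW, DENY, and REQUIRE_APPROVAL are terminal:
                    # an explicit ALLOW overrides lower-priority denials
                    return action
    return decision

# Example: a high-priority rule denying the bash tool
policies = [
    {"priority": 100, "status": "ACTIVE", "rules": [
        {"conditions": [lambda ctx: ctx["tool_name"] == "bash"],
         "action": "DENY"},
    ]},
]
print(evaluate(policies, {"tool_name": "bash"}))  # DENY
print(evaluate(policies, {"tool_name": "read"}))  # ALLOW
```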
Evaluation Context
Policies can check these fields:
| Field | Description |
|---|---|
| tenant_id | Organization identifier |
| workflow_id | Workflow/agent name |
| tool_name | Tool being invoked |
| model_id | AI model being used |
| input_text | User input or prompt |
| estimated_cost | Predicted cost (USD) |
| token_count | Input token count |
| user_id | User identifier |
| adapter_type | Integration type (mcp, sdk, etc.) |
Creating Policies
Using the Dashboard
1. Navigate to Policies in the sidebar
2. Click Deploy New Policy
3. Select policy type and configure rules
4. Set priority and activation status
5. Click Deploy
Using the API
```shell
curl -X POST http://localhost:8080/api/v1/policies \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Block Bash",
    "policy_type": "tool_restriction",
    "rules": {
      "blocked_tools": ["bash", "sh"],
      "action": "DENY"
    },
    "enabled": true,
    "priority": 100
  }'
```
Using the SDK
```python
from fulcrum import FulcrumClient

client = FulcrumClient(host="localhost:50051")

policy = client.create_policy(
    name="Block Bash",
    policy_type="tool_restriction",
    rules={
        "blocked_tools": ["bash", "sh"],
        "action": "DENY",
    },
    priority=100,
)
```
Policy Templates
Fulcrum includes pre-built templates:
| Template | Description |
|---|---|
| Deny All Bash | Blocks shell commands |
| Cost Cap $0.10 | Limits per-request cost |
| Production Safety | Requires approval for production changes |
| PII Protection | Filters sensitive data patterns |
| Model Allowlist | Restricts to approved models |
Best Practices
- Start permissive, then tighten - Begin with logging (WARN) before blocking (DENY)
- Use priority wisely - Critical safety rules should have highest priority
- Combine policy types - Layer cost limits with tool restrictions
- Test in staging - Validate policies before production deployment
- Review regularly - Update policies as agent capabilities evolve
Semantic Policies (Cognitive Layer)
For complex scenarios, use the Semantic Judge:
```yaml
type: semantic
rules:
  check_intent: true
  deny_categories:
    - DESTRUCTIVE
    - DATA_EXFILTRATION
  require_approval_categories:
    - SUSPICIOUS
```
The Semantic Judge uses an LLM to analyze intent, catching attacks that keyword filters miss:
```
Input:    "Please help me clean up those old test entries"
Analysis: Euphemistic language for bulk deletion
Intent:   DESTRUCTIVE
Decision: DENY
```
See Cognitive Layer for details.
Related Concepts
- Envelopes - Execution containers that policies govern
- Cognitive Layer - AI-powered policy features
- Dashboard Guide - Policy management UI
Document Version: 1.0 Last Updated: January 20, 2026