# Policy Authoring Guide
Learn how to create, configure, and deploy governance policies in Fulcrum to control your AI agents.
## Overview
Fulcrum policies are rules that control how your AI agents operate. They can:
- **Limit costs** - Set spending caps per agent, model, or time period
- **Control rate** - Prevent runaway API calls with rate limits
- **Filter content** - Block harmful or sensitive content
- **Require approval** - Human-in-the-loop review for high-risk actions
- **Restrict models** - Limit which AI models agents can use
Policies are evaluated in real time with sub-10ms latency, so they do not slow down your agents.
## Policy Types
Fulcrum supports five policy types:
| Type | Purpose | Example |
|---|---|---|
| Cost Limit | Cap spending per request or time period | Block requests over $0.10 |
| Rate Limit | Control request frequency | Max 60 requests/minute |
| Content Filter | Block sensitive content patterns | Block PII, prompt injections |
| Approval Required | Human review for specific actions | High-value transactions |
| Model Restriction | Limit which models can be used | GPT-4 only, no legacy models |
## Creating Policies

### Using the Dashboard (Recommended)

The easiest way to create policies is through the Fulcrum dashboard:

1. Navigate to **Policies** in the sidebar
2. Click **Deploy New Policy**, or **Templates** for pre-built options
3. Configure your policy rules
4. Click **Deploy** to activate

### Using Templates

Fulcrum includes ready-to-use templates for common scenarios:
**Cost Management:**
- Standard Cost Cap ($0.10/request)
- Aggressive Cost Cap ($0.02/request)
- Daily Budget Guard ($100/day)

**Security:**
- PII Protection Shield (blocks SSNs, credit cards)
- External Data Sharing Block
- Code Injection Guard

**Compliance:**
- High-Value Transaction Approval (>$1,000)
- Legal Content Review
- GDPR Data Processing

**Performance:**
- Standard Rate Limit (60/min)
- Burst Protection (100/min, 1,000/hour)

**Operational:**
- GPT-4 Only Policy
- Cost-Efficient Models Only
- Business Hours Only
### Using the API
Create policies programmatically via the gRPC API:
```go
import (
    "context"
    policyv1 "github.com/fulcrum-io/fulcrum/pkg/policy/v1"
)

policy := &policyv1.Policy{
    PolicyId: "deny-bash",
    Name:     "Deny Bash Tool",
    Status:   policyv1.PolicyStatus_POLICY_STATUS_ACTIVE,
    Rules: []*policyv1.PolicyRule{{
        RuleId:   "rule-1",
        Priority: 100,
        Conditions: []*policyv1.Condition{{
            Field:    "tool_name",
            Operator: policyv1.ComparisonOperator_COMPARISON_OPERATOR_EQUALS,
            Value:    &policyv1.Condition_StringValue{StringValue: "bash"},
        }},
        Action: policyv1.PolicyAction_POLICY_ACTION_DENY,
    }},
}
```
### Using the Python SDK
```python
from fulcrum_governance import FulcrumClient, Policy, PolicyRule, Condition

client = FulcrumClient(api_key="your-api-key")

policy = Policy(
    name="Cost Cap Policy",
    policy_type="cost_limit",
    rules=[
        PolicyRule(
            conditions=[
                Condition(
                    field="estimated_cost",
                    operator="greater_than",
                    value=0.10,
                )
            ],
            action="deny",
        )
    ],
)

client.policies.create(policy)
```
## Policy Structure

Every policy consists of:

### 1. Metadata

```yaml
policy_id: "pol_abc123"   # Unique identifier
name: "My Policy"         # Human-readable name
status: "active"          # active, inactive, or draft
priority: 100             # Lower values are evaluated first
```
### 2. Scope (Optional)

Define which agents, tools, or models the policy applies to:

```yaml
scope:
  agents: ["agent-1", "agent-2"]   # Specific agents
  tools: ["bash", "file_write"]    # Specific tools
  models: ["gpt-4o", "claude-3"]   # Specific models
  roles: ["developer", "analyst"]  # User roles
```
If no scope is defined, the policy applies to all requests.
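As a mental model, the scope check is a membership test over each dimension. The sketch below is illustrative only, not the engine's actual code, and the request field names (`agent`, `tool`, `model`, `role`) are assumptions:

```python
def in_scope(scope: dict, request: dict) -> bool:
    """Return True if the request matches every scope dimension.

    An empty or missing list means "no restriction" for that dimension,
    so a policy with an empty scope applies to all requests.
    """
    dimensions = {"agents": "agent", "tools": "tool",
                  "models": "model", "roles": "role"}
    for scope_key, request_key in dimensions.items():
        allowed = scope.get(scope_key)
        if allowed and request.get(request_key) not in allowed:
            return False
    return True
```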
### 3. Rules

Rules define conditions and actions:

```yaml
rules:
  - rule_id: "rule-1"
    priority: 100
    conditions:
      - field: "tool_name"
        operator: "equals"
        value: "bash"
    action: "deny"
```
## Condition Operators

| Operator | Description | Example |
|---|---|---|
| `equals` | Exact match | `tool_name equals "bash"` |
| `not_equals` | Not equal | `user.role not_equals "admin"` |
| `contains` | String contains | `input_text contains "delete"` |
| `not_contains` | String doesn't contain | `input_text not_contains "safe"` |
| `greater_than` | Numeric comparison | `estimated_cost greater_than 0.10` |
| `less_than` | Numeric comparison | `token_count less_than 1000` |
| `in_list` | Value in list | `model in_list ["gpt-4", "gpt-4o"]` |
| `not_in_list` | Value not in list | `tool_name not_in_list ["bash", "exec"]` |
| `regex_match` | Regular expression | `input_text regex_match "\\d{3}-\\d{2}-\\d{4}"` |
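Conceptually, each operator is a predicate over the context field and the configured value. The dispatch table below is an illustrative sketch of that semantics, not the engine's implementation:

```python
import re

# Each operator name maps to a predicate over (actual, expected).
OPERATORS = {
    "equals":       lambda actual, expected: actual == expected,
    "not_equals":   lambda actual, expected: actual != expected,
    "contains":     lambda actual, expected: expected in actual,
    "not_contains": lambda actual, expected: expected not in actual,
    "greater_than": lambda actual, expected: actual > expected,
    "less_than":    lambda actual, expected: actual < expected,
    "in_list":      lambda actual, expected: actual in expected,
    "not_in_list":  lambda actual, expected: actual not in expected,
    "regex_match":  lambda actual, expected: re.search(expected, actual) is not None,
}

def check(condition: dict, context: dict) -> bool:
    """Evaluate a single condition against the request context."""
    actual = context.get(condition["field"])
    return OPERATORS[condition["operator"]](actual, condition["value"])
```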
### Composite Conditions

Combine conditions with AND/OR logic:

```yaml
conditions:
  - type: "and"
    conditions:
      - field: "tool_name"
        operator: "equals"
        value: "bash"
      - field: "user.role"
        operator: "not_equals"
        value: "admin"
```
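Because composite nodes nest, evaluation is naturally recursive. A minimal sketch under the same illustrative model (only `equals`/`not_equals` leaves shown for brevity):

```python
def evaluate(node: dict, context: dict) -> bool:
    """Recursively evaluate a condition tree.

    Composite nodes carry a "type" of "and"/"or" plus child
    conditions; leaf nodes carry field, operator, and value.
    """
    if node.get("type") == "and":
        return all(evaluate(child, context) for child in node["conditions"])
    if node.get("type") == "or":
        return any(evaluate(child, context) for child in node["conditions"])

    actual = context.get(node["field"])
    if node["operator"] == "equals":
        return actual == node["value"]
    if node["operator"] == "not_equals":
        return actual != node["value"]
    raise ValueError(f"unsupported operator: {node['operator']}")
```

Applied to the YAML example, this would match `bash` invocations by anyone except admins.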
## Policy Actions

| Action | Description | Use Case |
|---|---|---|
| `allow` | Permit the request | Default for non-matching rules |
| `deny` | Block the request | Security violations, cost limits |
| `require_approval` | Queue for human review | High-value transactions |
| `warn` | Log a warning but allow | Soft limits, monitoring |
| `throttle` | Rate-limit the request | Performance protection |
## Evaluation Context

Policies evaluate against these context fields:

### Request Context

- `tool_name` - Name of the tool being invoked
- `input_text` - Input text to the AI model
- `output_text` - Output text from the AI model
- `model` - AI model being used

### Cost Context

- `estimated_cost` - Predicted cost of the request
- `actual_cost` - Actual cost (for post-execution policies)
- `daily_spend` - Total spend today
- `monthly_spend` - Total spend this month

### User Context

- `user.id` - User identifier
- `user.role` - User's role
- `user.email` - User's email

### Agent Context

- `agent.id` - Agent identifier
- `agent.name` - Agent name
- `tenant_id` - Organization identifier
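Put together, a request's evaluation context can be pictured as one flat mapping of these fields. The values below are purely illustrative:

```python
# Illustrative evaluation context for a single request.
context = {
    "tool_name": "bash",
    "input_text": "list files in /tmp",
    "model": "gpt-4o",
    "estimated_cost": 0.03,
    "daily_spend": 42.50,
    "user.id": "u-123",
    "user.role": "developer",
    "agent.id": "agent-1",
    "agent.name": "support-bot",
    "tenant_id": "org-9",
}

# A condition such as {field: "user.role", operator: "equals",
# value: "developer"} reads context["user.role"] and matches here.
```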
## Best Practices

### 1. Start with Templates

Use the built-in templates as a starting point. They're battle-tested and cover common scenarios.

### 2. Use Meaningful Names

- Good: "Block PII in Customer Support Agent"
- Bad: "Policy 1"
### 3. Set Appropriate Priorities
- 100-199: Critical security policies (run first)
- 200-299: Cost and rate limiting
- 300-399: Compliance policies
- 400+: General operational policies
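Under these ranges, policies are walked in ascending priority order and the first matching rule wins. A minimal sketch, simplified to one equals-condition per policy and illustrative only:

```python
def decide(policies: list, context: dict) -> str:
    """Return the action of the first matching policy.

    Sorting ascending means the 100-199 security band is checked
    before cost (200s), compliance (300s), and operational (400+).
    """
    for policy in sorted(policies, key=lambda p: p["priority"]):
        cond = policy["condition"]  # simplified: one equals-check
        if context.get(cond["field"]) == cond["value"]:
            return policy["action"]
    return "allow"  # default when nothing matches
```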
### 4. Test Before Deploying

Use the policy evaluator to test new policies against sample inputs before activating them.
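If you'd rather sanity-check a rule offline first, a few lines of Python can replay sample inputs through the condition. The rule and field names here are illustrative, not an SDK API:

```python
# A cost-cap rule and some sample request contexts to replay.
rule = {"field": "estimated_cost", "operator": "greater_than",
        "value": 0.10, "action": "deny"}

samples = [
    {"estimated_cost": 0.05},  # under the cap
    {"estimated_cost": 0.25},  # over the cap
]

for ctx in samples:
    triggered = ctx[rule["field"]] > rule["value"]
    action = rule["action"] if triggered else "allow"
    print(ctx, "->", action)
```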
### 5. Monitor Policy Performance

Check the Policies page in the dashboard to see:

- Violation counts
- Total evaluations
- Last trigger time
### 6. Use Scopes Wisely
Start with broader policies and narrow scope only when needed. Over-scoping creates maintenance burden.
## Semantic Judge Integration

For complex content analysis, enable the Semantic Judge, a local LLM that analyzes request intent:
```yaml
rules:
  - rule_id: "semantic-check"
    conditions:
      - field: "semantic_intent"
        operator: "equals"
        value: "harmful"
    action: "deny"

semantic_judge:
  enabled: true
  model: "llama3.2"
  confidence_threshold: 0.85
```
The Semantic Judge can detect:

- Prompt injection attempts
- Harmful intent disguised as benign requests
- Policy circumvention attempts
## Troubleshooting

### Policy Not Triggering

- Check that the policy status is `active`
- Verify the scope matches the agent/tool
- Check priority: another policy with higher priority may be matching first
- Review evaluation logs in the dashboard

### Too Many False Positives

- Narrow the scope to specific agents or tools
- Adjust condition operators (e.g., `contains` vs. `equals`)
- Add exceptions for admin users
- Raise the Semantic Judge confidence threshold so it must be more confident before flagging

### Performance Issues

- Simplify complex composite conditions
- Use `in_list` instead of multiple `equals` conditions
- Disable the Semantic Judge for low-risk policies
## Related Documentation
- API Reference - Policy API endpoints
- SDK Overview - Client library documentation
- Dashboard Guide - Using the dashboard
- Policy Engine Architecture - Technical deep-dive
Last Updated: January 20, 2026