Cognitive Layer Setup Guide
Powering Adaptive AI Governance with Local Intelligence
Fulcrum's cognitive layer (Semantic Judge, Oracle, and Immune System) requires a local LLM runtime to perform intent analysis and predictive modeling without compromising data privacy.
🛠 Prerequisites
Ollama Installation
The cognitive layer is built to work with Ollama, an open-source local LLM runner.
- Download Ollama: Visit ollama.com and install for your platform.
- Pull Required Model: Fulcrum defaults to the `llama3.2` model.
- Start Ollama: Ensure the Ollama server is running (default port: `11434`).
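The steps above can be run from a terminal. A minimal sketch (assumes Ollama is already installed; on most installs the server is started automatically as a background service, so `ollama serve` is only needed if it is not):

```shell
# Pull Fulcrum's default model
ollama pull llama3.2

# Start the server manually if it is not already running (listens on :11434)
ollama serve &

# Confirm the server is up and the model is available
curl -s http://localhost:11434/api/tags
```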
⚙️ Configuration
Fulcrum services that interact with the cognitive layer (primarily the Brain and Policy services) use the following configuration structure:
```yaml
# example config.yaml snippet
ollama:
  host: "http://localhost:11434"
  model: "llama3.2"
  timeout: 30s
  validate_responses: true
  # Security: Required if using a reverse proxy for authentication
  auth_header: ""
  auth_value: ""
```
Environment Variables
| Variable | Description | Default |
|---|---|---|
| `OLLAMA_HOST` | URL of the Ollama server | `http://localhost:11434` |
| `OLLAMA_MODEL` | Model name to pull and use | `llama3.2` |
| `OLLAMA_TIMEOUT` | Request timeout | `30s` |
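The fallback behavior in the table can be expressed with standard shell parameter expansion, which is how a wrapper script might resolve each setting (the `unset` line simulates a clean environment for illustration):

```shell
# Simulate a clean environment, then resolve each setting with its documented default
unset OLLAMA_HOST OLLAMA_MODEL OLLAMA_TIMEOUT

OLLAMA_HOST="${OLLAMA_HOST:-http://localhost:11434}"
OLLAMA_MODEL="${OLLAMA_MODEL:-llama3.2}"
OLLAMA_TIMEOUT="${OLLAMA_TIMEOUT:-30s}"

echo "$OLLAMA_HOST $OLLAMA_MODEL $OLLAMA_TIMEOUT"
# → http://localhost:11434 llama3.2 30s
```

Setting any of these variables in the environment overrides the corresponding default.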
🔒 Security Best Practices
CVE-2025-63389 Mitigation
Ollama lacks native authentication. If you are running Ollama on a remote host or in a shared environment, you must implement a reverse proxy (like Nginx or Caddy) with authentication.
- Reverse Proxy: Configure a proxy to require a Bearer token or Basic Auth.
- Fulcrum Config: Provide the
auth_headerandauth_valuein your Fulcrum configuration. - Validation: Enable
validate_responsesto ensure responses haven't been tampered with or redirected to a different model.
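An illustrative Nginx sketch for the reverse-proxy step (the server name and token are placeholders; TLS certificate directives are omitted for brevity):

```nginx
# Sketch only: require a static bearer token in front of Ollama
server {
    listen 443 ssl;
    server_name ollama.internal.example;

    location / {
        # Reject any request that does not carry the expected token
        if ($http_authorization != "Bearer REPLACE_WITH_SECRET") {
            return 401;
        }
        proxy_pass http://127.0.0.1:11434;
    }
}
```

Assuming `auth_header` names the HTTP header and `auth_value` its contents, the matching Fulcrum settings for this proxy would be `auth_header: "Authorization"` and `auth_value: "Bearer REPLACE_WITH_SECRET"`.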
Network Isolation
It is highly recommended to run Ollama on the same machine as the Fulcrum services (localhost) or within a private network with strict firewall rules blocking port 11434 from external access.
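Two complementary ways to enforce this isolation (the `ufw` rule assumes a Linux host with ufw enabled):

```shell
# Bind the Ollama server to loopback only, so it never listens on external interfaces
OLLAMA_HOST=127.0.0.1:11434 ollama serve &

# Belt-and-suspenders: also block the port at the firewall
sudo ufw deny 11434/tcp
```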
🚀 Verification
To verify your cognitive layer setup:
- Check Ollama status.
- Run the Fulcrum health check.
- Verify semantic evaluation in the logs: look for `semantic evaluation completed` messages in the Policy Service logs.
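A sketch of these checks from a shell (the Fulcrum health-check URL and log file path are placeholders, not documented endpoints; substitute the values for your deployment):

```shell
# 1. Check Ollama status: lists the models the server has available
curl -s http://localhost:11434/api/tags

# 2. Run the Fulcrum health check (placeholder endpoint; adjust to your deployment)
curl -s http://localhost:8080/healthz

# 3. Verify semantic evaluation in the Policy Service logs (placeholder path)
grep "semantic evaluation completed" policy-service.log
```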
Last Updated: January 14, 2026