Cognitive Layer Setup Guide

Powering Adaptive AI Governance with Local Intelligence

Fulcrum's cognitive layer (Semantic Judge, Oracle, and Immune System) requires a local LLM runtime to perform intent analysis and predictive modeling without compromising data privacy.


🛠 Prerequisites

Ollama Installation

The cognitive layer is built to work with Ollama, an open-source local LLM runner.

  1. Download Ollama: Visit ollama.com and install for your platform.
  2. Pull Required Model: Fulcrum defaults to the llama3.2 model.
    ollama pull llama3.2
    
  3. Start Ollama: Ensure the Ollama server is running (default port: 11434).
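Before moving on, you can confirm that something is listening on the default port. A minimal Python sketch (note that an open port only proves a listener exists, not that it is Ollama):

```python
import socket

def ollama_reachable(host="localhost", port=11434, timeout=2.0):
    """Return True if something is listening on the Ollama port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(ollama_reachable())
```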

⚙️ Configuration

Fulcrum services that interact with the cognitive layer (primarily the Brain and Policy services) use the following configuration structure:

# example config.yaml snippet
ollama:
  host: "http://localhost:11434"
  model: "llama3.2"
  timeout: 30s
  validate_responses: true
  # Security: Required if using a reverse proxy for authentication
  auth_header: ""
  auth_value: ""
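To illustrate how the optional auth fields come into play, here is a sketch of how a client might turn this config block into HTTP request headers. The config dict mirrors the YAML above, and `build_headers` is a hypothetical helper, not part of Fulcrum's actual API:

```python
# Mirrors the example config.yaml snippet above.
config = {
    "host": "http://localhost:11434",
    "model": "llama3.2",
    "timeout": "30s",
    "validate_responses": True,
    "auth_header": "Authorization",   # set when a reverse proxy requires auth
    "auth_value": "Bearer <token>",   # placeholder value
}

def build_headers(cfg):
    """Return HTTP headers, adding the auth header only when configured."""
    headers = {"Content-Type": "application/json"}
    if cfg.get("auth_header") and cfg.get("auth_value"):
        headers[cfg["auth_header"]] = cfg["auth_value"]
    return headers

print(build_headers(config))
```

When `auth_header` and `auth_value` are left empty (the default), no extra header is sent.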

Environment Variables

| Variable | Description | Default |
| --- | --- | --- |
| `OLLAMA_HOST` | URL of the Ollama server | `http://localhost:11434` |
| `OLLAMA_MODEL` | Model name to pull and use | `llama3.2` |
| `OLLAMA_TIMEOUT` | Request timeout | `30s` |
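A common convention, sketched below, is for these environment variables to override the file-based defaults; check Fulcrum's configuration docs for the exact precedence rules, as this is an assumption:

```python
import os

# Defaults from the table above; an environment variable, if set, wins.
DEFAULTS = {
    "OLLAMA_HOST": "http://localhost:11434",
    "OLLAMA_MODEL": "llama3.2",
    "OLLAMA_TIMEOUT": "30s",
}

def resolve(name):
    """Environment variable takes precedence over the built-in default."""
    return os.environ.get(name, DEFAULTS[name])

print(resolve("OLLAMA_MODEL"))
```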

🔒 Security Best Practices

CVE-2025-63389 Mitigation

Ollama lacks native authentication. If you are running Ollama on a remote host or in a shared environment, you must place it behind a reverse proxy (such as Nginx or Caddy) that enforces authentication.

  1. Reverse Proxy: Configure a proxy to require a Bearer token or Basic Auth.
  2. Fulcrum Config: Provide the auth_header and auth_value in your Fulcrum configuration.
  3. Validation: Enable validate_responses to ensure responses haven't been tampered with or redirected to a different model.
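The steps above can be sketched from the client side: once the proxy requires a Bearer token, every request to the Ollama endpoint must carry it. A minimal Python example that builds (but does not send) such a request; the token value is a placeholder:

```python
import urllib.request

# Build an authenticated request to the Ollama tags endpoint as it would
# pass through an authenticating reverse proxy. The token is a placeholder.
req = urllib.request.Request(
    "http://localhost:11434/api/tags",
    headers={"Authorization": "Bearer <your-token>"},
)
print(req.get_header("Authorization"))
```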

Network Isolation

Run Ollama on the same machine as the Fulcrum services (localhost) whenever possible, or within a private network whose firewall rules block external access to port 11434.


🚀 Verification

To verify your cognitive layer setup:

  1. Check Ollama status:
    curl http://localhost:11434/api/tags
    
  2. Run Fulcrum health check:
    fulcrum health
    
  3. Verify Semantic Evaluation in logs: Look for `semantic evaluation completed` messages in the Policy Service logs.
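The `curl` check in step 1 returns JSON listing the installed models. A small Python sketch of checking that response for the configured model; the payload below is a truncated sample of the shape `/api/tags` returns, so in practice you would fetch it with an HTTP GET against your Ollama host:

```python
import json

# Truncated sample of an /api/tags response body (assumed shape).
sample = json.loads('{"models": [{"name": "llama3.2:latest"}]}')

# Ollama tags model names, e.g. "llama3.2:latest", so match on the prefix.
names = [m["name"] for m in sample["models"]]
print(any(n.startswith("llama3.2") for n in names))
```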

Last Updated: January 14, 2026