Integrations

Drop-in security for every major AI framework. Each integration wraps your existing code transparently — same API, same types, with security added automatically.

LangChain / LangGraph

The LangChain middleware intercepts all tool calls and LLM calls, scanning inputs/outputs and enforcing policies.

langchain_agent.pypython
from rune import Shield
from rune.integrations.langchain import ShieldMiddleware
from langchain.agents import create_react_agent

shield = Shield(api_key="rune_live_xxx")
middleware = ShieldMiddleware(
    shield,
    agent_id="research-agent",
    agent_tags=["research", "prod"],
)

# Pass middleware to agent creation
agent = create_react_agent(model, tools, middleware=[middleware])

# All tool calls are now scanned and policy-checked automatically
result = agent.invoke({"input": "Find revenue data for Q4"})

The middleware hooks into LangChain's native extension points. No code changes to your tools or prompts.

OpenAI

Wraps the OpenAI client so all tool calls in responses are intercepted before execution, and all tool results are scanned before sending back to the LLM.

openai_agent.pypython
from openai import OpenAI
from rune import Shield
from rune.integrations.openai import shield_client

shield = Shield(api_key="rune_live_xxx")

# Wrap the client — transparent, same API
client = shield_client(
    OpenAI(),
    shield=shield,
    agent_id="support-agent",
)

# Use exactly as before
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Help me reset my password"}],
    tools=[...],
)

Anthropic

Same transparent wrapper pattern for the Anthropic client.

anthropic_agent.pypython
from anthropic import Anthropic
from rune import Shield
from rune.integrations.anthropic import shield_client

shield = Shield(api_key="rune_live_xxx")

client = shield_client(
    Anthropic(),
    shield=shield,
    agent_id="analysis-agent",
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Analyze this dataset"}],
)

CrewAI

Wraps a CrewAI crew to scan crew-level inputs/outputs and intercept individual tool calls from all agents in the crew.

crewai_pipeline.pypython
from crewai import Agent, Task, Crew
from rune import Shield
from rune.integrations.crewai import shield_crew

shield = Shield(api_key="rune_live_xxx")

# Create your crew as usual
crew = Crew(agents=[researcher, writer], tasks=[research_task, write_task])

# Wrap it with Rune
protected_crew = shield_crew(
    crew,
    shield=shield,
    agent_id="content-crew",
)

# All tool calls from all agents are now monitored
result = protected_crew.kickoff(inputs={"topic": "AI safety trends"})

MCP Server — Use Rune as Agent Tools

Expose Rune's security tools as MCP tools that any agent can call directly. Works with Claude Code, Cursor, Windsurf, and any MCP client.

Terminalbash
pip install "runesec[mcp]"
Claude Code / Cursor configjson
{
  "mcpServers": {
    "rune": {
      "command": "rune-mcp",
      "env": { "RUNE_API_KEY": "rune_live_xxx" }
    }
  }
}

Provides 9 tools: scan_input, scan_output, redact, validate_policy, list_agents, list_alerts, update_alert, list_policies, create_policy. Local tools work without an API key.

MCP Proxy — Secure Existing Servers

The MCP proxy sits between your MCP client and upstream MCP servers, scanning all tool calls and responses passing through.

mcp_proxy.pypython
from rune import Shield
from rune.integrations.mcp import ShieldMCPProxy

shield = Shield(api_key="rune_live_xxx")

proxy = ShieldMCPProxy(
    shield=shield,
    agent_id="mcp-proxy",
)

# Add upstream MCP servers
proxy.add_server("filesystem", command="npx @modelcontextprotocol/server-filesystem /tmp")
proxy.add_server("github", command="npx @modelcontextprotocol/server-github")

# Start the proxy — it acts as an MCP server itself
await proxy.start()

The proxy transparently passes through tool listings and calls, adding security scanning at the boundary. Supports both stdio and SSE transports.

Environment Variables

All integrations respect these environment variables as fallbacks:

.envbash
RUNE_API_KEY=rune_live_xxx        # API key (used if not passed to Shield)
RUNE_AGENT_ID=my-agent            # Default agent ID
RUNE_ENDPOINT=https://...         # Custom API endpoint

Framework Guides

Step-by-step setup, threat-specific advice, and policy examples for each framework:

Next Steps