Prompt injection, data exfiltration, tool abuse — these attacks happen at runtime, not in your test suite. Rune scans every LLM input and tool call in real time. Open-source SDK, under 10ms overhead, 3 lines of code.
Free plan · 10K events/mo · No credit card required
from rune import Shield
shield = Shield(api_key="rune_live_...")
# Wrap any tool call — inputs blocked,
# outputs scanned automatically
@shield.protect(agent_id="my-agent")
async def call_tool(name, params):
return await agent.run(name, **params)
# Or scan manually:
result = shield.scan_input(user_message)
if result.blocked:
print(f"Threat: {result.threat_type}")
# ✓ Inputs blocked before execution
# ✓ Outputs scanned for data leaks
# ✓ Anomalies flagged in real timeInstall: pip install runesec
Lakera → Check Point. Protect AI → Palo Alto. Promptfoo → OpenAI. Rune is still independent.
Paste any text and watch the scanner detect threats in real time. Same engine that protects your agents in production.
Scan results will appear here
Add Rune to your existing agent code. No refactoring, no new abstractions.
Pattern-based rules catch prompt injections, data exfiltration, and command injection before your agent can act on them.
Semantic analysis detects obfuscated prompts, encoded payloads, and techniques that haven't been seen before — beyond what regex can catch.
Know exactly what every agent is doing — and what Rune stopped it from doing. Event timelines, anomaly detection, and alert routing built in.
Define which tools each agent can call, with what arguments, under what conditions. YAML policies checked on every event, automatically.
Unusual call frequency, new tool combinations, sudden risk score spikes — Rune flags deviations from established agent patterns.
10K events on the free plan. Upgrade for more agents, deeper scanning, or longer retention. No surprise bills. No credit card to start.
Get started with up to 5 agents, free forever
For small teams shipping their first agents to production
For teams running production agents with full scanning
For companies with high-volume agent deployments
Usage-based pricing for unpredictable workloads
No included events — pay only for what you use. Includes 90 days retention.
All paid plans include overage pricing — never get cut off mid-month. Need higher limits or a custom contract? Contact us at hello@runesec.dev
Under 10 minutes. Install the SDK, create a Shield with your API key, wrap your agent. Three lines of code for most frameworks.
Rune works with OpenAI SDK, Anthropic SDK, CrewAI, LangChain, and MCP out of the box. The SDK is framework-agnostic — if your agent makes tool calls, Rune can intercept them.
L1 scanning adds under 5ms per call using regex pattern matching. L2 semantic analysis adds under 30ms (Starter plan and above). L3 LLM-based analysis runs asynchronously so it doesn't block your agent (Pro plan and above).
Yes. The policy editor includes a built-in test panel where you can simulate actions against your YAML policies and see the result before anything goes live.
For inputs: the tool call is blocked before it executes. For outputs: the response is flagged after execution and an alert is created. In both cases, an alert appears in your dashboard with the agent, event, triggering policy, and severity rating. You can route alerts to email, Slack, or webhooks.
No. Rune wraps your existing agent as middleware. Your logic, prompts, and tool definitions stay exactly the same.
L1 uses regex pattern matching for known threats — fast and deterministic (all plans). L2 uses vector similarity to catch novel attacks that don't match known patterns (Starter+). L3 uses an LLM judge to evaluate ambiguous threats with full context (Pro+). Higher tiers auto-enable when you connect with a paid plan API key.
Run Rune in dry-run or monitor mode in your test suite. It scans agent interactions during integration tests and catches issues before they reach production — without blocking your pipeline.
Yes. Run Rune in monitor mode in staging to observe threats without blocking, then switch to enforce mode in production. You can configure different modes per environment.
L1 scanning is deterministic pattern matching — zero false positives on known attack patterns. L2 semantic analysis has a configurable confidence threshold you can tune. L3 LLM-based analysis creates alerts for human review rather than auto-blocking.
Event metadata only — agent ID, threat type, severity, action taken, and timestamps. Content is scanned in transit and not persisted. We never train on customer data.
Every competitor got acquired. We're still shipping for developers. Three lines of code. No enterprise sales call required.
Free plan. No credit card.