LangChain
Complete security guide for LangChain agents. Prevent prompt injection in RAG pipelines, secure tool calls, and add runtime protection to LangGraph workflows with working code examples.
Key vulnerability: RAG Document Poisoning
Security Guides
Each guide starts from the framework's own architecture — where input enters, how tools get called, how the agent loop terminates — and identifies the specific places runtime scanning needs to sit. Each includes secure vs. vulnerable code, a checklist, and a ready-to-ship policy template.
Definitive security guide for OpenAI API agents with function calling. Prevent parameter injection, secure the Assistants API, protect multi-function chains, and add runtime security with working code.
Key vulnerability: Function Parameter Injection
Definitive security guide for Anthropic Claude agents with tool use. Protect against long-context injection, secure tool_use blocks, monitor multi-turn conversations, and add runtime protection with working code.
Key vulnerability: Long-Context Hidden Injection
Security guide for CrewAI multi-agent systems. Prevent inter-agent escalation, secure tool chains, and protect crew workflows from cascading attacks with working code examples.
Key vulnerability: Inter-Agent Escalation
Security guide for Model Context Protocol (MCP) servers. Protect against malicious servers, verify tool integrity, enforce policies on MCP tool calls, and add a security proxy with working examples.
Key vulnerability: Malicious MCP Server Responses
Security guide for LlamaIndex RAG pipelines. Protect against index poisoning, secure query engines, and add runtime scanning to your retrieval-augmented generation stack.
Key vulnerability: Index Poisoning
Security guide for Microsoft AutoGen multi-agent systems. Protect agent conversations, secure code execution, and prevent inter-agent manipulation.
Key vulnerability: Conversational Agent Manipulation
Security guide for DSPy programs and optimized prompts. Protect against injection in compiled programs, secure retrieval modules, and validate optimized signatures.
Key vulnerability: Optimized Prompt Exploitation
Generic security scanning catches common patterns but misses framework-specific attack vectors. For example, LangChain's RAG pipeline creates document poisoning risks that don't exist in direct OpenAI SDK usage, and CrewAI's inter-agent communication creates lateral movement risks unique to multi-agent systems. Framework-specific guides address these gaps.
Frameworks with more tool access and autonomy have larger attack surfaces. Multi-agent frameworks like CrewAI and MCP-based systems have the most complex security requirements because compromising one agent can cascade to others. Single-agent frameworks like direct OpenAI or Anthropic SDK usage have smaller but still significant attack surfaces around tool calling and output handling.
Rune's generic Shield class works with any Python-based agent. Wrap your agent's input and output processing with shield.scan_input() and shield.scan_output(). The framework-specific integrations add deeper hooks (tool call interception, middleware chains), but the core scanning works universally.
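A minimal sketch of that wrap pattern. Only the names Shield, scan_input, and scan_output come from the description above; the stand-in class below and its blocking behavior are illustrative assumptions, not the real rune SDK:

```python
class Shield:
    """Stand-in for Rune's Shield; the real SDK's behavior will differ."""

    def scan_input(self, text: str) -> str:
        # A real shield runs injection detection; here we only flag one
        # obvious override phrase to show where the hook sits.
        if "ignore previous instructions" in text.lower():
            raise ValueError("blocked: possible prompt injection")
        return text

    def scan_output(self, text: str) -> str:
        # A real shield would also check outputs (exfiltration, unsafe
        # tool results); pass-through here.
        return text


def secure_agent(user_input: str, agent_fn) -> str:
    shield = Shield()
    safe_input = shield.scan_input(user_input)  # 1. scan before the model
    raw_output = agent_fn(safe_input)           # 2. run your agent as-is
    return shield.scan_output(raw_output)       # 3. scan before the user
```

The point is the shape, not the detection logic: the agent function itself stays untouched, and scanning wraps it at its two universal boundaries.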
Pick your framework, install, wrap.
10,000 events/month free. Three lines of code.