OpenAI

Wrap your OpenAI client with Rune to scan all tool calls in function-calling responses before execution. Same API, same types, with security added transparently.

Installation

Terminal
pip install runesec[openai]

Quick Start

agent.py
from openai import OpenAI
from rune import Shield
from rune.integrations.openai import shield_client

shield = Shield(api_key="rune_live_xxx")

# Wrap the client — transparent, same API
client = shield_client(
    OpenAI(),
    shield=shield,
    agent_id="support-agent",
    agent_tags=["support", "prod"],
)

# Use exactly as before
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Help me reset my password"}],
    tools=[...],
)

The wrapped client intercepts all tool calls in the response before your code executes them. Tool results are also scanned before being sent back to the model.
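Because scanning happens before your code sees the response, the dispatch loop itself needs no security logic. The registry and the `get_weather` tool below are illustrative, not part of Rune, and the calls are shown as plain dicts for brevity:

```python
import json

def get_weather(city: str) -> str:
    # Hypothetical local tool used only for this sketch.
    return f"Sunny in {city}"

# Maps function names the model may emit to local handlers (illustrative).
TOOL_REGISTRY = {"get_weather": get_weather}

def execute_tool_calls(tool_calls):
    """Run each already-scanned tool call and build the tool messages
    to send back (which the wrapper scans again on the way out)."""
    messages = []
    for call in tool_calls:
        handler = TOOL_REGISTRY[call["function"]["name"]]
        args = json.loads(call["function"]["arguments"])
        messages.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": handler(**args),
        })
    return messages
```

With the real client, `response.choices[0].message.tool_calls` holds attribute-style objects rather than dicts; the shape of the loop is the same.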

What Gets Scanned

Function call arguments

When the model returns tool_calls, Rune scans each function name and its arguments for prompt injection, command injection, and policy violations before your code runs them.

Tool results

When you send tool results back to the model, Rune scans them for secrets, PII, and indirect injection payloads.

Final responses

The model's text responses are scanned for leaked credentials and PII before reaching the end user.
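Rune's detection runs server-side; you don't implement it yourself. Purely to illustrate the kind of patterns "secrets scanning" targets, a toy client-side check might look like this (the regexes are simplistic and for illustration only):

```python
import re

# Simplistic illustrative patterns; real secret detection covers far
# more formats and typically adds entropy checks, not just regexes.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                   # OpenAI-style API key
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key ID
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),  # PEM private key header
]

def contains_secret(text: str) -> bool:
    """Return True if any toy pattern matches the text."""
    return any(p.search(text) for p in SECRET_PATTERNS)
```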

Assistants API

The wrapper also works with OpenAI's Assistants API for tool-using assistants:

assistant.py
# The wrapped client works with all OpenAI endpoints
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id,
)

# Tool calls from the assistant are scanned the same way

Configuration

Options
client = shield_client(
    OpenAI(),
    shield=shield,
    agent_id="my-agent",             # Required: unique identifier
    agent_tags=["prod"],             # Optional: for policy targeting
    block_on_error=False,            # Optional: fail open if Rune unreachable
)
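One way to use these options is to derive them from the deployment environment, so production fails closed while development fails open. The `APP_ENV` variable and the agent naming scheme here are assumptions for the sketch, not Rune conventions:

```python
import os

def shield_client_kwargs() -> dict:
    """Build keyword arguments for shield_client from the environment.

    APP_ENV and the "support-agent-<env>" naming are illustrative choices.
    """
    env = os.environ.get("APP_ENV", "dev")
    return {
        "agent_id": f"support-agent-{env}",
        "agent_tags": [env],
        # Fail closed in production: block tool calls if Rune is unreachable.
        "block_on_error": env == "prod",
    }
```

Then wrap as usual: client = shield_client(OpenAI(), shield=shield, **shield_client_kwargs())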

Complete Runnable Example

Copy, paste, and run to verify your OpenAI integration:

openai_rune_test.py
import os
assert os.environ.get("RUNE_API_KEY"), "Set RUNE_API_KEY"
assert os.environ.get("OPENAI_API_KEY"), "Set OPENAI_API_KEY"

from openai import OpenAI
from rune import Shield
from rune.integrations.openai import shield_client

shield = Shield()
client = shield_client(
    OpenAI(),
    shield=shield,
    agent_id="openai-test",
    agent_tags=["test"],
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Say hello in French"}],
)
print("Response:", response.choices[0].message.content)
print("Stats:", shield.stats)

Next Steps