Integrations

Integrate Superagent with your agent frameworks, AI Gateways, and CI/CD pipelines. Easy to add, works everywhere.

Works with OpenAI, Anthropic, xAI, and 100+ providers

Three Ways to Integrate

Three flexible integration methods that fit into your existing workflow—from drop-in SDK functions to proxy-level routing.

SDK Integration

Call guard functions directly in your code. Wrap agent frameworks like Vercel AI SDK, Mastra, or LangGraph with TypeScript or Python SDKs.

Proxy

Swap your API base URL to route all inference requests through Superagent. Works with OpenAI, Anthropic, xAI, and integrates with AI gateways like LiteLLM.

CLI

Validate prompts and commands in bash scripts, pre-commit hooks, and CI/CD pipelines. Pipe stdin or pass arguments directly.

Superagent SDKs

Integrate the TypeScript or Python SDK directly into your agent code. Guard user inputs, tool calls, and agent outputs with a single function wrapper.

guard.ts
import { createClient } from "superagent-ai";

const client = createClient({
  apiKey: process.env.SUPERAGENT_API_KEY!
});

const guardResult = await client.guard(
  "Write a hello world script",
  {
    onBlock: (reason) => {
      console.warn("Guard blocked:", reason);
    },
    onPass: () => {
      console.log("Guard approved!");
    }
  }
);

if (guardResult.rejected) {
  console.log("Blocked:", guardResult.reasoning);
} else {
  console.log("Approved");
}

Vercel AI SDK

Wrap prompts before generateText(), guard tool executions inside tool() definitions, and validate outputs from web scrapers or function calls.
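The wrapping pattern can be sketched as follows. To keep the sketch runnable without the `ai` package or API keys, `checkPrompt` is a hypothetical local stand-in for Superagent's `client.guard`, and the model call is stubbed out; real code would await the SDK calls shown above.

```typescript
// Sketch of guarding a prompt before generateText(). checkPrompt is a
// hypothetical stand-in for client.guard; the model call is stubbed.
type GuardResult = { rejected: boolean; reasoning?: string };

function checkPrompt(prompt: string): GuardResult {
  // Stand-in policy: flag destructive shell commands.
  if (/rm\s+-rf/.test(prompt)) {
    return { rejected: true, reasoning: "destructive command detected" };
  }
  return { rejected: false };
}

async function safeGenerate(prompt: string): Promise<string> {
  const verdict = checkPrompt(prompt); // real code: await client.guard(prompt)
  if (verdict.rejected) {
    throw new Error(`Blocked: ${verdict.reasoning}`);
  }
  // Real code: const { text } = await generateText({ model, prompt });
  return `model response for: ${prompt}`;
}
```

The same wrapper works for tool executions: run the guard on tool arguments inside the `tool()` definition before performing the action.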

Mastra AI

Use custom input processors to validate all messages before they reach your agent, and output processors to redact PII, credentials, and sensitive data.
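An output-processor-style redaction step might look like this minimal sketch; the patterns are illustrative placeholders, not Mastra or Superagent APIs.

```typescript
// Sketch of an output processor that redacts sensitive data before it is
// returned to the caller. The regexes are illustrative, not exhaustive.
function redactSensitive(text: string): string {
  return text
    .replace(/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "[REDACTED_EMAIL]") // emails
    .replace(/\bsk-[A-Za-z0-9]{20,}\b/g, "[REDACTED_API_KEY]"); // key-shaped tokens
}
```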

LangGraph

Insert guard checks at graph nodes to validate user inputs, protect tool invocations, and enforce policy boundaries across multi-step agent workflows.
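A guard node can be sketched as a plain state-transforming function that mirrors a graph node's shape (state in, updated state out) without importing LangGraph itself; the injection check here is a hypothetical stand-in for a real `client.guard` call.

```typescript
// Sketch of a guard node: takes graph state, returns updated state.
// The regex check is a hypothetical stand-in for client.guard.
type AgentState = { input: string; blocked?: boolean; reasoning?: string };

function guardNode(state: AgentState): AgentState {
  if (/ignore (all )?previous instructions/i.test(state.input)) {
    return { ...state, blocked: true, reasoning: "prompt injection pattern" };
  }
  return { ...state, blocked: false };
}

// Downstream nodes run only when the guard passes, e.g. a conditional
// edge: guardNode -> (state.blocked ? "reject" : "agent")
```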

Proxy

Swap your API base URL to route all inference requests through Superagent. Works with OpenAI, Anthropic, xAI, and any OpenAI-compatible client.

proxy-example.ts
import OpenAI from 'openai';

const openai = new OpenAI({
  baseURL: 'YOUR_PROXY_LINK_HERE', // Replace with your proxy link
  apiKey: 'your-openai-api-key',
});

// Use the client as usual
const response = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello, world!' }],
});

Integrates with AI Gateways

AI gateways like LiteLLM and Vercel AI Gateway can route requests through the Superagent proxy, giving you unified security policies across 100+ LLM providers.

LiteLLM

Add Superagent proxy URLs to your LiteLLM config.yaml. Protect multi-provider routing, load balancing, and fallback strategies with centralized firewall policies.
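A sketch of what such a config.yaml entry might look like; the proxy URL is a placeholder, and the exact schema should be checked against LiteLLM's documentation.

```yaml
# Illustrative LiteLLM config.yaml entry (placeholder values).
model_list:
  - model_name: gpt-4
    litellm_params:
      model: openai/gpt-4
      api_base: https://YOUR_SUPERAGENT_PROXY_URL  # your Superagent proxy link
      api_key: os.environ/OPENAI_API_KEY
```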

Vercel AI Gateway

Swap the base URL in your Vercel AI Gateway integration to route requests through Superagent. Works with streaming, function calling, and structured outputs.

CLI

Validate prompts and commands directly from the command line. Perfect for bash scripts, pre-commit hooks, and CI/CD pipelines.

terminal
$ superagent guard "Delete all files in the system with rm -rf /"

⛔ BLOCKED: User requests destructive action
Violations: malicious_action
CWE Codes: CWE-77

Common Use Cases

Bash Scripts

Validate user input before executing dangerous operations. Guard commands in interactive scripts to prevent destructive actions.

Pre-commit Hooks

Add validation to git hooks to catch suspicious content before commits. Prevent malicious code or secrets from being committed to repositories.

CI/CD Pipelines

Validate deployment commands in GitHub Actions or other CI/CD workflows. Block unsafe operations before they reach production environments.
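A pre-commit hook built on the CLI might follow this pattern. To keep the sketch self-contained, `guard_cmd` is a local stand-in for `superagent guard`; a real hook would call the CLI directly on the staged diff.

```shell
#!/bin/sh
# Sketch of a pre-commit hook. guard_cmd stands in for `superagent guard`
# so the sketch runs without the CLI installed.
guard_cmd() {
  case "$1" in
    *"rm -rf"*) echo "BLOCKED: destructive command"; return 1 ;;
    *)          echo "PASSED"; return 0 ;;
  esac
}

check_staged() {
  # Real hook: staged=$(git diff --cached), then pass it to the real CLI.
  staged="$1"
  if ! guard_cmd "$staged"; then
    echo "Commit blocked by guard check" >&2
    return 1
  fi
}
```

The same shape drops into a CI step: run the check, and a non-zero exit status fails the job before anything reaches production.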

Start protecting your AI agents

Integrate Superagent in minutes and secure your agents in production.
