Blog

Thoughts, updates, and insights from the Superagent team.

Security · February 18, 2026 · 5 min read

The Cline Incidents and the Broken Security Model

Two Cline security incidents in two months expose the same underlying problem: AI agents treat untrusted content as instructions. The npm supply chain and prompt injection attacks reveal why the current security model is fundamentally broken.

Announcements · February 17, 2026 · 2 min read

Launching brin.sh - safe packages for coding agents

brin is an agent-native package gateway. It blocks malicious packages before install and generates safe usage docs.

Security · January 25, 2026 · 4 min read

What Can Go Wrong with AI Agents

AI agents fail in ways traditional software doesn't: data leaks, compliance violations, unauthorized actions. Here's what to watch for.

Research · January 21, 2026 · 3 min read

We Bypassed Grok Imagine's NSFW Filters With Artistic Framing

Text-to-image safety is broken. We generated explicit content of a real person using basic compositional tricks. Here's what we found, why it worked, and what this means for AI safety systems.

Benchmarks · January 16, 2026 · 12 min read

AI Code Sandbox Benchmark 2026: Modal vs E2B vs Daytona vs Cloudflare vs Vercel vs Beam vs Blaxel

We evaluate seven leading AI code sandbox providers across developer experience and pricing to help you choose the right environment for executing AI-generated code.

Research · January 13, 2026 · 5 min read

The Threat Model for Coding Agents is Backwards

Most people think about AI security the wrong way. They imagine a user trying to jailbreak the model. With coding agents, the user is the victim, not the attacker.


Join our newsletter

We'll share announcements and content about AI safety.