Frequently Asked Questions

Everything you need to know about AI coding agent security, AASBs, and how Unbound AI works.

What is AI coding agent security?

AI coding agent security is the practice of discovering, assessing, and governing the AI-powered development tools (like Cursor, Claude Code, GitHub Copilot, and Windsurf) that developers use to write, review, and deploy code. These agents operate with broad permissions — terminal access, file system writes, MCP server connections, and auto-approve configurations — that create security blind spots that traditional tools like SAST, DAST, and CASBs were not designed to address.
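To make the auto-approve risk concrete: some agents read permission rules from a project-level settings file. For example, Claude Code supports a `.claude/settings.json` with allow rules; the patterns below are illustrative only, and an overly broad configuration like this lets the agent run shell commands without prompting the developer:

```json
{
  "permissions": {
    "allow": [
      "Bash(*)",
      "Read(**)",
      "Edit(**)"
    ]
  }
}
```

Because files like this live in individual repos and home directories, security teams typically have no central view of which developers have granted which permissions.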

What is an Agent Access Security Broker (AASB)?

An Agent Access Security Broker (AASB) is a security platform purpose-built to govern AI coding agents in enterprise environments. Coined by Unbound AI, the AASB category addresses a gap that CASBs and traditional security tools cannot fill: real-time discovery, risk scoring, and policy enforcement for AI development tools like Cursor, Claude Code, Copilot, and Windsurf.

How is an AASB different from a CASB?

A CASB governs access to SaaS applications — it controls who can use Salesforce, Google Drive, or Slack and what data flows through them. An AASB governs AI coding agents — tools like Cursor, Claude Code, and Copilot that operate inside developer environments with terminal access, file permissions, and MCP server connections. CASBs have no visibility into agent configurations, auto-approve settings, or MCP traffic.

What are MCP servers and why are they a security risk?

MCP (Model Context Protocol) servers are local or remote services that extend AI coding agents with additional capabilities — database access, API calls, file operations, and third-party integrations. They are a security risk because they run with the permissions of the developer who installed them, often with no visibility for IT or security teams.
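For context, MCP servers are typically registered in a client-side config file. A minimal sketch, following the common `mcpServers` format used by MCP clients (the server name, package, and connection string here are illustrative):

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost/production_db"
      ]
    }
  }
}
```

A few lines like these can hand an agent direct access to a production database, installed by a single developer with no review step.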

What risks do AI coding agents create for enterprises?

The risks AI coding agents create fall into four primary categories: data leakage through prompts sent to external LLM providers, unauthorized actions via auto-approve settings, shadow IT sprawl as developers adopt tools faster than security teams can track them, and supply chain risk from unvetted MCP servers and plugins.

How does Unbound AI work?

Unbound AI deploys lightweight hooks into existing AI coding tools and a central gateway for MCP traffic. Setup takes under 15 minutes via existing MDM. Once deployed, Unbound automatically discovers every AI agent, MCP server, plugin, and configuration across the engineering org, scores risk posture per developer, and enforces governance policies.

Does Unbound slow down developers?

No. Hooks are lightweight and do not modify agent behavior in audit mode. Governance policies can be rolled out progressively — starting with audit-only visibility, then adding warnings, then requiring approval for high-risk operations, and blocking only truly dangerous configurations.

What AI coding tools does Unbound support?

Unbound supports 20+ AI coding tools including Cursor, Claude Code (Anthropic), GitHub Copilot, Windsurf, Cline, Amazon CodeWhisperer, Tabnine, and others. It also governs MCP servers regardless of which agent connects to them.

How long does it take to deploy Unbound?

Initial deployment takes under 15 minutes. Unbound integrates with existing MDM infrastructure to deploy lightweight hooks across the engineering org. Full agent discovery results are available within hours.

Is Unbound free?

Unbound offers a 30-day free trial of the full Pro plan with no credit card required. After 30 days, accounts automatically move to a free Starter plan that retains agent discovery, risk scoring, and dashboard access.

Ready to govern your AI coding agents?

Full Pro plan. 30 days free. No credit card required.