
Reference

Coding Agent Security Glossary

Plain-English definitions for the acronyms, concepts, controls, and risk terms behind Agent Access Security Brokers (AASB) and secure AI coding agent adoption.

By Raj Srinivasan, Co-Founder & CEO · Last updated April 20, 2026

Most Useful Acronyms

The six terms most readers need first.

AASB

Security and governance layer for AI coding agents.

MCP

Protocol that connects agents to tools and data sources.

HITL

Human approval step for high-impact actions.

JIT

Short-lived access granted only when needed.

RCE

Code or commands executed in unsafe ways.

DLP

Controls that stop sensitive data from leaving approved paths.

How the Stack Fits Together

AASB sits across this path to discover the agent estate, assess risk, and enforce policy over terminal commands, MCP actions, sensitive data movement, and high-impact approvals.

01

Developer request

A person asks for code, a fix, or a workflow.

02

AI coding agent

The agent plans steps and decides what to do next.

03

Tools & connectivity

Terminal, files, MCP servers, APIs, and other tools.

04

Systems & data

Source code, infra, tickets, databases, and business data.

AASB sits across this entire path, providing Discover, Assess, and Enforce at every layer.

Core Category & Architecture

Terms that explain the AASB category and why older controls don't fully cover agentic development.

AASB (Agent Access Security Broker)

The governance layer between AI coding agents and the tools, systems, files, and data they can reach. It gives teams centralized visibility, risk analysis, and policy enforcement over actions like terminal commands, MCP calls, and sensitive data access.

AI coding agent

An AI tool that goes beyond autocomplete and can edit files, run commands, call tools, or take multi-step actions. The key difference is agency: it can do work, not just suggest work.

Agentic AI

AI systems that autonomously plan, decide, and execute multi-step tasks with minimal human intervention. In the development context, agentic AI refers to coding tools that go beyond suggestion to take direct action: editing files, running shell commands, connecting to MCP servers, and interacting with infrastructure. The shift from conversational AI to agentic AI is what created the need for the AASB category.

Agentic system

A system where one or more AI agents plan, decide, and act toward a goal across multiple steps. In software development, that can include code changes, tool use, and external system access.

CASB (Cloud Access Security Broker)

A cloud-era security layer that governs employee use of SaaS apps and cloud data movement. CASB still matters, but it was not designed to control live terminal access or MCP actions inside coding workflows.

Control plane

The central layer where security and engineering teams set policy, view activity, and enforce boundaries across many tools. In the AASB model, the control plane sits above individual coding agents rather than inside one IDE.

Runtime governance

Security that applies while the agent is working, not just after code is committed or scanned. This matters because agents can take impactful actions in real time.

Agent observability

The ability to see what AI coding agents are doing in real time: which tools they invoke, which files they read, which MCP servers they connect to, which commands they execute, and what data they send to model providers. Agent observability is the foundation of AASB governance because you cannot enforce policy over actions you cannot see.

Risk posture

The overall exposure created by an agent's permissions, settings, connected tools, and behavior. Two teams using the same agent can have very different risk postures depending on how it is configured.

OWASP Agentic Top 10 / ASI

OWASP's risk framework for agentic applications. It names issues such as tool misuse, privilege abuse, memory poisoning, unexpected code execution, and rogue agents.

How Agents Connect & Act

Where AI coding agents work and how they interact with tools, systems, and other agents.

IDE (Integrated Development Environment)

The editor where developers write and review code. Many AI coding agents live inside the IDE, which is why governance has to fit the developer workflow instead of fighting it.

CLI and terminal

The command-line environment where agents or developers run commands. This is one of the highest-risk surfaces because a single command can alter systems, data, or infrastructure.

Claude Code

Anthropic's CLI-based AI coding agent that runs directly in the developer's terminal with full shell permissions, file system access, and native MCP support. Claude Code communicates directly with Anthropic's API with no intermediary proxy. Its combination of full shell access and direct API communication gives it one of the broadest attack surfaces of any coding agent, which is why it is a primary governance target for AASB.

Cursor

An AI-powered code editor (forked from VS Code) with integrated agent capabilities including file editing, terminal command execution, and multi-model support through Cursor's own API proxy. Cursor introduces a two-hop trust chain because developer data flows through both Cursor's infrastructure and the downstream model provider.

Codex

OpenAI's cloud-sandboxed coding agent that executes tasks in an isolated environment with a snapshot of the developer's repository. The sandbox limits destructive command blast radius, but the full repository contents (including any secrets checked into the repo) are transmitted to OpenAI's infrastructure.

GitHub Copilot

GitHub's AI coding assistant integrated into VS Code, JetBrains, and other IDEs. Copilot provides code suggestions and increasingly supports agent-mode capabilities including file edits and terminal access. Enterprise governance requires visibility into what context Copilot accesses and what actions it takes beyond code completion.

LLM (Large Language Model)

The model that interprets instructions and generates code or text. An LLM becomes much riskier when it is connected to tools and allowed to take action.

MCP (Model Context Protocol)

An open protocol that lets AI applications connect to tools, data sources, and services in a standard way. MCP turns tool access into a first-class part of the agent experience.

MCP server

A service that exposes tools or data to an agent through MCP. Examples might include code search, ticketing, databases, SaaS apps, or internal systems.

MCP action

A concrete operation the agent performs through an MCP connection, such as reading a record, updating a ticket, or calling an external system. These are exactly the actions governance needs to inspect.

Shadow MCP

MCP servers configured on developer machines without the knowledge or approval of IT or the security team. Shadow MCP connections are configured locally in JSON files with no centralized registry, no approval workflow, and no security visibility. They inherit the permissions of the developer who installed them: an unapproved MCP server wrapping a production database gives the agent the same access as that developer, with no audit trail.
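Because shadow MCP servers live in local JSON files, a first discovery pass can be as simple as scanning for those files. A minimal sketch, assuming hypothetical config paths and the common `mcpServers` JSON key; real agents each use their own locations and schema:

```python
import json
from pathlib import Path

# Illustrative config locations only; each agent uses its own paths.
CANDIDATE_CONFIGS = [
    Path.home() / ".mcp" / "config.json",
    Path.home() / ".config" / "agent" / "mcp_servers.json",
]

def discover_mcp_servers(paths=CANDIDATE_CONFIGS):
    """Return MCP server entries found in local JSON config files."""
    found = []
    for path in paths:
        if not path.is_file():
            continue
        try:
            config = json.loads(path.read_text())
        except (OSError, json.JSONDecodeError):
            continue  # unreadable or malformed file: skip, do not crash the scan
        for name, spec in config.get("mcpServers", {}).items():
            found.append({
                "name": name,
                "command": spec.get("command"),
                "source": str(path),
            })
    return found
```

An inventory like this is only the first step; comparing it against an approved allowlist is what turns discovery into governance.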

A2A (Agent-to-Agent)

Communication, protocols, or directories that let agents discover and talk to other agents. A2A expands the trust surface because instructions and authority can now flow between agents as well as from humans.

API (Application Programming Interface)

A structured way software systems exchange data or trigger actions. Agents often use APIs directly or through MCP-connected tools.

Agent rules

Local or organization-level instructions that shape how an agent behaves, what tools it can use, and what it should avoid. Good rules reduce risk; hidden or overly permissive rules can quietly expand it.

Sub-agent

A specialized helper agent given a narrower task by a primary agent. Sub-agents can improve speed or focus, but they also create more identities, permissions, and behaviors to govern.

RAG (Retrieval-Augmented Generation)

A pattern where an agent retrieves documents or data to ground its answer or action. It is helpful for context, but risky if the retrieved content is poisoned, misleading, or sensitive.

Context window

The active set of instructions, messages, files, and retrieved data the model can see during a session. Anything inside that window can influence behavior.

Governance & Control Language

Terms security and engineering teams use when they want safe deployment without killing productivity.

Auto-approve

A setting that lets an agent take certain actions without asking the user each time. It improves speed, but if scoped too broadly it turns small mistakes into fast incidents.

Agent autonomy levels

The spectrum of how much independence an AI coding agent has, from suggestion-only (the agent proposes, the human decides) to fully autonomous (the agent plans, decides, and executes without human confirmation). Most enterprise governance policies define different control requirements at each level: audit-only for low autonomy, human-in-the-loop approval for medium, and strict policy gates for high autonomy operations.
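The level-to-control mapping described above can be made explicit in code. A hypothetical sketch; the level names and control labels are illustrative, not a standard taxonomy:

```python
# Hypothetical mapping of autonomy level to minimum control, following the
# low/medium/high split described above.
CONTROLS_BY_AUTONOMY = {
    "suggestion_only": "audit",          # agent proposes, human decides
    "supervised": "human_in_the_loop",   # agent acts, human approves impact
    "autonomous": "policy_gate",         # agent acts, strict gates enforced
}

def required_control(autonomy_level: str) -> str:
    """Return the minimum governance control for a given autonomy level."""
    # Unknown levels fail closed to the strictest control.
    return CONTROLS_BY_AUTONOMY.get(autonomy_level, "policy_gate")
```

Failing closed on unknown levels mirrors the zero-trust principle defined later in this glossary.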

Audit mode

A policy setting where actions are observed and logged but not yet blocked. Teams often start here to understand real behavior before moving to warnings, approvals, or enforcement.

Allowlist / sanctioned tools

The explicit list of approved tools, servers, domains, or actions an agent is allowed to use. This is one of the simplest ways to keep risky or unknown connections out of the workflow.

Terminal governance

Policy enforcement over the shell commands that AI coding agents execute in the developer's terminal. Terminal governance evaluates commands at the point of execution and can audit, warn, block, or require human approval based on the command's risk level, target environment, and the agent's current permissions. This is critical because a single terminal command can alter databases, deploy infrastructure, or delete production resources.
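The audit/warn/block/approve decision can be sketched as a tiered command classifier. The patterns below are illustrative assumptions; a real broker would combine much richer command parsing with environment and permission context:

```python
import re

# Illustrative patterns only; a production broker would use far richer analysis.
BLOCK_PATTERNS = [r"\brm\s+-rf\s+/", r"\bdrop\s+table\b"]
APPROVE_PATTERNS = [r"\bterraform\s+(apply|destroy)\b", r"\bkubectl\s+delete\b"]
WARN_PATTERNS = [r"\bcurl\b.*\|\s*(ba)?sh", r"\bpip\s+install\b"]

def evaluate_command(command: str) -> str:
    """Return an enforcement decision: block, require_approval, warn, or audit."""
    lowered = command.lower()
    if any(re.search(p, lowered) for p in BLOCK_PATTERNS):
        return "block"
    if any(re.search(p, lowered) for p in APPROVE_PATTERNS):
        return "require_approval"
    if any(re.search(p, lowered) for p in WARN_PATTERNS):
        return "warn"
    return "audit"  # everything else is observed and logged
```

Note the ordering: destructive commands are checked first, so a command matching multiple tiers gets the strictest decision.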

Guardrails

The policy boundaries that limit what agents can see, touch, and do. Guardrails can warn, block, redact, or route actions to approval.

HITL (Human in the Loop)

A person reviews or approves high-impact operations before they execute. HITL is especially important for destructive, privileged, or externally visible actions.

IAM (Identity and Access Management)

The systems that govern who or what gets access to which resources. AASB works alongside IAM, but adds action-level governance inside agent workflows.

NHI (Non-Human Identity)

A machine identity such as a service account, API key, token, or agent identity. AI coding agents often operate with NHIs, which is why their permissions need tight control.

JIT access / ephemeral credentials

Just-in-Time access grants permissions only when needed, and ephemeral credentials expire quickly. Together they limit how long an agent or tool can keep sensitive access.
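The two ideas compose naturally: mint a credential only when the agent needs a scope, and have it expire on its own. A minimal sketch under those assumptions; the class and function names are hypothetical:

```python
import secrets
import time
from dataclasses import dataclass, field

@dataclass
class EphemeralCredential:
    """A short-lived token that stops validating after ttl_seconds."""
    scope: str
    ttl_seconds: float = 300.0
    token: str = field(default_factory=lambda: secrets.token_urlsafe(32))
    issued_at: float = field(default_factory=time.monotonic)

    def is_valid(self) -> bool:
        return (time.monotonic() - self.issued_at) < self.ttl_seconds

def grant_jit(scope: str, ttl_seconds: float = 300.0) -> EphemeralCredential:
    """Mint a credential only at the moment the agent needs the scope."""
    return EphemeralCredential(scope=scope, ttl_seconds=ttl_seconds)
```

Even if an agent leaks such a token into a log or prompt, the exposure window is bounded by the TTL rather than lasting indefinitely.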

Least privilege

Give the minimum access needed to complete a task. This is a core rule for human users and even more important for agents that move quickly and act repeatedly.

Least agency

Give the minimum autonomy needed to complete a task. If an agent does not need to act on its own, it should not be configured to do so.

Policy as code

Defining security and governance policies in machine-readable, version-controlled formats rather than as manual procedures or wiki documentation. In AASB, policy as code means terminal command rules, MCP allowlists, data guardrails, and approval workflows are defined declaratively, reviewed in pull requests, and deployed consistently across the organization.
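A policy defined as data rather than procedure might look like the sketch below. The schema is hypothetical, not a real AASB format; the point is that the document is declarative, diffable, and reviewable in a pull request:

```python
# A hypothetical policy document, as it might look checked into version control.
POLICY = {
    "mcp_allowlist": ["code-search", "ticketing"],
    "default_action": "warn",  # applied to anything not on the allowlist
}

def evaluate_mcp_server(server_name: str, policy: dict = POLICY) -> str:
    """Allow allowlisted MCP servers; apply the declared default otherwise."""
    if server_name in policy["mcp_allowlist"]:
        return "allow"
    return policy["default_action"]
```

Because the policy is plain data, tightening `default_action` from `warn` to `block` is a one-line, reviewable change that rolls out everywhere at once.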

Policy engine / PEP / PDP

The logic that decides whether an action is allowed, denied, or routed for approval. The Policy Decision Point makes the decision; the Policy Enforcement Point is where the action gets intercepted.
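The PDP/PEP split can be shown in a few lines: one function decides, another intercepts and enforces. The rules and context keys here are illustrative assumptions:

```python
from typing import Callable

def policy_decision(action: str, context: dict) -> str:
    """PDP: decide allow / deny / require_approval (illustrative rules)."""
    if action == "delete":
        return "deny"
    if context.get("environment") == "production" and action == "write":
        return "require_approval"
    return "allow"

def enforce(action: str, context: dict, execute: Callable[[], str]) -> str:
    """PEP: intercept the action and apply whatever the PDP decided."""
    decision = policy_decision(action, context)
    if decision == "allow":
        return execute()
    return f"{decision}: {action} was not executed"
```

Keeping the two roles separate means the decision logic can be updated centrally while enforcement points stay thin and close to where actions happen.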

Secret detection and redaction

The ability to identify and remove or mask sensitive credentials (API keys, tokens, connection strings, SSH keys, passwords) from agent context before it is transmitted to a model provider. In AASB, secret detection operates at the prompt level, catching credentials that agents read from environment files, configuration, or source code and preventing them from leaving the developer environment.
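Prompt-level redaction is often pattern-driven. A minimal sketch; the patterns below cover a couple of well-known key formats plus a generic `key=value` shape, whereas real detectors combine many provider formats with entropy checks:

```python
import re

# Illustrative patterns; not an exhaustive or production-grade detector.
SECRET_PATTERNS = [
    (re.compile(r"AKIA[0-9A-Z]{16}"), "[REDACTED_AWS_KEY]"),
    (re.compile(r"ghp_[A-Za-z0-9]{36}"), "[REDACTED_GITHUB_TOKEN]"),
    (re.compile(r"(?i)(api[_-]?key|token|password)\s*[:=]\s*\S+"), r"\1=[REDACTED]"),
]

def redact(text: str) -> str:
    """Mask known secret formats before context leaves the developer machine."""
    for pattern, replacement in SECRET_PATTERNS:
        text = pattern.sub(replacement, text)
    return text
```

Running this over agent context before transmission means a credential read from a `.env` file never reaches the model provider, even if the agent faithfully copies it into a prompt.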

Zero trust

A model where nothing is trusted by default. Every identity, connection, and action must be verified based on current context, not assumption.

Data guardrails / DLP

Controls that prevent secrets, source code, PII, or regulated data from flowing to unauthorized tools, models, or destinations.

Audit trail / compliance evidence

The record of what the agent attempted, what policy applied, who approved it, and what happened. This evidence is critical for investigations, audits, and policy review.

Risk & Attack Language

The most important terms for understanding why AI coding agents create a new security and compliance problem.

Prompt injection

An attack that tricks an agent through malicious instructions. The model sees the text as relevant guidance even when it should ignore it.

Indirect prompt injection

Malicious instructions hidden in a web page, document, email, issue ticket, or tool output that the agent later reads. This is especially dangerous because the user may never notice it.

Tool misuse

When an agent uses a legitimate tool in an unsafe or unintended way. The tool itself can be real and approved, but the action can still be harmful.

Identity and privilege abuse

When an agent misuses inherited, cached, or overly broad access. This often happens when permissions are scoped for convenience instead of control.

Agentic supply chain vulnerability

Risk introduced by the live ecosystem around the agent, including MCP servers, plugins, registries, packages, descriptors, third-party agents, and update channels. Unlike static dependencies, these components may be loaded or trusted at runtime.

Unexpected code execution / RCE

When agent-generated or agent-invoked output becomes executable behavior that can compromise a machine, container, or connected system. Examples include unsafe shell commands, package installs, or chained tool calls.

Memory and context poisoning

When an attacker seeds or corrupts long-term memory, retrieved context, or shared knowledge so the agent behaves unsafely later. The danger is persistence: the bad input keeps influencing future decisions.

Insecure inter-agent communication

Weak authentication, integrity checks, or protocol handling when agents talk to each other. This can enable spoofing, replay, message tampering, or false trust relationships.

Cascading failure

A chain reaction where one bad output, poisoned input, or unsafe tool action spreads across multiple agents or systems. Agentic environments are vulnerable because they automate planning, delegation, and execution.

Human-agent trust exploitation

When users approve unsafe actions because the agent sounds confident, helpful, or authoritative. The problem is not only the model's output; it is the human tendency to over-trust it.

Rogue agent

An agent that drifts from its intended goal or approved scope and keeps acting in harmful ways. Individual actions may look normal while the overall behavior becomes deceptive or dangerous.

Shadow AI

Unapproved or unknown AI tools, agents, MCP servers, or settings running outside security oversight. Shadow AI creates blind spots, inconsistent policies, and surprise audit problems.

Data exfiltration

Unauthorized movement of source code, secrets, customer data, or business information out of approved systems. In agent workflows, this can happen through tools, prompts, logs, MCP calls, or chained actions.

Blast radius

The size of the damage if something goes wrong. Good governance tries to keep the blast radius small even when an agent fails or is abused.

Supporting Security & Operations Terms

How AASB fits beside existing security programs, standards, and engineering practices.

AppSec

Application security. AppSec helps find vulnerabilities in code and release pipelines. It is necessary, but it does not by itself govern what an agent is doing live inside the developer workflow.

EDR

Endpoint Detection and Response. EDR monitors endpoints for malicious behavior, but it can miss agent misuse when the agent relies on trusted binaries and valid credentials rather than classic malware.

SBOM and AIBOM

A Software Bill of Materials lists software components and dependencies. An AI Bill of Materials extends the idea to AI systems, models, prompts, and related artifacts to improve visibility and provenance.

PKI and mTLS

Public Key Infrastructure and mutual TLS are ways to establish trusted, authenticated communication between systems. In agentic environments, they help secure inter-agent and tool connections.

Sandbox

An isolated environment where code execution or tool actions can be contained. Sandboxing reduces blast radius, but it does not replace governance over what the agent is trying to do.

Code sandboxing

Running AI-generated or AI-executed code inside an isolated environment (container, VM, or cloud sandbox) that limits its access to the host filesystem, network, and credentials. Code sandboxing is a complementary control to AASB: sandboxing contains the blast radius of a bad action, while AASB governs whether the action should happen at all. Tools like Codex use cloud sandboxes by default; CLI agents like Claude Code do not.

Provenance

Proof of where a tool, prompt, model, package, or descriptor came from and whether it was altered. Strong provenance helps teams trust the right components and reject the wrong ones.

Secrets

Sensitive credentials like API keys, tokens, SSH keys, and connection strings. Agents should never have broad, persistent access to secrets without tight controls.

DX (Developer Experience)

The day-to-day feel of using the toolchain. Good AI governance should improve trust and adoption without making secure paths slower than insecure ones.

Vibe coding

Informal, fast, heavily agent-assisted coding where a user delegates large parts of the implementation. Useful for speed, but risky if execution, permissions, and tool use are left ungoverned.

Where Unbound Fits

The Agent Access Security Broker for AI coding

Unbound is building the AASB category around the control surfaces that matter most: discovery, visibility, and policy over the tools, connectors, and actions AI coding agents use every day.

Discover

The real agent estate across AI coding tools, MCP servers, rules, and risky configurations.

Assess

Posture before incidents by surfacing unsafe autonomy, broad permissions, and unsanctioned connections.

Enforce

Guardrails through audit, warn, block, and approval-based workflows without forcing developers off their tools.

The goal is not to slow down engineering. The goal is to make AI coding adoption governable, auditable, and safe at enterprise scale.

Turn the glossary into action

Start with free visibility into the AI coding agents, MCP servers, and risky configurations already in use. Or book a demo to see how Unbound governs terminal commands, tool use, approvals, and data guardrails in live developer workflows.