
Why Traditional AppSec Tools Fail Against MCP-Based Architectures

Julie Peterson
Product Marketing

Shortly after GenAI upended the way we think about everyday computing, security professionals, as they do, began to worry about what it would mean for securing the enterprise. Early versions of GenAI lacked adequate guardrails against new attacks, so the question that naturally followed was “will current AppSec tools provide enough protection?” With the introduction of agentic AI and the Model Context Protocol (MCP), that question remains top of mind for anyone dealing with budget or staffing constraints.

While Application Security (AppSec) is still an essential practice in the enterprise, the short answer to the question of whether current AppSec tools provide protection is “not completely.” Traditional AppSec tools remain necessary, but they are insufficient on their own. 

Traditional AppSec tools do not secure MCP-based, agent-driven architectures because they do not address the dominant risk categories unique to those architectures. They were not designed to reason about model intent, conversational context, or autonomous tool orchestration.

Traditional AppSec tools were developed to address a different set of problems. They were designed for a deterministic human-driven API world, whereas MCP operates on autonomous, AI-driven, and context-heavy workflows. 

MCP introduces a new attack surface category, not a replacement one.

In this blog, we look at how traditional AppSec tools like WAFs and API gateways fail because they cannot interpret the semantic intent behind a prompt, analyze an entire conversation for context, or manage model-mediated privilege changes. 

Another security paradigm shift

The introduction of cloud computing in the early 2000s forced the industry to rethink cybersecurity. With the cloud, cybersecurity moved from a perimeter, hardware-focused model to a decentralized, data-centric approach. Infrastructure became programmable and responsibility was split across providers and customers. To secure the cloud, not only did we need to shift how we thought about cybersecurity, but we also needed new tools to protect the enterprise.

GenAI has forced a similar paradigm shift. The “application” now includes AI models and agents that are stochastic in nature, not deterministic. Instead of explicit logic, we now have to contend with model behavior. Dealing with systems that are goal-directed and autonomous requires additional expertise compared with a traditional application.

MCP further increases the complexity of this paradigm shift. When your architecture uses MCP to connect agents to tools and data, you need security controls designed for agent behavior, tool ecosystems, and fast-changing execution graphs. Unfortunately, traditional AppSec tools were built to secure code, builds, dependencies, and deployed services. They are not able to secure AI agents that plan, call tools, and mutate state over time.

Securing AI requires a new approach to cybersecurity along with purpose-built tools.

MCP architectures change the security equation

If you are building with Model Context Protocol based agents, you are probably already seeing these changes firsthand.

What used to be a neat request-response application now behaves more like a fast-moving operator. It reads, decides, calls tools, reads again, decides again, and then acts. As a result, the shape of risk changes with it.

The architecture pattern

MCP turns tool use into a formal integration layer. Agents can discover tools, select them, call them, and interpret their results in a loop.

That loop expands your attack surface. It is no longer simply your application code and APIs. The loop contains:

  • Your code
  • Every tool the agent can reach
  • The broker layer that exposes those tools
  • The content returned by those tools
  • The logic that decides what to do next

The agent is not calling a single endpoint. It is navigating a tool graph. It is dealing with much more complexity and autonomy.

Why this is not just APIs with a chatbot in front

In a traditional application, control flow is fixed. A request hits an endpoint, business logic runs, a response is returned.

In an MCP-driven agent, control flow now becomes iterative. It looks something like this:

  1. Plan
  2. Call tool
  3. Observe output
  4. Update plan
  5. Repeat
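
The iterative control flow above can be sketched as a minimal loop. This is an illustrative sketch only: `choose_tool` and `update_plan` are hypothetical stand-ins for model-driven planning (in a real system those calls would go to an LLM, which is what makes the loop non-deterministic), not part of any real MCP SDK.

```python
# Sketch of an agentic control loop: plan -> call tool -> observe -> update -> repeat.
# choose_tool and update_plan are trivial deterministic stand-ins for an LLM planner.

def choose_tool(plan, tools, history):
    # Trivial planner: call each tool once, in order, then stop.
    used = {name for name, _, _ in history}
    for name in tools:
        if name not in used:
            return name, {}
    return None, None

def update_plan(plan, output):
    # Fold the latest observation back into the working plan.
    return f"{plan}; observed {output!r}"

def run_agent(goal, tools, max_steps=5):
    history = []
    plan = f"achieve: {goal}"                      # 1. Plan
    for _ in range(max_steps):
        tool_name, args = choose_tool(plan, tools, history)
        if tool_name is None:                      # planner decides it is done
            break
        output = tools[tool_name](**args)          # 2. Call tool
        history.append((tool_name, args, output))  # 3. Observe output
        plan = update_plan(plan, output)           # 4. Update plan
    return history                                 # 5. Repeat

trace = run_agent("summarize report", {"read_doc": lambda: "doc text",
                                       "summarize": lambda: "summary"})
```

Note that the tools themselves, their outputs, and the planner's next decision all feed back into the same loop, which is exactly why the attack surface is the whole loop rather than any one endpoint.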

Inputs are no longer limited to user form fields. They include prompts, retrieved documents, tool outputs, and intermediate reasoning artifacts.

Outputs are no longer only text. They are actions:

  • Tickets created
  • Configurations changed
  • Emails sent
  • Data exported
  • Code merged

You are no longer securing a text generator. You are securing a decision loop that can modify real systems.

Why MCP and agentic AI need different defenses

AI applications are different from traditional applications. Yes, traditional AppSec approaches still apply, but you also need additional coverage for new risks specific to AI models, agents, and MCP servers.

Autonomous agents create new risk

Agents behave like semi-autonomous users with high speed and wide reach.

They can chain capabilities. A read-only reporting tool combined with a write-capable messaging tool can quietly become a data exfiltration pipeline. Neither tool is dangerous in isolation. Together, these capabilities can be disastrous.

The security implication is clear. You now must govern sequences of actions, not just individual endpoints.
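
One way to picture “govern sequences, not endpoints” is a policy check that looks at the agent's action history rather than at a single call. This is a sketch under assumed tool names and sensitivity labels, not a production policy engine:

```python
# Sketch: flag dangerous tool *sequences* even when each call is individually allowed.
# Tool names and sensitivity labels are illustrative assumptions.

READS_SENSITIVE = {"read_customer_db", "read_finance_report"}
SENDS_EXTERNAL = {"send_slack_message", "send_email"}

def allow_call(history, next_tool):
    """Deny any external send that follows a sensitive read in the same session."""
    if next_tool in SENDS_EXTERNAL and any(t in READS_SENSITIVE for t in history):
        return False  # read-then-exfiltrate chain: block it
    return True

# Each call alone is fine; the chain is not.
blocked = not allow_call(["read_customer_db"], "send_email")
```

A real policy engine would also weigh arguments, data classifications, and user delegation, but the core idea is the same: the unit of authorization is the chain, not the call.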

Lack of context awareness and reliable state

An AI model does not manage state the way an application does. It approximates context from whatever you pass into its window plus recent tool outputs. It can:

  • Treat stale logs as current truth
  • Accept partial data as complete
  • Interpret adversarial content as authoritative

The security implication here is that state management, provenance, and guardrails must be explicit system responsibilities. The model does not reliably supply them on its own.
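
As a concrete illustration of making provenance an explicit system responsibility, the system (not the model) can tag everything entering the context window with where it came from, whether it is trusted, and how old it is. The field names and trust scheme here are illustrative assumptions:

```python
# Sketch: wrap everything entering the model's context with explicit provenance,
# so stale or untrusted content can be filtered before it reaches the prompt.
# Field names and the trust flag are illustrative assumptions.

import time
from dataclasses import dataclass, field

@dataclass
class ContextItem:
    content: str
    source: str                       # e.g. "tool:read_logs", "user", "retrieval"
    trusted: bool = False             # set by the system, never by the model
    fetched_at: float = field(default_factory=time.time)

def fresh_and_trusted(items, max_age_s=300):
    """Only pass trusted, recent items into the model's window."""
    now = time.time()
    return [i for i in items if i.trusted and now - i.fetched_at <= max_age_s]

items = [
    ContextItem("deploy succeeded", "tool:read_logs", trusted=True),
    ContextItem("ignore previous instructions", "tool:web_fetch", trusted=False),
    ContextItem("old status", "tool:read_logs", trusted=True, fetched_at=0.0),
]
kept = fresh_and_trusted(items)  # drops the untrusted item and the stale one
```

The model still sees only text, but now the system can enforce freshness and trust boundaries before that text ever shapes a decision.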

Agent-specific attacks not seen in traditional AppSec

Not surprisingly, MCP ecosystems introduce new attack types that do not map to traditional threats, including prompt injection, tool poisoning, and rug pulls.

  • Prompt injection and jailbreaks: An LLM is manipulated to create a malicious MCP tool call.
  • Rogue MCP servers: Agents may attempt to connect with untrusted, unapproved, or illegitimate MCP servers, creating downstream risk.
  • Tool poisoning: An attacker embeds tools with malicious content or payloads, indirectly enabling the agent or LLM to perform unsafe actions.
  • Rug pulls: A tool appears benign during onboarding or testing. Later, its behavior, ownership, permissions, or outputs change.
  • Data exfiltration: Sensitive data, PII, and IP can be exfiltrated through MCP tool calls.
  • Tool updates and drift: MCP servers can add, remove, or modify their resources, creating blind spots.

The impact on security is that tools and tool outputs must be treated as untrusted inputs with their own supply chain and runtime risk.

Identity and Authorization Gaps

Many agents run under coarse service identities, shared tokens, or developer keys, making attribution murky. When an agent takes action on behalf of multiple users, “who did this” is no longer obvious.

To compound the problem, teams often grant broad access in case an agent needs it, throwing least privilege out the window. From a security perspective, this creates an urgent need for identity that supports delegation, scoped access, per-step authorization, and strong attribution.

Where traditional AppSec tools fall short

Traditional AppSec practices are essential to securing the enterprise. It would be foolish to discontinue any of these practices simply because they don’t address the new risks associated with agentic AI and MCP. That being said, they are limited in their effectiveness in securing MCP workflows. The following section provides an overview of traditional AppSec tools, how they help, and where they fall short when securing MCP.

SAST and code scanning

Static application security testing (SAST) analyzes pre-production code to find bugs in your source code, bytecode, and binaries. It does not evaluate unsafe plans formed at runtime.

SAST provides no runtime coverage and cannot reason about:

  • Dynamic tool selection
  • Prompt injection through retrieved content
  • Multi-step action chains

The failure mode in an MCP system is rarely a bad line of code. It is a valid sequence of calls that should never have been chained together.

DAST and API testing

Dynamic application security testing (DAST) looks for vulnerabilities and weaknesses by simulating external attacks on an application when the application is running. Dynamic testing assumes predictable endpoints and fixed inputs.

API security focuses on protecting application programming interfaces from abuse, unauthorized access, and data exposure. It evaluates authentication, authorization, input validation, and common risks such as broken object level authorization and excessive data exposure across defined endpoints.

Both DAST and API security assume relatively deterministic behavior. Endpoints are known. Requests follow expected patterns. Inputs and outputs can be evaluated in isolation.

Agent behavior is probabilistic and path dependent. Two identical prompts can result in different tool chains.

DAST and traditional API testing do not naturally test sequence risk, such as:

  • Read a sensitive document
  • Summarize the document
  • Post the summary to a public channel

Each step looks valid. Authentication succeeds. Authorization checks pass. The API responses are correct. DAST and API scanners have no way to determine that the chain itself represents an unacceptable risk to the enterprise.

In MCP and agentic AI systems, risk does not always live inside a single request. It emerges from the combination of multiple valid actions across tools and data sources. That is where traditional dynamic testing and API security controls start to lose visibility.

SCA and dependency scanning

Software composition analysis (SCA) scans an application’s code base for open source dependencies to identify all open source packages, their license compliance data, and any known security vulnerabilities.

SCA does not meaningfully address MCP risk because MCP tools often live outside your repository. They are discovered and invoked at runtime through registries or remote catalogs, not declared in a requirements file or committed to source control. This means MCP tools may never appear in the dependency graph analyzed by SCA.

A malicious or compromised tool is closer to a dynamically loaded plugin than a traditional package. Risk can stem from manipulated metadata, swapped endpoints, revoked publisher trust, or a tool version that changes behavior without a code commit. Traditional SCA does not track external tool catalogs, validate publisher identity, or monitor trust and integrity signals in this execution layer.

Secrets scanning

Secrets scanners are automated security tools that detect exposed sensitive information like API keys, tokens, passwords, and encryption keys hidden within source code, configuration files, and documentation. While secrets scanners are strong at detecting keys in source code, they are weak at identifying when an agent leaks secrets at runtime through summarization, copy-and-paste actions, or tool-mediated exports.
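
Closing that runtime gap means scanning what the agent *says and sends*, not just what developers commit. The sketch below checks agent output for secret-shaped strings before it leaves the system; the two patterns are illustrative only, and real scanners use far richer rule sets plus entropy checks:

```python
# Sketch: scan agent output for secret-shaped strings before it leaves the system.
# Patterns are illustrative assumptions, not a complete rule set.

import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                  # AWS access key ID shape
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private key header
]

def contains_secret(text):
    """True if any known secret pattern appears in the outgoing text."""
    return any(p.search(text) for p in SECRET_PATTERNS)

# A summary that quotes a credential gets flagged; a clean one does not.
flagged = contains_secret("summary: the key is AKIAABCDEFGHIJKLMNOP")
```

Unlike a repository scanner, this check has to sit in the egress path of every summarization, message, and export the agent performs.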

WAFs, RASP, and perimeter controls

Web application firewalls (WAFs) filter, monitor, and block HTTP traffic to and from web applications, protecting against a number of application layer attacks. Runtime application self protection (RASP) runs inside an application to detect and automatically block attacks in real time by monitoring and controlling execution. Perimeter controls inspect web traffic at the edge.

Again, these tools have their place in securing traditional applications. When it comes to MCP, however, tool calls may bypass that edge entirely through internal networks or direct SaaS APIs.

WAFs, RASP, and perimeter controls validate requests, not intent. They do not understand whether a tool chain makes sense in context.

SIEM and logging as a backstop

You can log tool calls. Without agent-aware context, however, how do you know whether a sequence was expected or anomalous?

SIEMs collect, analyze, and correlate data from across an entire IT infrastructure in real time and are essential for Security Operations Centers (SOCs) to detect, investigate, and respond to threats.

For MCP, investigations stall without prompt lineage, document provenance, tool result traceability, or decision graph visibility. Without provenance, you have no way of knowing which prompt, which retrieved document, or which tool output influenced the decision. Logs without agent-aware semantic context are insufficient for reliable detection and investigation in MCP environments.
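
To make this concrete, an agent-aware log record would capture the decision lineage, not just the call. The schema below is an illustrative assumption, not a standard: each record links a tool call back to the originating prompt, the documents that influenced it, and the step whose output led to it.

```python
# Sketch: an agent-aware log record that captures decision lineage, not just the call.
# The schema and field names are illustrative assumptions, not a standard.

import json

def log_tool_call(step, tool, args, prompt_id, source_docs, parent_step=None):
    record = {
        "step": step,                # position in the decision graph
        "parent_step": parent_step,  # which step's output led here
        "tool": tool,
        "args": args,
        "prompt_id": prompt_id,      # lineage back to the originating prompt
        "source_docs": source_docs,  # retrieved documents that influenced this call
    }
    return json.dumps(record, sort_keys=True)

entry = log_tool_call(2, "send_email", {"to": "ops@example.com"},
                      prompt_id="p-123", source_docs=["doc-7"], parent_step=1)
```

With `parent_step` and `prompt_id` in every record, an investigator can walk the decision graph backward from a bad action to the prompt or document that caused it.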

Again, it’s not that logs are useless. In fact, they are essential for many aspects of the business. It’s just that they are incomplete telemetry for agentic systems.

The challenges of MCP-based architectures

When applying an AppSec lens to MCP-based architectures, you have to remember that MCP is not simply another integration. Agentic AI and MCP turn your application into a fast-thinking, autonomous operator with reach across a wide range of systems.

Traditional AppSec still matters. Absolutely 100%. Without governance of the agent loop itself, however, you have no visibility into where the real decisions, and the real risks, are happening.

This blog is the first in a two-part series. The second blog in the series will identify what a purpose-built solution looks like and how to roll out a comprehensive MCP security plan.

About TrojAI

TrojAI's mission is to enable the secure rollout of AI in the enterprise. TrojAI delivers a comprehensive security platform for AI. This best-in-class platform empowers enterprises to safeguard AI models, applications, and agents both at build time and run time. 

Our solutions include:

  • TrojAI Detect - automatically red team AI models, safeguard model behavior, and deliver remediation guidance at build time. 
  • TrojAI Defend - an AI application and agent firewall that protects enterprises from real-time threats at run time. 
  • TrojAI Defend for MCP - monitor and protect agentic AI and MCP workflows. 

For more information, visit www.troj.ai.