What is OWASP?
The Open Worldwide Application Security Project (OWASP) is a non-profit organization that offers guidance on how to improve software security. The OWASP mission is to raise security awareness among developers, designers, architects, and security practitioners while helping them defend against real-world threats. OWASP provides resources and tools to help organizations identify, remediate, and prevent security vulnerabilities in software and web applications. It is best known for its Top 10 lists, which identify the most critical cybersecurity threats that organizations face today.
OWASP maintains and updates several different Top 10 lists, including its original Top 10 for web applications and a newer list focused on large language model (LLM) applications and Generative AI (GenAI).
OWASP frameworks are among the most widely referenced security standards in use today.
What is the OWASP Top 10 for LLMs?
OWASP launched its first version of the OWASP Top 10 for LLMs in 2023. It identifies threats, provides examples of vulnerabilities and attacks, and offers mitigation strategies specific to LLM and GenAI applications.
This list is designed to help technology and business leaders, developers, data scientists, and security experts who develop, build, or manage the security risks associated with LLM and GenAI applications.
Since the inception of the OWASP Top 10 for LLMs, updates have been released more quickly than with other OWASP lists. AI and LLM technology is evolving rapidly, creating new security risks and attack vectors at a fast pace. To keep up, frequent updates are needed to address these emerging threats.
OWASP revised its list in November 2025. This blog outlines the major changes to the Top 10.
The new 2025 OWASP Top 10 for LLMs
The 2025 OWASP Top 10 for LLMs has undergone a number of significant changes. Two new categories – system prompt leakage, vector and embedding weaknesses – have made the list. Five of the categories have been renamed and expanded to more accurately represent current threats. The entire list has been reprioritized, and two previous categories were removed from the list.
The following table shows updates to the 2025 OWASP Top 10 for LLMs.
LLM01: Prompt injection
Prompt injection maintains its position as the top threat on the OWASP Top 10 for LLMs. A prompt injection attack occurs when inputs to the application alter an LLM’s behavior or expected output. For 2025, OWASP has updated the definition to include both direct and indirect prompt injections. Direct prompt injection is when user inputs directly alter a model’s behavior. Indirect prompt injection is when an LLM accepts inputs from an external source, like a website, that result in unexpected outcomes.
Prompt injection remains a critical vulnerability because of the severity of the risk. A successful attack can result in disclosure of sensitive data or AI system infrastructure, unauthorized access, the manipulation of the model’s decision-making process, and more.
LLM02: Sensitive information disclosure
Sensitive information disclosure had the biggest jump of any category on the list, leaping from #6 on the previous list to #2 this year. Risks here include the disclosure of sensitive data like PII, IP, and even the exposure of proprietary algorithms. This category emphasizes the importance of preventing user data from entering the training model through data sanitization, as well as applying strict input validation to filter out sensitive data.
LLM03: Supply chain
Renamed from supply chain vulnerabilities, this category jumped two spots and reflects that LLMs are also susceptible to supply chain risk that can impact model and application behavior. LLM supply chains are vulnerable to a range of risks that can compromise training data, models, and deployment platforms, potentially leading to biased outputs, security breaches, or system failures.
LLM04: Data and model poisoning
Data and model poisoning dropped one spot and was previously named training data poisoning. Data and model poisoning happens when the data used for pre-training, fine-tuning, or embedding is altered to introduce vulnerabilities, backdoors, or biases. This can compromise model behavior and security, resulting in harmful outputs or even the impaired functioning of the model.
LLM05: Improper output handling
Improper output handling, formerly called insecure output handling, dropped several spaces from the previous OWASP Top 10 for LLMs. It refers to the outputs generated by LLMs that are not properly validated or sanitized before being passed downstream to other components or systems. Improper output handling can result in attacks such as XSS, privilege escalation, or remote code execution on backend systems.
LLM06: Excessive agency
As agentic architectures become more prevalent, LLMs are being granted greater autonomy. Excessive agency, which moved up one spot, has been expanded to address the associated risks. While agentic architectures enable GenAI to act more autonomously, reduced human oversight raises the potential for unintended consequences, requiring greater scrutiny and oversight.
LLM07: System prompt leakage
System prompt leakage is a new category for the 2025 OWASP Top 10. It refers to system prompts that inadvertently contain sensitive information that could be used to facilitate an attack. The security risk lies with prompts exposing underlying elements such as disclosing sensitive information or information about system guardrails.
LLM08: Vector and embedding weaknesses
The second new category on the 2025 list, vector and embedding weaknesses, focuses on vulnerabilities in Retrieval-Augmented Generation (RAG) and embedding-based methods. Weaknesses in how vectors and embeddings are generated, stored, or retrieved can be exploited, leading to unauthorized access, data leakage, data poisoning, and more.
LLM09: Misinformation
Misinformation, renamed and expanded from the former overreliance category, holds steady in the ninth position on the 2025 list. Misinformation refers to LLMs producing false or misleading information that appears credible. The risk occurs when factual inaccuracies, unsupported claims, or misrepresentations of expertise are used to make critical decisions without careful consideration.
LLM10: Unbounded consumption
Dropping six spots from last year, unbounded consumption has been renamed and expanded from its previous incarnation as denial of service. Unbounded consumption is when excessive or uncontrolled inferences to a model lead to such things as denial of service, financial losses, or service degradation. The expansion of this category reflects the increased importance of resource management and unexpected operating costs associated with hosting models, especially in the cloud.
The future of the OWASP Top 10
Attacks against GenAI models and applications are increasing as a direct result of their growing ubiquity. Securing these models and applications must be a critical focus for any organization deploying them.
The OWASP Top 10 for LLMs will continue to evolve alongside the rapid advancements in AI technologies and their increasing deployment across industries. As AI systems become more powerful and autonomous, the security risks and vulnerabilities associated with them will also grow in complexity. This will require organizations to adopt more sophisticated measures to protect their deployments.
As AI technology advances, so too will the strategies needed to defend it. The good news is that more and more organizations are adhering to security standards like those outlined in the OWASP Top 10 for LLMs to help secure their AI models and applications.
How TrojAI can help
Our mission at TrojAI is to enable the secure rollout of AI in the enterprise. We are a comprehensive AI security platform that protects AI/ML applications and infrastructure. Our best-in-class platform empowers enterprises to safeguard AI applications and models both at build time and run time. TrojAI Detect red teams AI models, safeguarding model behavior and delivering remediation guidance at build time. TrojAI Defend is a firewall that protects enterprises from real-time threats at run time.
By assessing the risk of AI model behaviors during the model development lifecycle as well as protecting model behavior at runtime, we deliver comprehensive security for your AI models and applications.
Want to learn more about how TrojAI secures the largest enterprises globally with a highly scalable, performant, and extensible solution?
Visit us at troj.ai now.