OWASP Top 10 for LLM Applications 2025

OWASP Top 10 for LLM Applications 2025

Purpose

The OWASP Top 10 is a widely recognized document outlining the most critical security risks facing web applications. Specifically focusing on LLM (Large Language Model) applications, these risks encompass vulnerabilities that could compromise the integrity, availability, and confidentiality of data processed by such models. This page outlines Unique’s understanding and approach on how to mitigate these risks to ensure the security and trustworthiness of our services.

All answers refer to Version 2025 of the OWASP Top 10 for LLM Applications

Unique’s approach to OWASP Top 10 for LLM Applications

LLM01: Prompt injections

Prompt Injection Vulnerabilities in LLMs involve crafty inputs leading to undetected manipulations. These inputs can affect the model even if they are imperceptible to humans. The impact ranges from data exposure to unauthorized actions, serving attackers' goals.

Prevention and Mitigation Strategies

  • The platform is only available to registered internal users, which reduces the attack surface.

  • Unique has an AI Governance Frameworkin place, which helps manage risks related to Privacy and Security, Accountability, Transparency, Explainability, Reliability and Safety. This includes user feedback on responses and a human-in-the-loop concept.

  • Prompts are logged in an unalterable audit log. This allows for tracing back any prompt to the user who executed it.

  • User prompts are sanitized and output is escaped according to best practices.

  • We run regular Benchmarking questions to ensure the output of the LLM is in line with expectations. This involves establishing appropriate risk thresholds and providing the flexibility to disable specific use cases or functions if benchmarking results fall below predefined thresholds.

  • Unique offers an API that can be integrated with existing Data Leakage Prevention (DLP) systems to monitor the data being processed. The DLP system scans the prompts during post-chat analysis, ensuring that sensitive information is not inadvertently exposed during interactions

  • Unique runs a managed Bug Bounty Program with external researchers to continuously harden our solution against crafted inputs.

LLM02: Sensitive Information Disclosure

LLM applications risk exposing sensitive data, proprietary algorithms, or confidential details through their output. This can result in unauthorized access to sensitive data, intellectual property breaches, and privacy violations.

Prevention and Mitigation Strategies

  • The platform is only available to registered internal users, which reduces the attack surface.

  • User queries are sanitized in the prompts and output is escaped according to best practices.

  • Unique has technical and contractual safeguards in place to make sure that no data is used for any model training (if not explicitly agreed with the client).

  • Unique is working closely with Microsoft and only uses official Microsoft Azure OpenAI LLMs which are pre-trained and never use any data for model training.

  • Prompt context is only extended by the data the user that is prompting already has access to.

  • Unique runs a managed bug bounty program with external researchers to continuously harden our solution against crafted inputs.

LLM03: Supply Chain

LLM supply chains are susceptible to various vulnerabilities, which can affect the integrity of training data, models, and deployment platforms. These risks can result in biased outputs, security breaches, or system failures.

Prevention and Mitigation Strategies

  • Unique is working closely with Microsoft and only uses official Microsoft Azure OpenAI LLMs which are pre-trained and never use any data for model training.

  • Unique has a Secure Software Development Lifecycle in place that includes regular scanning for outdated or vulnerable dependencies, vulnerabilities in the software or configuration mistakes.

  • We run regular Benchmarking questions to ensure the output of the LLM is in line with expectations. This involves establishing appropriate risk thresholds and providing the flexibility to disable specific use cases or functions if benchmarking results fall below predefined thresholds.

  • Unique maintains an up-to-date inventory of components to ensure we have an accurate and signed inventory, preventing tampering with deployed packages.

  • We implement strict monitoring and auditing practices to prevent and quickly detect any abuse

LLM04: Training Data Poisoning

Data poisoning occurs when pre-training, fine-tuning, or embedding data is manipulated to introduce vulnerabilities, backdoors, or biases. This manipulation can compromise model security, performance, or ethical behavior, leading to harmful outputs or impaired capabilities.

Prevention and Mitigation Strategies

  • Unique has technical and contractual safeguards in place to make sure that no data is used for any model training (if not explicitly agreed with the client).

  • Unique is working closely with Microsoft and only uses official Microsoft Azure OpenAI LLMs which are pre-trained and never use any data for model training.

  • Unique has an AI Governance Framework in place, which helps in managing risks related to Privacy and Security, Accountability, Transparency, Explainability, Reliability and Safety. This includes user feedback on responses and a human-in-the-loop concept.

  • We run regular Benchmarking questions to ensure the output of the LLM is in line with expectations. This involves establishing appropriate risk thresholds and providing the flexibility to disable specific use cases or functions if benchmarking results fall below predefined thresholds.

LLM05: Improper Output Handling

Improper Output Handling refers to insufficient validation, sanitization, and handling of the outputs generated by large language models before they are passed downstream to other components and systems. Successful exploitation can result in XSS and CSRF in web browsers as well as SSRF, privilege escalation, or remote code execution on backend systems.

Prevention and Mitigation Strategies

  • The platform is only available to registered internal users, which reduces the attack surface.

  • User queries are sanitized in the prompts and output is escaped according to best practices.

  • Unique implements context-aware output encoding based on where the LLM output will be used.

  • We employ strict Content Security Policies (CSP) to mitigate the risk of XSS attacks from LLM-generated content.

  • Unique runs a managed Bug Bounty Program with external researchers to continuously harden our solution against crafted inputs.

LLM06: Excessive Agency

An LLM-based system is often granted a degree of agency through extensions, tools, skills, or plugins to undertake actions in response to prompts. In agent-based systems, the model makes repeated calls to LLMs using output from previous invocations to ground and direct subsequent ones.

Prevention and Mitigation Strategies:

  • Minimize extensions that LLM agents can call to only those necessary.

  • Limit extension functionality to only what's required (e.g., for email access, use read-only extensions if that satisfies requirements).

  • Avoid open-ended extensions (e.g., general shell access) and use more granular functionality.

  • Minimize extension permissions to downstream systems.

  • Execute extensions in user's context with minimum privileges - Require human approval for high-impact actions.

  • Implement complete mediation in downstream systems rather than relying on LLM to decide if actions are allowed.

  • Unique is a knowledge management system that gives its users access to knowledge. It does neither perform actions on behalf of users nor does it make use of LLM plugins to do so.

  • Unique has an AI Governance Framework in place, which helps in managing risks related to Privacy and Security, Accountability, Transparency, Explainability, Reliability and Safety. This includes user feedback on responses and a human-in-the-loop concept.

LLM07: System Prompt Leakage

The system prompt leakage vulnerability in LLMs refers to the risk that the system prompts or instructions used to steer the behavior of the model can also contain sensitive information that was not intended to be discovered. When discovered, this information can be used to facilitate other attacks.

Prevention and Mitigation Strategies

  • Unique separates sensitive data from system prompts, never including API keys, authentication information, or sensitive configuration details within prompts.

  • We implement a system of guardrails outside of the LLM itself to enforce security policies.

  • Unique ensures that critical security controls are enforced independently from the LLM, never delegating authorization or access control decisions to the model.

  • Our system monitoring and logging mechanisms can detect attempts to extract system prompts.

LLM08: Vector and Embedding Weaknesses

Vectors and embeddings vulnerabilities present significant security risks in systems utilizing Retrieval Augmented Generation (RAG) with Large Language Models (LLMs). Weaknesses in how vectors and embeddings are generated, stored, or retrieved can be exploited to inject harmful content, manipulate model outputs, or access sensitive information.

Prevention and Mitigation Strategies

  • Unique implements fine-grained access controls and permission-aware vector and embedding stores.

  • We ensure strict logical and access partitioning of datasets in the vector database to prevent unauthorized access between different classes of users or different groups.

  • Our data validation pipelines robustly validate knowledge sources and regularly audit the integrity of the knowledge base.

  • Unique maintains comprehensive logging of retrieval activities to detect and respond to suspicious behavior.

LLM09: Misinformation

Misinformation from LLMs poses a core vulnerability for applications relying on these models. Misinformation occurs when LLMs produce false or misleading information that appears credible. This vulnerability can lead to security breaches, reputational damage, and legal liability.

Prevention and Mitigation Strategies

  • Unique offers fast and easy fact-checking via the "source function" to verify the source of information provided by LLMs before utilizing it for decision-making, information dissemination, or other critical functions. When an output is generated, the respective sources including page numbers are highlighted making it easy for the user to check for factual correctness.

  • Unique has an AI Governance framework in place, which helps in managing risks related to Privacy and Security, Accountability, Transparency, Explainability, Reliability and Safety. This includes user feedback on responses and a human-in-the-loop concept.

  • We run regular Benchmarking questions to ensure the output of the LLM is in line with expectations. This involves establishing appropriate risk thresholds and providing the flexibility to disable specific use cases or functions if benchmarking results fall below predefined thresholds.

  • Unique uses Retrieval-Augmented Generation to enhance the reliability of model outputs by retrieving relevant and verified information from trusted external databases

  • Unique has implemented a built-in Hallucination Evaluation in a RAG-Based Setting that evaluates how closely generated responses align with retrieved content. This provides users with a clear indication of trustworthiness, supports better decision-making in high-accuracy contexts, and enables continuous improvement through feedback mechanisms.

LLM10: Unbounded Consumption

Unbounded Consumption occurs when a Large Language Model (LLM) application allows users to conduct excessive and uncontrolled inferences, leading to risks such as denial of service (DoS), economic losses, model theft, and service degradation. The high computational demands of LLMs make them vulnerable to resource exploitation.

Prevention and Mitigation Strategies

  • The platform is only available to registered internal users, which reduces the attack surface. It can additionally be locked down using IP blocking so that only internal IPs of the customer can access the system.

  • Azure DDoS protection can be configured with additional costs for the customer.

  • User prompts are sanitized and output is escaped according to best practices.

  • Unique rate-limits authentication APIs to block repeated or automated calls and significantly slow down attempts to brute-force logins or token creation.

  • We implement input validation, monitoring, and resource allocation management to prevent any single user or request from consuming excessive resources.

  • Unique sets timeouts and throttles processing for resource-intensive operations.

  • Our system is designed to degrade gracefully under heavy load, maintaining partial functionality rather than complete failure.

Conclusion

Unique is committed to maintaining the highest security standards in our LLM applications. By proactively addressing the OWASP Top 10 for LLM Applications 2025, we ensure that our systems remain secure, reliable, and trustworthy. Our human-in-the-loop approach serves as a critical foundation across all security domains, complemented by technical safeguards, governance frameworks, and continuous monitoring to protect against emerging threats in the rapidly evolving field of large language models.

 


Author

@Sina Wulfmeyer,@Michael Dreher& @Daylan Araz

Date

May 27, 2025

© 2025 Unique AG. All rights reserved. Privacy PolicyTerms of Service