Securing AI: What the OWASP LLM Top 10 Gets Right – and What It Misses

Does the OWASP LLM Top 10 offer enough to address emerging security risks for AI systems? Here’s what it covers – and doesn’t.

Klaus Haller, Freelance Contributor

November 26, 2024

6 Min Read
The OWASP LLM Top 10 aims to secure large language models
Flirting with vulnerabilities: The OWASP LLM Top 10 aims to secure large language modelsImage: Alamy

Securing AI systems is a pressing concern for CIOs and CISOs due to AI and LLMs’ increasingly vital role in businesses. Thus, they instinctively turn to Open Web Application Security Project (OWASP) for guidance.

OWASP is known for its Top 10 list of web application security flaws. Over the last years, the organization has expanded its focus and nowadays publish a bouquet of ‘Top 10’ lists for various security topics, including one for large language models (LLMs). But what does this list cover? Is the threat guidance comprehensive?

Before deep-diving into the OWASP LLM Top 10 list, a change of perspective can be an eye-opener for security professionals. Suppose you are a cybercriminal: why would you attack an LLM?

The Attacker Mindset

Malicious hacking is not an academic endeavor. It is a business. Cybercriminals attack not what is theoretically possible but which promises a quick financial return. So, what is the business case for manipulating AI models and LLMs to spread misinformation? In most cases, other attacks are financially more rewarding, such as:

  • Cryptomining: Misusing the computing power of compromised AI estates to mine cryptocurrencies – super convenient to cash in.

  • Blackmail with sensitive data after stealing, e.g., patient details, customer information, or business secrets, and demanding a ransom for not leaking it.

  • Distributed Denial-of-Service (DDoS) Attacks, i.e., bombarding business-critical systems with requests to bring them down, often for demanding ransom or during a political disinformation campaign.

Related:How Insecure Network Devices Can Expose Data Centers to Attack

 More advanced attack forms requiring more effort, know-how, and resources are:

  • Credential Theft: Stealing credentials to move through an organization’s systems (lateral movement) to gain access to more valuable data. When credentials relate to SaaS services such as ChatGPT, reselling them in the Darknet is also an option.

  • Triggering Financially Beneficial Actions: Manipulating AI systems to perform unauthorized actions like financial transactions – obviously a pretty sophisticated, high-effort attack.

OWASP LLM Top 10: AI Security Risks

When looking at the OWASP LLM Top 10, five out of the 10 risks relate to manipulating or attacking the AI model itself:

  • Prompt Injection (LLM01): Hackers manipulate AI systems by submitting requests, aka prompts, to the LLM so that it behaves outside its intended use and generates harmful or inappropriate outputs.

  • Training Data Poisoning (LLM-03): Malicious actors corrupt training data, reducing the quality of AI models. The risk is relevant for publicly available community training data, not so much for internal data. The latter is similar to pre-AI fraud or sabotage risks for database

  • Model Denial-of-Service (LLM04): Overload AI components with requests to impact their stability and availability, affecting business applications that rely on them.

  • Sensitive Information Disclosure (LLM-07): Exploiting LLMs to release confidential data due to unscrubbed input data ending in an LLM containing sensitive information or missing filtering of unwanted requests. LLMs miss stringent fine-granular access control known from databases and file systems.

  • Model Theft (LLM10): Hackers might probe systems to understand how they function, which could lead to intellectual property theft.

  • Overreliance on AI (LLM-09): Blind trust in AI outputs can lead to wrong decisions, e.g., when LLMs “hallucinate” and fabricate information. It is a pure business risk, not related to IT.

Related:Evolving Ransomware Threats: Why Offline Storage is Essential for Modern Data Protection

All these risks listed in the LLM Top 10 exist, though attackers might struggle to monetize successful attacks in many scenarios. Organizations can mitigate such risk only on a per-application or per-model level, e.g., by pen-testing them periodically.

OWASP-LLM-Top-10.png

LLM Interaction Challenges

Business benefits come with a tight integration of AI and LLMs into business processes. The technical coupling of LLMs and other systems introduces technical security risks beyond the introduced model-related issues. These risks count for four of the LLM Top 10:

Related:Managing Risk: Is Your Data Center Insurance up to the Test?

  • Insecure output handling (LLM-02) warns against feeding the output directly to other systems without cleaning the output against, e.g., hidden attacks and malicious activities.

  • Excessive agency (LLM-08) relates to LLMs having more access rights than necessary, e.g., to access and send emails, enabling successful attackers to trigger undesired actions in other systems (e.g., deletion of emails).

  • Permission issues (LLM-06) relate to unclear authentication and authorization checks. The LLMs or their plugins might make assumptions regarding users and roles that are not guaranteed by other components.

  • Insecure plugin design (LLM-10) points out the risk when APIs do not rely on concrete, type-checked parameters but accept free text, which might result in malicious behavior when processing the request.

All these risks relate to API hygiene and missing security-by-design, which larger organizations might address with penetration testing and security assurance measures.

While exploitation requires high investments today, this would change when LLM services develop towards ecosystems with widespread 3rd-party plugins.

Suddenly, cybercriminals could see the chance for mass attacks on vulnerabilities of popular plugins or exploiting frequent misconfigurations. Professional vulnerability management would also be a must in the LLM context.

AI Tooling Risks

While the public focuses on LLM attacks, the AI infrastructure for training and running them might present a more significant risk, even when companies rely on SaaS or widely used AI frameworks.

Issues with two (open source) AI frameworks, the ShadowRay vulnerability (CVE-2023-48022) and ‘Probllama’ (CVE-2024-37032), are recent examples.

Probllama affects Ollama, a platform for deploying and running LLMs, where poor input validation enables attackers to overwrite files, potentially leading to remote code execution.

hadowRay allows attackers to submit tasks without authentication – an open invitation for exploitation. Indeed, network zoning and firewalls help, though (somehow frightening) they are not always in place. So, these two examples illustrate how quickly AI tooling and framework vulnerabilities become invitations for cyber attackers.

Read more of the latest data center security news

Equally concerning is every tech company CISO’s triumvirate of SaaS hell: Slack, Hugging Face, and GitHub (and their lookalikes). These tools boost team collaboration and productivity and help manage code, training data, and AI models.

However, misconfigurations and operational mistakes can expose sensitive data or access tokens on the web. Due to their widespread use, these tools are more appealing targets for cybercriminals than individual LLM attacks.

However, there is also good news: Organizations can mitigate many AI tooling-related risks by standardizing and centralizing these services to ensure proper security hardening and quick responses when vulnerabilities emerge.

Generic IT Layer

It might surprise many AI and security professionals that commodity IT services, like compute and storage, including database-as-a-service, are often more effortless to exploit than the AI.

Misconfigured object storage with training data or as part of RAG architectures enables attackers to steal data for ransom. Access to computing resources (or when stealing credentials for cloud estates) paves the way for cybercriminals to spin up virtual machines to mine cryptocurrency.

The OWASP LLM Top 10 covers none of these risks, though unsecured AI islands missing up-to-date firewalls, zone separation, or adequate access control are easy prey for cybercriminals. Luckily, CISOs understand these risks and typically have the necessary controls already in place to secure classic application workloads.

Outsourcing the toolchain and AI environments to SaaS providers does not eliminate these threats 100% because SaaS providers’ services are also not always perfect.

Security firm Wiz has shown that even well-known AI-as-a-Service offerings such as SAP AI Core, Hugging Face, or Replicate had serious (fixed-now) security flaws, enabling malicious actors to bypass tenant restrictions and access the resources of other customers.

The LLM Top 10 only vaguely addressed these risks and subsumed them with many other topics under “supplier risk” (LLM-05).

To conclude, the OWASP LLM Top 10 is perfect for raising awareness of AI-related security topics. However, risk mitigation on the AI tooling and generic IT infrastructure layers is priority one to prevent attackers from effortlessly misusing resources for crypto mining or data exfiltration for blackmailing.

Deep-diving into the details of AI model attacks makes absolute sense and is necessary – in step two.

About the Author

Klaus Haller

Freelance Contributor, Data Center Knowledge

My passions are Cloud Security, AI, and Digital Transformation. In the daytime, I work as a Senior IT Security Architect. My areas of expertise span public clouds (Google Cloud Platform and Microsoft Azure) and how to secure them, technical project and project management, IT operations, and information management, analytics, and artificial intelligence.

Plus, I am a tech author working on articles in the early mornings or late evenings, reflecting and sharing my work experience. But most of all, I enjoy presenting and discussing with colleagues at conferences and workshops!

Order my book – "Managing AI in the Enterprise" – with Springer or Amazon, and become an even better AI line or project manager!

http://www.klaushaller.net/

Subscribe to the Data Center Knowledge Newsletter
Get analysis and expert insight on the latest in data center business and technology delivered to your inbox daily.

You May Also Like