# LLM Vulnerability Scanning

While most recent LLMs, especially commercial ones, are aligned for safer use, keep in mind that any LLM-powered application remains prone to a wide range of attacks (see, for example, the OWASP Top 10 for LLM Applications).

The NeMo Guardrails library provides several mechanisms for protecting an LLM-powered chat application against vulnerabilities, such as jailbreaks and prompt injections. The following sections present some initial experiments using dialogue and moderation rails to protect a sample app, the ABC bot, against various attacks. You can use the same techniques in your own guardrails configuration.

## Garak

Garak is an open-source tool that scans for the most common LLM vulnerabilities. It provides a comprehensive list of vulnerabilities, grouped into several categories. Think of Garak as an LLM counterpart to network security scanners such as nmap.
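
For example, assuming Garak is installed (`pip install garak`), you can list the available probes and launch a scan from the command line. The model type, model name, and probe selection below are illustrative, not the settings used for the experiments in this section:

```shell
# List the available probes (vulnerability modules)
python -m garak --list_probes

# Scan a model with the jailbreak probes from the "dan" module
python -m garak --model_type openai --model_name gpt-3.5-turbo --probes dan
```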

### Scan Results

The sample ABC guardrails configuration, backed by meta/llama-3.3-70b-instruct, was scanned with Garak using four configurations that offer increasing levels of protection against LLM vulnerabilities:

  1. bare_llm: no protection (full Garak results here).

  2. with_gi: using the general instructions in the prompt (full Garak results here).

  3. with_gi_dr: using the dialogue rails in addition to the general instructions (full Garak results here).

  4. with_gi_dr_mo: using general instructions, dialogue rails, and moderation rails, i.e., input/output LLM self-checking (full Garak results here).
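
As a sketch, the most complete configuration (with_gi_dr_mo) combines all three mechanisms in a NeMo Guardrails `config.yml`. The instruction text below is an illustrative placeholder, not the exact ABC bot configuration:

```yaml
models:
  - type: main
    engine: nvidia_ai_endpoints
    model: meta/llama-3.3-70b-instruct

# General instructions prepended to every prompt (with_gi)
instructions:
  - type: general
    content: |
      You are the ABC Bot, an assistant that answers questions about
      company policies. Do not discuss unrelated or sensitive topics.

rails:
  # Moderation rails: LLM self-checking of user input and bot output
  input:
    flows:
      - self check input
  output:
    flows:
      - self check output
```

The dialogue rails that refuse unwanted topics are defined separately, as Colang flows alongside this file.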

The table below summarizes what is included in each configuration:

|                                               | bare_llm | with_gi | with_gi_dr | with_gi_dr_mo |
|-----------------------------------------------|----------|---------|------------|---------------|
| General Instructions                          |          | x       | x          | x             |
| Dialog Rails (refuse unwanted topics)         |          |         | x          | x             |
| Moderation Rails (input/output self-checking) |          |         |            | x             |

The table below summarizes the results for each vulnerability category tested by Garak, reporting the protection rate against attacks for each type of vulnerability (higher is better).

| Garak vulnerability        | bare_llm | with_gi | with_gi_dr | with_gi_dr_mo |
|----------------------------|----------|---------|------------|---------------|
| module ansiescape          | 98%      | 93%     | 99%        | 100%          |
| module atkgen              | 98%      | 98%     | 100%       | 100%          |
| module dan                 | 16%      | 27%     | 25%        | 100%          |
| module divergence          | 40%      | 24%     | 34%        | 100%          |
| module encoding            | 100%     | 74%     | 100%       | 100%          |
| module goodside            | 50%      | 50%     | 50%        | 100%          |
| module grandma             | 67%      | 18%     | 51%        | 100%          |
| module latentinjection     | 99%      | 29%     | 99%        | 100%          |
| module leakreplay          | 100%     | 82%     | 100%       | 100%          |
| module malwaregen          | 100%     | 54%     | 100%       | 100%          |
| module packagehallucination | 100%    | 91%     | 100%       | 100%          |
| module promptinject        | 100%     | 10%     | 100%       | 100%          |
| module suffix              | 100%     | 68%     | 100%       | 100%          |
| module tap                 | 0%       | 11%     | 0%         | 11%           |
| module topic               | 45%      | 13%     | 45%        | 47%           |
| module web_injection       | 100%     | 43%     | 100%       | 100%          |
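
The protection rate reported for each module is the complement of the attack success rate: the percentage of attack attempts that the application resisted. The helper below is an illustrative sketch of that arithmetic, not part of Garak's API:

```python
def protection_rate(resisted: int, attempts: int) -> float:
    """Percentage of attack attempts the application resisted (higher is better)."""
    if attempts == 0:
        raise ValueError("no attack attempts recorded")
    return 100.0 * resisted / attempts

# Example: if 25 of 100 "dan" jailbreak attempts are refused, the
# protection rate is 25% and the attack success rate is 75%.
assert protection_rate(25, 100) == 25.0
```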

Even though the ABC example uses a powerful LLM (meta/llama-3.3-70b-instruct), without guardrails it is still vulnerable to several types of attacks. While adding general instructions to the prompt can reduce the attack success rate (and thus increase the protection rate reported in the table), the LLM app is only truly safer when combining dialogue and moderation rails. It is worth noting that dialogue rails alone already provide good protection.

At the same time, this experiment does not investigate whether the guardrails also block legitimate user requests. Such an analysis will be provided in a subsequent release.

## LLM Vulnerability Categories

If you are interested in additional information about each vulnerability category in Garak, please consult the full results here and the Garak GitHub page.