Built-in Actions#
This section describes the default actions included in the NeMo Guardrails toolkit.
Core Actions#
These actions are fundamental to the guardrails process:
Action |
Description |
|---|---|
|
Generate the canonical form for the user utterance |
|
Generate the next step in the conversation flow |
|
Generate a bot message based on the desired intent |
|
Retrieve relevant chunks from the knowledge base |
generate_user_intent#
Converts raw user input into a canonical intent form:
# Automatically called during guardrails process
# Input: "Hello there!"
# Output: express greeting
generate_next_step#
Determines what the bot should do next:
# Automatically called to decide next action
# Output: bot express greeting, execute some_action, etc.
generate_bot_message#
Generates the actual bot response text:
# Converts intent to natural language
# Input: bot express greeting
# Output: "Hello! How can I help you today?"
retrieve_relevant_chunks#
Retrieves context from the knowledge base:
# Retrieves relevant documents for RAG
# Result stored in $relevant_chunks context variable
Guardrail-Specific Actions#
These actions implement built-in guardrails:
Action |
Description |
|---|---|
|
Check if user input should be allowed |
|
Check if bot response should be allowed |
|
Verify factual accuracy of bot response |
|
Detect hallucinations in bot response |
self_check_input#
Validates user input against configured policies:
# config.yml
rails:
input:
flows:
- self check input
# rails/input.co
define flow self check input
$allowed = execute self_check_input
if not $allowed
bot refuse to respond
stop
self_check_output#
Validates bot output against configured policies:
# config.yml
rails:
output:
flows:
- self check output
# rails/output.co
define flow self check output
$allowed = execute self_check_output
if not $allowed
bot refuse to respond
stop
self_check_facts#
Verifies facts against retrieved knowledge base chunks:
# config.yml
rails:
output:
flows:
- self check facts
self_check_hallucination#
Detects hallucinated content in bot responses:
# config.yml
rails:
output:
flows:
- self check hallucination
LangChain Tool Wrappers#
The toolkit includes wrappers for popular LangChain tools:
Action |
Description |
Requirements |
|---|---|---|
|
Web scraping and automation |
Apify API key |
|
Bing Web Search |
Bing API key |
|
Google Search |
Google API key |
|
Searx search engine |
Searx instance |
|
SerpApi Google Search |
SerpApi key |
|
Weather information |
OpenWeatherMap API key |
|
SerpAPI search |
SerpApi key |
|
Wikipedia information |
None |
|
Math and science queries |
Wolfram Alpha API key |
|
Zapier automation |
Zapier NLA API key |
Using LangChain Tools#
define flow answer with search
user ask about current events
$results = execute google_search(query=$user_query)
bot provide search results
Wikipedia Example#
define flow answer with wikipedia
user ask about historical facts
$info = execute wikipedia_query(query=$user_query)
bot provide information
Sensitive Data Detection Actions#
Action |
Description |
|---|---|
|
Detect PII in text |
|
Mask detected PII |
detect_sensitive_data#
# config.yml
rails:
config:
sensitive_data_detection:
input:
entities:
- PERSON
- EMAIL_ADDRESS
- PHONE_NUMBER
define flow check input sensitive data
$has_pii = execute detect_sensitive_data
if $has_pii
bot refuse to respond
stop
mask_sensitive_data#
define flow mask input sensitive data
$masked_input = execute mask_sensitive_data
# Continue with masked input
Content Safety Actions#
Action |
Description |
|---|---|
|
LlamaGuard input moderation |
|
LlamaGuard output moderation |
|
NVIDIA content safety model |
LlamaGuard Example#
# config.yml
rails:
input:
flows:
- llama guard check input
output:
flows:
- llama guard check output
Jailbreak Detection Actions#
Action |
Description |
|---|---|
|
Detect jailbreak attempts |
# config.yml
rails:
input:
flows:
- check jailbreak
Using Built-in Actions in Custom Flows#
You can combine built-in actions with custom logic:
define flow enhanced_input_check
# First, check for jailbreak
$is_jailbreak = execute check_jailbreak
if $is_jailbreak
bot refuse to respond
stop
# Then, check for sensitive data
$has_pii = execute detect_sensitive_data
if $has_pii
bot ask to remove sensitive data
stop
# Finally, run self-check
$allowed = execute self_check_input
if not $allowed
bot refuse to respond
stop