# Core Configuration
This section describes the schema of the `config.yml` file, the primary configuration file for the NeMo Guardrails toolkit. It defines LLM models, guardrails behavior, prompts, knowledge base settings, and tracing options.
## Overview
The following annotated example shows the top-level sections of a `config.yml` file:
```yaml
# LLM model configuration
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct

# Instructions for the LLM (similar to system prompts)
instructions:
  - type: general
    content: |
      You are a helpful AI assistant.

# Guardrails configuration
rails:
  input:
    flows:
      - self check input
  output:
    flows:
      - self check output

# Prompt customization
prompts:
  - task: self_check_input
    content: |
      Your task is to check if the user message complies with policy.

# Knowledge base settings
knowledge_base:
  embedding_search_provider:
    name: default

# Tracing and monitoring
tracing:
  enabled: true
  adapters:
    - name: FileSystem
      filepath: "./logs/traces.jsonl"
```
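Before passing a configuration to the toolkit, it can be useful to sanity-check the parsed structure yourself. The following is a minimal sketch of such a check; `validate_config` is a hypothetical helper written for this example, not part of the NeMo Guardrails API, and it only covers the `models` and `rails` sections shown above.

```python
def validate_config(config: dict) -> list:
    """Return a list of structural problems found in a parsed config.yml dict.

    Hypothetical helper for illustration; checks only a subset of the schema.
    """
    errors = []

    # The 'models' section must be a non-empty list of model entries.
    models = config.get("models")
    if not isinstance(models, list) or not models:
        errors.append("'models' must be a non-empty list")
    else:
        for i, model in enumerate(models):
            for key in ("type", "engine", "model"):
                if key not in model:
                    errors.append("models[%d] missing '%s'" % (i, key))

    # Input/output rails, if present, must declare their flows as lists.
    rails = config.get("rails", {})
    for section in ("input", "output"):
        flows = rails.get(section, {}).get("flows", [])
        if not isinstance(flows, list):
            errors.append("rails.%s.flows must be a list" % section)

    return errors


sample = {
    "models": [
        {"type": "main", "engine": "openai", "model": "gpt-3.5-turbo-instruct"},
    ],
    "rails": {
        "input": {"flows": ["self check input"]},
        "output": {"flows": ["self check output"]},
    },
}
print(validate_config(sample))
```

A well-formed configuration like `sample` above yields an empty error list; a missing or empty `models` section is reported.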
## Configuration Sections
The following sections provide detailed documentation for each configuration area:
- **Models**: Configure LLM providers, embedding models, and task-specific models in the `config.yml` file.
- **Rails**: Configure input, output, dialog, retrieval, and execution rails in `config.yml` to control LLM behavior.
- **Prompts**: Customize prompts for LLM tasks including self-check input/output, fact checking, and intent generation.
- **Tracing**: Configure tracing adapters (FileSystem, OpenTelemetry) to monitor and debug guardrails interactions.
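As one illustration of the models section, multiple model entries with different `type` values can be declared side by side. The model names below are illustrative placeholders; use whichever models your provider offers.

```yaml
models:
  # Main LLM used for generation
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
  # Embedding model used for knowledge base and flow search
  - type: embeddings
    engine: openai
    model: text-embedding-ada-002
```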
## File Organization
Configuration files are typically organized in a `config` folder:
```
.
├── config
│   ├── config.yml       # Main configuration file
│   ├── prompts.yml      # Custom prompts (optional)
│   ├── rails/           # Colang flow definitions
│   │   ├── input.co
│   │   ├── output.co
│   │   └── ...
│   ├── kb/              # Knowledge base documents
│   │   ├── doc1.md
│   │   └── ...
│   ├── actions.py       # Custom actions (optional)
│   └── config.py        # Custom initialization (optional)
```
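When starting a new project, the skeleton of this layout can be created programmatically. The `scaffold_config` function below is a hypothetical convenience helper written for this example, not part of the toolkit; it creates only the core files and folders shown above.

```python
from pathlib import Path
import tempfile


def scaffold_config(root: Path) -> Path:
    """Create the standard config folder layout under `root`.

    Hypothetical helper for illustration; fill in real content afterwards.
    """
    config = root / "config"
    (config / "rails").mkdir(parents=True, exist_ok=True)  # Colang flow definitions
    (config / "kb").mkdir(exist_ok=True)                   # knowledge base documents

    # Minimal main configuration file.
    (config / "config.yml").write_text(
        "models:\n"
        "  - type: main\n"
        "    engine: openai\n"
        "    model: gpt-3.5-turbo-instruct\n"
    )
    # Empty Colang files to be filled with input/output rail flows.
    (config / "rails" / "input.co").touch()
    (config / "rails" / "output.co").touch()
    return config


root = Path(tempfile.mkdtemp())
scaffold_config(root)
created = sorted(p.relative_to(root).as_posix() for p in root.rglob("*") if p.is_file())
print(created)
```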
For detailed information about each configuration section, refer to the individual pages linked above.