# NVIDIA NeMo Guardrails Library Developer Guide
The NeMo Guardrails library is an open-source Python package for adding programmable guardrails to LLM-based applications. It intercepts inputs and outputs, applies configurable safety checks, and blocks or modifies content based on defined policies.
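For example, a minimal sketch of this flow loads a configuration from a local directory and generates a checked response; the ./config path and the example message are placeholders for your own setup.

```python
from nemoguardrails import LLMRails, RailsConfig

# Load a guardrails configuration (config.yml plus any Colang files)
# from a local directory; "./config" is a placeholder path.
config = RailsConfig.from_path("./config")
rails = LLMRails(config)

# The configured rails run around this call: the user message is checked
# before it reaches the LLM, and the answer is checked before it is returned.
response = rails.generate(messages=[
    {"role": "user", "content": "Hello! What can you do for me?"}
])
print(response["content"])
```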
## About the NeMo Guardrails Library
Learn about the library and its capabilities in the following sections.
Add programmable guardrails to LLM applications with this open-source Python library.
Apply input, retrieval, dialog, execution, and output rails to protect LLM applications.
Review the architecture and sequence diagrams that show how guardrails are applied to LLM requests and responses.
Connect to LLM providers such as NVIDIA NIM, OpenAI, Azure OpenAI, Anthropic, and Hugging Face, or to any provider supported through LangChain.
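As a sketch, provider selection happens in the models section of config.yml; the model names below are illustrative, and the commented-out NIM entry assumes the nim engine and a reachable endpoint.

```yaml
# config.yml -- example "models" section; model names are illustrative.
models:
  # Main application LLM served through OpenAI.
  - type: main
    engine: openai
    model: gpt-4o-mini

  # Alternative: point the main model at an NVIDIA NIM endpoint
  # (assumes the "nim" engine and a reachable base_url).
  # - type: main
  #   engine: nim
  #   model: meta/llama-3.1-8b-instruct
  #   parameters:
  #     base_url: http://localhost:8000/v1
```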
## Get Started
Follow these steps to start using the NeMo Guardrails library.
Install NeMo Guardrails with pip, configure your environment, and verify the installation.
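A minimal install and sanity check might look like the following; the version check assumes the package exposes a __version__ attribute.

```bash
# Install the library from PyPI (ideally inside a virtual environment).
pip install nemoguardrails

# Sanity-check the installation; assumes the package exposes __version__.
python -c "import nemoguardrails; print(nemoguardrails.__version__)"
```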
Follow hands-on tutorials to deploy Nemotron Content Safety, Nemotron Topic Control, and Nemotron Jailbreak Detect NIMs.
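As a rough, version-dependent sketch, wiring a content-safety NIM into a configuration adds a dedicated model entry and an input flow; the model identifier and flow name below are assumptions drawn from typical examples and may differ from the tutorial you follow.

```yaml
# config.yml -- hedged sketch of adding a content-safety NIM; the model
# identifier and flow name are assumptions and may differ by version.
models:
  - type: main
    engine: openai
    model: gpt-4o-mini

  - type: content_safety
    engine: nim
    model: nvidia/llama-3.1-nemoguard-8b-content-safety

rails:
  input:
    flows:
      - content safety check input $model=content_safety
```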
## Next Steps
Once you’ve completed the get-started tutorials, explore the following areas to deepen your understanding.
Learn to write config.yml, Colang flows, and custom actions for guardrails.
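For instance, a minimal config.yml names the main model and activates built-in self-check rails; the self check input and self check output flows also expect matching prompts in a prompts.yml file, and the model name is illustrative.

```yaml
# config.yml -- minimal sketch: one main model plus built-in self-check rails.
models:
  - type: main
    engine: openai
    model: gpt-4o-mini

rails:
  input:
    flows:
      - self check input     # expects a self_check_input prompt in prompts.yml
  output:
    flows:
      - self check output    # expects a self_check_output prompt in prompts.yml
```

Dialog rails are written in Colang; a short sketch defines example user messages and a flow that steers the conversation away from a topic.

```colang
# rails.co -- illustrative Colang 1.0 dialog rail.
define user ask about politics
  "What do you think about the government?"
  "Who should I vote for?"

define bot refuse to answer politics
  "I'm sorry, I can't discuss political topics."

define flow politics
  user ask about politics
  bot refuse to answer politics
```

Custom actions are Python functions registered with the action decorator; a flow can then invoke one with an execute statement such as "execute check_blocked_terms". The blocked-terms list here is a hypothetical example.

```python
from typing import Optional

from nemoguardrails.actions import action


@action(name="check_blocked_terms")
async def check_blocked_terms(context: Optional[dict] = None) -> bool:
    """Return True if the bot message contains a blocked term (hypothetical list)."""
    bot_message = (context or {}).get("bot_message", "")
    blocked_terms = ["proprietary", "confidential"]
    return any(term in bot_message.lower() for term in blocked_terms)
```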
Use the RailsConfig and LLMRails classes to load configurations and generate guarded responses.
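Configurations can also be built programmatically; the sketch below inlines the YAML and Colang content and uses the async generation API. The model name and messages are illustrative.

```python
import asyncio

from nemoguardrails import LLMRails, RailsConfig

YAML_CONTENT = """
models:
  - type: main
    engine: openai
    model: gpt-4o-mini
"""

COLANG_CONTENT = """
define user express greeting
  "hello"
  "hi there"

define bot express greeting
  "Hello! How can I help you today?"

define flow greeting
  user express greeting
  bot express greeting
"""

# Build the configuration from inline content instead of a directory on disk.
config = RailsConfig.from_content(
    yaml_content=YAML_CONTENT,
    colang_content=COLANG_CONTENT,
)
rails = LLMRails(config)


async def main():
    response = await rails.generate_async(
        messages=[{"role": "user", "content": "hi there"}]
    )
    print(response["content"])


asyncio.run(main())
```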
Measure accuracy and performance of dialog, fact-checking, moderation, and hallucination rails.
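As a version-dependent sketch, the library has historically shipped an evaluate CLI for these checks; the subcommand names and flags below are assumptions to verify against your installed version.

```bash
# Hedged sketch -- subcommands and flags may differ across versions.
nemoguardrails evaluate topical --config ./config
nemoguardrails evaluate moderation --config ./config
nemoguardrails evaluate fact-checking --config ./config
nemoguardrails evaluate hallucination --config ./config
```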
Debug guardrails with verbose mode, the explain method, and generation log options.
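For example, verbose mode prints the internal processing steps, and after a call the explain method returns the LLM calls and Colang history from the last generation; the ./config path is a placeholder.

```python
from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_path("./config")

# verbose=True prints the internal steps as each request is processed.
rails = LLMRails(config, verbose=True)

response = rails.generate(messages=[
    {"role": "user", "content": "Hello!"}
])

# Inspect the last call: which prompts were sent and how long each LLM call took.
info = rails.explain()
info.print_llm_calls_summary()
print(info.colang_history)

# Generation log options (assumed shape, per the generation-options docs):
# response = rails.generate(
#     messages=[{"role": "user", "content": "Hello!"}],
#     options={"log": {"activated_rails": True, "llm_calls": True}},
# )
# print(response.log)
```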
Deploy guardrails using the local server, Docker containers, or production microservices.
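A local server run might look like the following; the configuration directory, port, and config_id value are placeholders.

```bash
# Start the local guardrails server; ./config is a directory containing one or
# more guardrails configurations (placeholder path and port).
nemoguardrails server --config ./config --port 8000

# Query the chat completions endpoint; "my_config" names a subdirectory
# of ./config.
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"config_id": "my_config", "messages": [{"role": "user", "content": "Hello!"}]}'
```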
Integrate NeMo Guardrails with LangChain chains, runnables, and LangGraph workflows.
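For instance, RunnableRails wraps a runnable so that the guardrails run around it inside an LCEL chain; the prompt, model name, and ./config path below are illustrative.

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

from nemoguardrails import RailsConfig
from nemoguardrails.integrations.langchain.runnable_rails import RunnableRails

config = RailsConfig.from_path("./config")
guardrails = RunnableRails(config)

prompt = ChatPromptTemplate.from_template("Answer briefly: {question}")
llm = ChatOpenAI(model="gpt-4o-mini")  # illustrative model name

# Wrap only the LLM with guardrails inside an ordinary LCEL chain.
chain = prompt | (guardrails | llm) | StrOutputParser()

print(chain.invoke({"question": "What is NeMo Guardrails?"}))
```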