
Have It Your Way: Customizing Data Designer with Plugins

A plugin framework for the custom pieces every real project ends up needing

[Illustration: Data Designer plugin extensions]

Data Designer is built around a simple idea: describe the dataset you want, and let the framework handle execution. A config points to seed data, defines generated columns, picks models, and shapes the final records — no orchestration code required. Data Designer plugins keep that promise when a project needs something custom.

As of Data Designer v0.6.0, plugins are out of experimental mode and stable. They are the supported path for turning reusable project-specific logic into normal Data Designer components.

What does "something custom" actually look like? Picture a robotics team sitting on a pile of Isaac Sim-generated warehouse runs, trying to turn robot poses, camera views, and event metadata into instruction data. With an internal simulation-log plugin, the user-facing part can still be this small:

uv pip install data-designer-isaac-logs

from data_designer_isaac_logs.config import (
    IsaacRunSeedSource,
    WarehouseEventLabelColumnConfig,
    RobotSFTProcessor,
)

config_builder.with_seed_dataset(
    IsaacRunSeedSource(
        run_dir="s3://warehouse-sim/rare-events/",
        streams=("robot_pose", "overhead_rgb", "event_log"),
        max_events=10_000,
    )
)
config_builder.add_column(
    WarehouseEventLabelColumnConfig(
        name="safety_instruction",
        pose_column="robot_pose",
        event_log_column="event_log",
    )
)
config_builder.add_processor(RobotSFTProcessor(output_column="messages"))

That is the point of plugins: install a package, import its config classes, and keep the workflow declarative. The Isaac run reader, event labeler, and trainer-format processor own the project-specific parsing and trainer-facing shape. Data Designer still does the framework work, from component discovery and dependency ordering to model execution and output handling.
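One piece of that framework work, dependency ordering, is easy to picture in isolation. As a sketch (using Python's standard graphlib, not Data Designer's actual planner), columns that declare what they read can be ordered with a topological sort:

```python
from graphlib import TopologicalSorter

# Column -> columns it reads, mirroring the Isaac example above:
# the label column reads two seed streams, and the trainer-facing
# output reads the label.
deps = {
    "robot_pose": set(),
    "event_log": set(),
    "safety_instruction": {"robot_pose", "event_log"},
    "messages": {"safety_instruction"},
}

# static_order yields each column only after all of its dependencies.
order = list(TopologicalSorter(deps).static_order())
```

Plugin-provided columns slot into the same ordering as built-in ones, which is why a plugin never needs its own scheduling code.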


Customization Is the Normal Case

[Illustration: a confused engineer trying to fit custom building blocks into the wrong framework slots]

The mess usually starts innocently. A team defines a Data Designer config, then discovers that its seed data lives in an internal layout, its generated column needs a domain simulator, and its trainer expects a slightly different record shape. Someone writes a small reader beside the notebook. Someone patches a generator into a project folder. Someone adds a cleanup script after preview because the final export has one more organization-specific rule. Each choice is reasonable because every project brings a different corpus, policy model, domain vocabulary, or training stack.

The problem is that the custom behavior now lives around Data Designer instead of inside the Data Designer workflow. It is harder to validate, harder to share, harder to version, and easier to lose. Plugins give that bespoke work a clean package boundary – a name, typed config, runtime implementation, entry point, and tests that travel together. Users still declare the dataset they want, but the local reader, domain generator, or trainer-format processor becomes a normal Data Designer component instead of another layer of glue.


Where Plugins Fit

The first plugin boundaries match the places where real projects most often need customization.

📥 Seed reader plugins bring new source systems into Data Designer. Use them for databases, document stores, object stores, internal APIs, file collections, or corpus layouts that need custom hydration before generation can begin.

🧬 Column generator plugins create new column types. Use them when a value should be produced during generation and should participate in dependency ordering like any other column. This is the right place for simulators, domain libraries, retrieval-backed generation, deterministic rule systems, or custom model-backed generation.

🔧 Processor plugins transform records before or after generation. Use them for redaction, cleanup, deduplication, export views, organization-specific schemas, or training formats that should not be hidden inside prompts.
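The transforms inside a processor are usually small. Leaving Data Designer's processor base class aside (its interface is not shown here), a record-level redaction step is just a pure function over record dicts; a minimal sketch:

```python
import re

# Simple email pattern for illustration; real redaction would use a
# vetted pattern and cover more PII categories.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact_emails(record: dict) -> dict:
    """Replace email addresses in string fields with a placeholder."""
    return {
        key: EMAIL.sub("[REDACTED_EMAIL]", value) if isinstance(value, str) else value
        for key, value in record.items()
    }
```

A processor plugin wraps a function like this behind a typed config, so redaction options live in the declarative workflow instead of a post-hoc cleanup script.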

These boundaries are intentionally narrow. A plugin should own the behavior that is specific to your use case. Data Designer validates configs and resolves dependencies. It plans batches, runs models, records logs, shows previews, then writes the output. That split lets custom components use the normal workflow without moving orchestration into the project.

What about custom columns? Start with a custom column when you are prototyping column-generator behavior or need a one-off column that only one project uses. Custom columns keep the logic in a Python function inside the config, with declared dependencies and optional model access. When that logic needs a stable config schema, tests, packaging, docs, or reuse across teams, promote it to a column generator plugin.
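The function half of a custom column is API-free, which is what makes prototyping cheap. Leaving Data Designer's exact custom-column wrapper aside, the logic being promoted is typically a row-level function with declared input columns (the column names below are invented for illustration):

```python
def label_priority(row: dict) -> str:
    """Illustrative one-off column logic: derive a label from two other
    columns. In a config, this function would be registered with its
    dependencies ("severity", "customer_tier") so it runs after them."""
    if row["severity"] >= 8 or row["customer_tier"] == "enterprise":
        return "urgent"
    return "routine"
```

Promoting this to a column generator plugin means putting the same function behind a typed config class and an implementation class, without changing what it computes.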


Author a Plugin: From Glue Code to Seed Reader

To make this concrete, let's walk through a full example. Consider a markdown seed reader. The one-off version might be a helper function that walks a directory, splits files into sections, returns a DataFrame, and then gets copied into the next project that needs it. That can work for one project. It becomes a problem when the reader needs options, tests, documentation, versioning, or reuse across teams. At that point, the helper has become a capability whether or not it is packaged like one.
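That one-off helper might look like the sketch below: a naive split on "## " headers that returns plain row dicts (the copied-around original might build a pandas DataFrame instead):

```python
from pathlib import Path

def load_markdown_sections(root: str) -> list[dict]:
    """One-off glue: walk a directory, split each markdown file on
    second-level headers, and return one row per section."""
    rows = []
    for path in sorted(Path(root).rglob("*.md")):
        text = path.read_text(encoding="utf-8")
        for index, chunk in enumerate(text.split("\n## ")):
            rows.append(
                {
                    "file_name": path.name,
                    "section_index": index,
                    "section_content": chunk.strip(),
                }
            )
    return rows
```

Nothing is wrong with this code in isolation. The problem is everything around it: no declared options, no tests, no versioned home, and no way for Data Designer to plan work against it.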

A plugin packages that same helper as a small Python project:

  • A user-facing config class describes the options.
  • An implementation class does the work.
  • A Plugin object connects the config to the implementation.
  • An entry point registers the plugin with Data Designer.

The config class declares the user-facing options. For a directory-backed reader, Data Designer's FileSystemSeedSource already has fields for path, file_pattern, and recursive, so we only need to define the seed type discriminator:

# config.py
from __future__ import annotations

from typing import Literal

from data_designer.config.seed_source import FileSystemSeedSource


class MarkdownSectionSeedSource(FileSystemSeedSource):
    """Configure the markdown sections seed reader."""

    seed_type: Literal["markdown-sections"] = "markdown-sections"

The implementation class is where the old helper code should move. For a filesystem seed reader, Data Designer gives you a small interface instead of a blank page: implement build_manifest(...) to build a cheap index of candidate inputs, and implement hydrate_row(...) to turn each selected manifest row into one or more dataset rows. That split matters because Data Designer can plan work against the lightweight manifest before paying the cost of reading files, parsing sections, or calling project-specific libraries. The parser can still be a normal helper function; the reader class is the framework boundary.

# impl.py
from __future__ import annotations

from pathlib import Path
from typing import Any, ClassVar

from data_designer.engine.resources.seed_reader import (
    FileSystemSeedReader,
    SeedReaderFileSystemContext,
)

from data_designer_markdown_sections.config import MarkdownSectionSeedSource

# extract_markdown_sections (used below) is this package's plain parsing
# helper; the module path here is illustrative.
from data_designer_markdown_sections.parsing import extract_markdown_sections


class MarkdownSectionSeedReader(FileSystemSeedReader[MarkdownSectionSeedSource]):
    output_columns: ClassVar[list[str]] = [
        "relative_path",
        "file_name",
        "section_index",
        "section_header",
        "section_content",
    ]

    def build_manifest(
        self,
        *,
        context: SeedReaderFileSystemContext,
    ) -> list[dict[str, str]]:
        # Fast path: enumerate candidate files and return cheap metadata.
        matched_paths = self.get_matching_relative_paths(
            context=context,
            file_pattern=self.source.file_pattern,
            recursive=self.source.recursive,
        )
        return [
            {"relative_path": relative_path, "file_name": Path(relative_path).name}
            for relative_path in matched_paths
        ]

    def hydrate_row(
        self,
        *,
        manifest_row: dict[str, Any],
        context: SeedReaderFileSystemContext,
    ) -> list[dict[str, Any]]:
        # Expensive path: hydrate only the selected manifest rows.
        # This is where parsing, fan-out, and source-specific cleanup belong.
        relative_path = str(manifest_row["relative_path"])
        file_name = str(manifest_row["file_name"])
        with context.fs.open(relative_path, "r", encoding="utf-8") as handle:
            markdown_text = handle.read()

        return [
            {
                "relative_path": relative_path,
                "file_name": file_name,
                "section_index": section_index,
                "section_header": section_header,
                "section_content": section_content,
            }
            for section_index, (section_header, section_content) in enumerate(
                extract_markdown_sections(markdown_text)
            )
        ]
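The reader above delegates to extract_markdown_sections, which the snippet does not show. One plausible implementation of that plain helper, splitting on ATX "#" headers and keeping any text before the first header under an empty header, is:

```python
import re

HEADER = re.compile(r"^(#{1,6})\s+(.*)$")

def extract_markdown_sections(markdown_text: str) -> list[tuple[str, str]]:
    """Split markdown into (header, content) pairs.
    Text before the first header becomes a section with an empty header."""
    sections: list[tuple[str, str]] = []
    header = ""
    lines: list[str] = []
    for line in markdown_text.splitlines():
        match = HEADER.match(line)
        if match:
            if header or lines:
                sections.append((header, "\n".join(lines).strip()))
            header = match.group(2).strip()
            lines = []
        else:
            lines.append(line)
    if header or lines:
        sections.append((header, "\n".join(lines).strip()))
    return sections
```

Because this is an ordinary function, it can be unit-tested on strings without touching the filesystem or the reader class at all.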

The same rule applies to column generators and processors: choose the closest base class, keep options on the config object, implement the narrow runtime method, and leave orchestration out of the plugin.

Two small files connect the plugin to Data Designer — a Plugin descriptor that names the config and implementation, and a Python entry point that exposes them at install time:

# plugin.py
from data_designer.plugins import Plugin, PluginType

plugin = Plugin(
    config_qualified_name="data_designer_markdown_sections.config.MarkdownSectionSeedSource",
    impl_qualified_name="data_designer_markdown_sections.impl.MarkdownSectionSeedReader",
    plugin_type=PluginType.SEED_READER,
)
# pyproject.toml
[project.entry-points."data_designer.plugins"]
markdown-sections = "data_designer_markdown_sections.plugin:plugin"

After that, users do not import engine internals or run registration code. They import the config class and use it:

import data_designer.config as dd
from data_designer.interface import DataDesigner
from data_designer_markdown_sections.config import MarkdownSectionSeedSource

builder = dd.DataDesignerConfigBuilder()
builder.with_seed_dataset(
    MarkdownSectionSeedSource(
        path="docs/",
        file_pattern="*.md",
    )
)
builder.add_column(
    dd.LLMTextColumnConfig(
        name="question",
        model_alias="nvidia-text",
        prompt="Write a question about this section: {{ section_content }}",
    )
)

results = DataDesigner().preview(builder, num_records=5)

No custom orchestration. No separate DataFrame preparation step. The reader is part of the Data Designer workflow.


Building the Plugin Ecosystem

Reusable plugins also need a discovery layer. Once a plugin is useful beyond one project, users need a simple way to find the right package, install it, and get back to declaring datasets. That is why Data Designer includes a built-in NVIDIA plugin catalog and a CLI workflow for discovery and installation.

The NVIDIA catalog is backed by NVIDIA-NeMo/DataDesignerPlugins, a dedicated home for first-party plugin packages, packaging examples, and plugin-specific docs. Keeping those packages outside the core repository lets them carry optional dependencies, target narrower use cases, and move at their own pace while still using the same plugin interface once installed.

For users, the first-party path is short: list what is available, search for what you need, and install by package name or alias.

data-designer plugin list
data-designer plugin search <keyword>
data-designer plugin install <package-name>

After installation, there is no separate registration step. Data Designer discovers the package's entry points, so users import the plugin's config classes and keep building the same declarative workflow.

Catalogs are not limited to NVIDIA plugins. A platform group can publish a catalog of approved internal plugins backed by an internal package index or direct package references. A community can publish a catalog for a domain or workflow. The catalog gives users a trusted path to the plugins they prefer, while plugin packages remain independently versioned and distributed.

data-designer plugin catalog add <catalog-name> <catalog-url>
data-designer plugin --catalog <catalog-name> install <package-name>

This provides a foundation for a rich Data Designer plugin ecosystem: the core framework provides the stable runtime, plugin authors provide specialized capabilities, and catalogs make those capabilities discoverable. For more information, see Discover Plugins.


Where to Go Next

Interested in building your own plugin? Here are some resources to get you started:

  1. Plugins overview — learn how plugins fit into Data Designer
  2. Build Your Own — follow the authoring guide for seed readers, column generators, and processors
  3. Using Models in Plugins — call configured models from plugin code
  4. Markdown Section Seed Reader recipe — study the complete version of the example from this post
  5. Discover Plugins — learn how to discover and install plugins
  6. DataDesignerPlugins on GitHub — explore first-party plugin packages

Moving plugins out of experimental mode means Data Designer no longer has to predict every customization users will need. The framework provides the pipeline. Plugins supply the custom pieces.

🎨 🔌 Thanks for reading and happy plugin building!