🎨 Data Designer Tutorial: Providing Images as Context for Vision-Based Data Generation¶
📚 What you'll learn¶
This notebook demonstrates how to provide images as context to generate text descriptions using vision-language models.
- ✨ Visual Document Processing: Converting images to chat-ready format for model consumption
- 🔍 Vision-Language Generation: Using vision models to generate detailed summaries from images
If this is your first time using Data Designer, we recommend starting with the first notebook in this tutorial series.
📦 Import Data Designer¶
data_designer.config provides access to the configuration API. DataDesigner is the main interface for data generation.
# Standard library imports
import base64
import io
import uuid
# Third-party imports
import pandas as pd
import rich
from datasets import load_dataset
from IPython.display import display
from rich.panel import Panel
# Data Designer imports
import data_designer.config as dd
from data_designer.interface import DataDesigner
⚙️ Initialize the Data Designer interface¶
DataDesigner is the main object responsible for managing the data generation process. When initialized without arguments, the default model providers are used.
data_designer = DataDesigner()
🏗️ Initialize the Data Designer Config Builder¶
The Data Designer config defines the dataset schema and generation process.
The config builder provides an intuitive interface for building this configuration.
When initialized without arguments, the default model configurations are used.
config_builder = dd.DataDesignerConfigBuilder()
🌱 Seed Dataset Creation¶
In this section, we'll prepare our visual documents as a seed dataset for summarization:
- Loading Visual Documents: We use a small pets image dataset containing labeled images
- Image Processing: Convert images to base64 format for vision model consumption
- Metadata Extraction: Preserve relevant image information (label, etc.)
The seed dataset will be used to generate detailed text descriptions of each image.
# Dataset processing configuration
IMG_COUNT = 512 # Number of images to process
BASE64_IMAGE_HEIGHT = 512 # Standardized height for model input
# Pets dataset config: train split, ~23 MB total (loaded below)
img_dataset_cfg = {"path": "rokmr/pets", "split": "train"}
def resize_image(image, height: int):
    """
    Resize an image while maintaining its aspect ratio.

    Args:
        image: PIL Image object
        height: Target height in pixels

    Returns:
        Resized PIL Image object
    """
    original_width, original_height = image.size
    width = int(original_width * (height / original_height))
    return image.resize((width, height))
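As a quick sanity check on the width calculation above, the aspect-ratio arithmetic can be exercised without PIL. The helper below is ours, for illustration only; it mirrors the width formula inside resize_image:

```python
def scaled_width(original_width: int, original_height: int, target_height: int) -> int:
    # Mirrors the width computation in resize_image: scale the width by the
    # same factor applied to the height, truncating to an integer pixel count.
    return int(original_width * (target_height / original_height))

# A 1024x768 image resized to height 512 keeps its 4:3 aspect ratio.
print(scaled_width(1024, 768, 512))  # -> 682
```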
def convert_image_to_chat_format(record, height: int) -> dict:
    """
    Convert a PIL image to a base64 string for chat-template usage.

    Args:
        record: Dataset record containing image and metadata
        height: Target height for image resizing

    Returns:
        Updated record with base64_image and uuid fields
    """
    image = resize_image(record["image"], height)
    img_buffer = io.BytesIO()
    image.save(img_buffer, format="PNG")
    byte_data = img_buffer.getvalue()
    base64_encoded_data = base64.b64encode(byte_data)
    base64_string = base64_encoded_data.decode("utf-8")
    return record | {"base64_image": base64_string, "uuid": str(uuid.uuid4())}
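The base64 step itself can be checked in isolation with the standard library alone (no PIL needed). Incidentally, the fixed 8-byte PNG signature is why every base64_image value in this dataset starts with "iVBORw0KGgo":

```python
import base64

# Stand-in for the PNG bytes that image.save(img_buffer, format="PNG") would
# produce; real PNG data always begins with this 8-byte signature.
png_bytes = b"\x89PNG\r\n\x1a\n" + b"\x00" * 16

# Encode to a UTF-8 string, exactly as convert_image_to_chat_format does.
b64_string = base64.b64encode(png_bytes).decode("utf-8")

# The encoding is lossless: decoding recovers the original bytes.
assert base64.b64decode(b64_string) == png_bytes
print(b64_string[:11])  # -> iVBORw0KGgo
```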
# Load and process the image dataset
print("📥 Loading and processing images...")
img_dataset = load_dataset(**img_dataset_cfg).map(
    convert_image_to_chat_format, fn_kwargs={"height": BASE64_IMAGE_HEIGHT}
)
img_dataset = pd.DataFrame(img_dataset[:IMG_COUNT])
print(f"✅ Loaded {len(img_dataset)} images with columns: {list(img_dataset.columns)}")
📥 Loading and processing images...
Warning: You are sending unauthenticated requests to the HF Hub. Please set a HF_TOKEN to enable higher rate limits and faster downloads.
✅ Loaded 512 images with columns: ['image', 'label', 'base64_image', 'uuid']
img_dataset.head()
| image | label | base64_image | uuid | |
|---|---|---|---|---|
| 0 | <PIL.JpegImagePlugin.JpegImageFile image mode=... | 0 | iVBORw0KGgoAAAANSUhEUgAAAeQAAAIACAIAAADc8YinAA... | cebf1d4b-0685-4b1b-bb10-5a4b51bbe575 |
| 1 | <PIL.JpegImagePlugin.JpegImageFile image mode=... | 0 | iVBORw0KGgoAAAANSUhEUgAAAiQAAAIACAIAAAA9rOAHAA... | 74df2cfb-01cf-460b-a1d7-7dcab735fa99 |
| 2 | <PIL.JpegImagePlugin.JpegImageFile image mode=... | 0 | iVBORw0KGgoAAAANSUhEUgAAAqoAAAIACAIAAADFYNm1AA... | 021ff8b0-5054-46a7-917a-6646dc4e32d3 |
| 3 | <PIL.JpegImagePlugin.JpegImageFile image mode=... | 0 | iVBORw0KGgoAAAANSUhEUgAAAwAAAAIACAIAAAC6lJxtAA... | 13c7295b-3250-4dd6-a8eb-b4a876370a57 |
| 4 | <PIL.PngImagePlugin.PngImageFile image mode=RG... | 0 | iVBORw0KGgoAAAANSUhEUgAAAqoAAAIACAIAAADFYNm1AA... | ed2280a0-544a-4807-b50f-4e1a57a81264 |
# Add the seed dataset containing our processed images
df_seed = img_dataset[["uuid", "label", "base64_image"]]
config_builder.with_seed_dataset(dd.DataFrameSeedSource(df=df_seed))
DataDesignerConfigBuilder( seed_dataset: df seed )
# Add a column to generate detailed image descriptions
config_builder.add_column(
    dd.LLMTextColumnConfig(
        name="description",
        model_alias="nvidia-vision",
        prompt=(
            "Provide a detailed description of the content in this image in Markdown format. "
            "Describe the main subject, background, colors, and any notable details."
        ),
        multi_modal_context=[dd.ImageContext(column_name="base64_image")],
    )
)
data_designer.validate(config_builder)
[03:32:12] [INFO] ✅ Validation passed
🔁 Iteration is key – preview the dataset!¶
- Use the preview method to generate a sample of records quickly.
- Inspect the results for quality and format issues.
- Adjust column configurations, prompts, or parameters as needed.
- Re-run the preview until satisfied.
preview = data_designer.preview(config_builder, num_records=2)
[03:32:12] [INFO] 📸 Preview generation in progress
[03:32:12] [INFO] |-- 🔒 Jinja rendering engine: secure
[03:32:12] [INFO] ✅ Validation passed
[03:32:12] [INFO] ⛓️ Sorting column configs into a Directed Acyclic Graph
[03:32:12] [INFO] 🩺 Running health checks for models...
[03:32:12] [INFO] |-- 👀 Checking 'nvidia/nemotron-nano-12b-v2-vl' in provider named 'nvidia' for model alias 'nvidia-vision'...
[03:32:12] [INFO] |-- ✅ Passed!
[03:32:12] [INFO] 🌱 Sampling 2 records from seed dataset
[03:32:12] [INFO] |-- seed dataset size: 512 records
[03:32:12] [INFO] |-- sampling strategy: ordered
[03:32:12] [INFO] 📝 llm-text model config for column 'description'
[03:32:12] [INFO] |-- model: 'nvidia/nemotron-nano-12b-v2-vl'
[03:32:12] [INFO] |-- model alias: 'nvidia-vision'
[03:32:12] [INFO] |-- model provider: 'nvidia'
[03:32:12] [INFO] |-- inference parameters:
[03:32:12] [INFO] | |-- generation_type=chat-completion
[03:32:12] [INFO] | |-- max_parallel_requests=4
[03:32:12] [INFO] | |-- temperature=0.85
[03:32:12] [INFO] | |-- top_p=0.95
[03:32:12] [INFO] ⚡️ Processing llm-text column 'description' with 4 concurrent workers
[03:32:12] [INFO] ⏱️ llm-text column 'description' will report progress after each record
[03:32:14] [INFO] |-- 🌗 llm-text column 'description' progress: 1/2 (50%) complete, 1 ok, 0 failed, 0.63 rec/s, eta 1.6s
[03:32:15] [INFO] |-- 🌕 llm-text column 'description' progress: 2/2 (100%) complete, 2 ok, 0 failed, 0.75 rec/s, eta 0.0s
[03:32:15] [INFO] 📊 Model usage summary:
[03:32:15] [INFO] |-- model: nvidia/nemotron-nano-12b-v2-vl
[03:32:15] [INFO] |-- tokens: input=606, output=202, total=808, tps=267
[03:32:15] [INFO] |-- requests: success=2, failed=0, total=2, rpm=39
[03:32:15] [INFO] 📐 Measuring dataset column statistics:
[03:32:15] [INFO] |-- 📝 column: 'description'
[03:32:15] [INFO] 🎆 Preview complete!
# Run this cell multiple times to cycle through the 2 preview records.
preview.display_sample_record()
Seed Columns ┏━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Name ┃ Value ┃ ┡━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ uuid │ cebf1d4b-0685-4b1b-bb10-5a4b51bbe575 │ ├──────────────┼─────────────────────────────────────────────────────────────────────────────────────────────┤ │ label │ 0 │ ├──────────────┼─────────────────────────────────────────────────────────────────────────────────────────────┤ │ base64_image │ iVBORw0KGgoAAAANSUhEUgAAAeQAAAIACAIAAADc8YinAAEAAElEQVR4nOy9V5ckuZEmamZwEREpSna1YAv28JLDHT… │ └──────────────┴─────────────────────────────────────────────────────────────────────────────────────────────┘ Generated Columns ┏━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Name ┃ Value ┃ ┡━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ description │ I am just a virtual assistant, I don't have the ability to see an image, you need to provide │ │ │ me the information in order for me to provide you with a description. │ └─────────────┴──────────────────────────────────────────────────────────────────────────────────────────────┘
# The preview dataset is available as a pandas DataFrame.
preview.dataset
| uuid | label | base64_image | description | |
|---|---|---|---|---|
| 0 | cebf1d4b-0685-4b1b-bb10-5a4b51bbe575 | 0 | iVBORw0KGgoAAAANSUhEUgAAAeQAAAIACAIAAADc8YinAA... | I am just a virtual assistant, I don't have th... |
| 1 | 74df2cfb-01cf-460b-a1d7-7dcab735fa99 | 0 | iVBORw0KGgoAAAANSUhEUgAAAiQAAAIACAIAAAA9rOAHAA... | ```markdown\n
──────────────────────────────────────── 🎨 Data Designer Dataset Profile ───────────────────────────────────────── Dataset Overview ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ number of records ┃ number of columns ┃ percent complete records ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ 2 │ 1 │ 100.0% │ └─────────────────────────────────┴─────────────────────────────────┴─────────────────────────────────────────────┘ 📝 LLM-Text Columns ┏━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ ┃ ┃ ┃ prompt tokens ┃ completion tokens ┃ ┃ column name ┃ data type ┃ number unique values ┃ per record ┃ per record ┃ ┡━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ description │ string │ 2 (100.0%) │ 29.0 +/- 0.0 │ 100.5 +/- 91.2 │ └──────────────────┴───────────────┴──────────────────────────────┴─────────────────────┴─────────────────────────┘ ╭────────────────────────────────────────────────── Table Notes ──────────────────────────────────────────────────╮ │ │ │ 1. All token statistics are based on a sample of max(1000, len(dataset)) records. │ │ 2. Tokens are calculated using tiktoken's cl100k_base tokenizer. │ │ │ ╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────
🔎 Visual Inspection¶
Let's compare the original image with the generated description to validate quality:
# Compare original image with generated description
index = 0 # Change this to view different examples
# Merge preview data with original images for comparison
comparison_dataset = preview.dataset.merge(img_dataset[["uuid", "image"]], how="left", on="uuid")
# Extract the record for display
record = comparison_dataset.iloc[index]
print("📄 Original Image:")
display(resize_image(record.image, BASE64_IMAGE_HEIGHT))
print("\n📝 Generated Description:")
rich.print(Panel(record.description, title="Image Description", title_align="left"))
📄 Original Image:
📝 Generated Description:
╭─ Image Description ─────────────────────────────────────────────────────────────────────────────────────────────╮ │ I am just a virtual assistant, I don't have the ability to see an image, you need to provide me the information │ │ in order for me to provide you with a description. │ ╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
🆙 Scale up!¶
Happy with your preview data?
Use the create method to submit larger Data Designer generation jobs.
results = data_designer.create(config_builder, num_records=10, dataset_name="tutorial-4")
[03:32:15] [INFO] 🎨 Creating Data Designer dataset
[03:32:15] [INFO] |-- 🔒 Jinja rendering engine: secure
[03:32:16] [INFO] ✅ Validation passed
[03:32:16] [INFO] ⛓️ Sorting column configs into a Directed Acyclic Graph
[03:32:16] [INFO] 🩺 Running health checks for models...
[03:32:16] [INFO] |-- 👀 Checking 'nvidia/nemotron-nano-12b-v2-vl' in provider named 'nvidia' for model alias 'nvidia-vision'...
[03:32:16] [INFO] |-- ✅ Passed!
[03:32:16] [INFO] ⏳ Processing batch 1 of 1
[03:32:16] [INFO] 🌱 Sampling 10 records from seed dataset
[03:32:16] [INFO] |-- seed dataset size: 512 records
[03:32:16] [INFO] |-- sampling strategy: ordered
[03:32:16] [INFO] 📝 llm-text model config for column 'description'
[03:32:16] [INFO] |-- model: 'nvidia/nemotron-nano-12b-v2-vl'
[03:32:16] [INFO] |-- model alias: 'nvidia-vision'
[03:32:16] [INFO] |-- model provider: 'nvidia'
[03:32:16] [INFO] |-- inference parameters:
[03:32:16] [INFO] | |-- generation_type=chat-completion
[03:32:16] [INFO] | |-- max_parallel_requests=4
[03:32:16] [INFO] | |-- temperature=0.85
[03:32:16] [INFO] | |-- top_p=0.95
[03:32:16] [INFO] ⚡️ Processing llm-text column 'description' with 4 concurrent workers
[03:32:16] [INFO] ⏱️ llm-text column 'description' will report progress after each record
[03:32:17] [INFO] |-- 🚶 llm-text column 'description' progress: 1/10 (10%) complete, 1 ok, 0 failed, 0.68 rec/s, eta 13.2s
[03:32:18] [INFO] |-- 🚶 llm-text column 'description' progress: 2/10 (20%) complete, 2 ok, 0 failed, 0.92 rec/s, eta 8.7s
[03:32:18] [INFO] |-- 🐴 llm-text column 'description' progress: 3/10 (30%) complete, 3 ok, 0 failed, 1.25 rec/s, eta 5.6s
[03:32:19] [INFO] |-- 🐴 llm-text column 'description' progress: 4/10 (40%) complete, 4 ok, 0 failed, 1.32 rec/s, eta 4.5s
[03:32:21] [INFO] |-- 🚗 llm-text column 'description' progress: 5/10 (50%) complete, 5 ok, 0 failed, 1.08 rec/s, eta 4.6s
[03:32:21] [INFO] |-- 🚗 llm-text column 'description' progress: 6/10 (60%) complete, 6 ok, 0 failed, 1.23 rec/s, eta 3.2s
[03:32:22] [INFO] |-- 🚗 llm-text column 'description' progress: 7/10 (70%) complete, 7 ok, 0 failed, 1.27 rec/s, eta 2.4s
[03:32:23] [INFO] |-- ✈️ llm-text column 'description' progress: 8/10 (80%) complete, 8 ok, 0 failed, 1.09 rec/s, eta 1.8s
[03:32:24] [INFO] |-- ✈️ llm-text column 'description' progress: 9/10 (90%) complete, 9 ok, 0 failed, 1.17 rec/s, eta 0.9s
[03:32:25] [INFO] |-- 🚀 llm-text column 'description' progress: 10/10 (100%) complete, 10 ok, 0 failed, 1.18 rec/s, eta 0.0s
[03:32:25] [INFO] 📊 Model usage summary:
[03:32:25] [INFO] |-- model: nvidia/nemotron-nano-12b-v2-vl
[03:32:25] [INFO] |-- tokens: input=22998, output=1849, total=24847, tps=2800
[03:32:25] [INFO] |-- requests: success=10, failed=0, total=10, rpm=67
[03:32:25] [INFO] 📐 Measuring dataset column statistics:
[03:32:25] [INFO] |-- 📝 column: 'description'
# Load the generated dataset as a pandas DataFrame.
dataset = results.load_dataset()
dataset.head()
| uuid | label | base64_image | description | |
|---|---|---|---|---|
| 0 | cebf1d4b-0685-4b1b-bb10-5a4b51bbe575 | 0 | iVBORw0KGgoAAAANSUhEUgAAAeQAAAIACAIAAADc8YinAA... | I am not able to provide the requested image d... |
| 1 | 74df2cfb-01cf-460b-a1d7-7dcab735fa99 | 0 | iVBORw0KGgoAAAANSUhEUgAAAiQAAAIACAIAAAA9rOAHAA... | ```| Feature | Description |------------... |
| 2 | 021ff8b0-5054-46a7-917a-6646dc4e32d3 | 0 | iVBORw0KGgoAAAANSUhEUgAAAqoAAAIACAIAAADFYNm1AA... | ### Image Description The image depicts a dom... |
| 3 | 13c7295b-3250-4dd6-a8eb-b4a876370a57 | 0 | iVBORw0KGgoAAAANSUhEUgAAAwAAAAIACAIAAAC6lJxtAA... | 
analysis.to_report()
──────────────────────────────────────── 🎨 Data Designer Dataset Profile ───────────────────────────────────────── Dataset Overview ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ number of records ┃ number of columns ┃ percent complete records ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ 10 │ 1 │ 100.0% │ └─────────────────────────────────┴─────────────────────────────────┴─────────────────────────────────────────────┘ 📝 LLM-Text Columns ┏━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ ┃ ┃ ┃ prompt tokens ┃ completion tokens ┃ ┃ column name ┃ data type ┃ number unique values ┃ per record ┃ per record ┃ ┡━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ description │ string │ 10 (100.0%) │ 29.0 +/- 0.0 │ 149.5 +/- 109.4 │ └──────────────────┴───────────────┴──────────────────────────────┴─────────────────────┴─────────────────────────┘ ╭────────────────────────────────────────────────── Table Notes ──────────────────────────────────────────────────╮ │ │ │ 1. All token statistics are based on a sample of max(1000, len(dataset)) records. │ │ 2. Tokens are calculated using tiktoken's cl100k_base tokenizer. │ │ │ ╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────
⏭️ Next Steps¶
Now that you've learned how to use visual context for image summarization in Data Designer, explore more:
- Experiment with different vision models for specific image types
- Try different prompt variations to generate specialized descriptions (e.g., technical details, key findings)
- Combine vision-based descriptions with other column types for multi-modal workflows
- Apply this pattern to other vision tasks like image captioning, OCR validation, or visual question answering
- Continue to the next tutorial: Generating images with Data Designer
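For instance, vision-based descriptions can feed downstream text columns: because generated columns can reference earlier columns via Jinja templating, a follow-up column can consume the description. A minimal sketch, assuming the same LLMTextColumnConfig API used above; the "nvidia-text" model alias and the "caption" column are hypothetical names for illustration:

```python
# Hypothetical follow-up column that condenses each generated description
# into a one-line caption. "nvidia-text" is an assumed model alias; the
# {{ description }} reference uses Data Designer's Jinja templating.
config_builder.add_column(
    dd.LLMTextColumnConfig(
        name="caption",
        model_alias="nvidia-text",
        prompt=(
            "Condense the following image description into a single caption "
            "of at most 15 words:\n\n{{ description }}"
        ),
    )
)
```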