🎨 Data Designer Tutorial: Image-to-Image Editing¶
📚 What you'll learn¶
This notebook shows how to chain image generation columns: first generate animal portraits from text, then edit those generated images by adding accessories and changing styles—all without loading external datasets.
- 🖼️ Text-to-image generation: Generate images from text prompts
- 🔗 Chaining image columns: Use ImageContext to pass generated images to a follow-up editing column
- 🎲 Sampler-driven diversity: Combine sampled accessories and settings for varied edits
This tutorial uses an autoregressive model (one that supports both text-to-image and image-to-image generation via the chat completions API). Diffusion models (DALL·E, Stable Diffusion, etc.) do not support image context—see Tutorial 5 for text-to-image generation with diffusion models.
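Under a chat-completions-style API, image context is typically passed as a base64 data URI inside a multimodal message. The sketch below is a hypothetical illustration of that payload shape only (the field names follow the common OpenAI-style message format, not Data Designer's internal code, and the image bytes are a placeholder):

```python
import base64

# Placeholder for real image bytes: PNG magic number only.
png_bytes = b"\x89PNG\r\n\x1a\n"
data_uri = "data:image/png;base64," + base64.b64encode(png_bytes).decode()

# Hypothetical multimodal message pairing an edit instruction with the image.
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Add a tiny top hat to this owl."},
        {"type": "image_url", "image_url": {"url": data_uri}},
    ],
}
```

Autoregressive models accept messages like this and return an edited image; diffusion endpoints generally expose no such image slot, which is why they cannot be used for the editing column in this tutorial.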
Prerequisites: This tutorial uses OpenRouter with the Flux 2 Pro model. Set OPENROUTER_API_KEY in your environment before running.
If this is your first time using Data Designer, we recommend starting with the first notebook in this tutorial series.
📦 Import Data Designer¶
data_designer.config provides the configuration API. DataDesigner is the main interface for generation.
import base64
from pathlib import Path
from IPython.display import Image as IPImage
from IPython.display import display
import data_designer.config as dd
from data_designer.interface import DataDesigner
⚙️ Initialize the Data Designer interface¶
We initialize Data Designer without arguments here—the image model is configured explicitly in the next cell.
data_designer = DataDesigner()
🎛️ Define an image model¶
We need an autoregressive model that supports both text-to-image and image-to-image generation via the chat completions API. This lets us generate images from text and then pass those images as context for editing.
- Use ImageInferenceParams so Data Designer treats this model as an image generator.
- Image-specific options are model-dependent; pass them via extra_body.
Note: This tutorial uses the Flux 2 Pro model via OpenRouter. Set OPENROUTER_API_KEY in your environment.
MODEL_PROVIDER = "openrouter"
MODEL_ID = "black-forest-labs/flux.2-pro"
MODEL_ALIAS = "image-model"
model_configs = [
dd.ModelConfig(
alias=MODEL_ALIAS,
model=MODEL_ID,
provider=MODEL_PROVIDER,
inference_parameters=dd.ImageInferenceParams(
extra_body={"height": 512, "width": 512},
),
)
]
🏗️ Build the configuration¶
We chain two image generation columns:
- Sampler columns — randomly sample animal types, accessories, settings, and art styles
- First image column — generate an animal portrait from a text prompt
- Second image column with context — edit the generated portrait using ImageContext
config_builder = dd.DataDesignerConfigBuilder(model_configs=model_configs)
# 1. Sampler columns for diversity
config_builder.add_column(
dd.SamplerColumnConfig(
name="animal",
sampler_type=dd.SamplerType.CATEGORY,
params=dd.CategorySamplerParams(
values=["cat", "dog", "fox", "owl", "rabbit", "panda"],
),
)
)
config_builder.add_column(
dd.SamplerColumnConfig(
name="accessory",
sampler_type=dd.SamplerType.CATEGORY,
params=dd.CategorySamplerParams(
values=[
"a tiny top hat",
"oversized sunglasses",
"a red bow tie",
"a knitted beanie",
"a flower crown",
"a monocle and mustache",
"a pirate hat and eye patch",
"a chef hat",
],
),
)
)
config_builder.add_column(
dd.SamplerColumnConfig(
name="setting",
sampler_type=dd.SamplerType.CATEGORY,
params=dd.CategorySamplerParams(
values=[
"a cozy living room",
"a sunny park",
"a photo studio with soft lighting",
"a red carpet event",
"a holiday card backdrop with snowflakes",
"a tropical beach at sunset",
],
),
)
)
config_builder.add_column(
dd.SamplerColumnConfig(
name="art_style",
sampler_type=dd.SamplerType.CATEGORY,
params=dd.CategorySamplerParams(
values=[
"a photorealistic style",
"a Disney Pixar 3D render",
"a watercolor painting",
"a pop art poster",
],
),
)
)
# 2. Generate animal portrait from text
config_builder.add_column(
dd.ImageColumnConfig(
name="animal_portrait",
prompt="A close-up portrait photograph of a {{ animal }} looking at the camera, studio lighting, high quality.",
model_alias=MODEL_ALIAS,
)
)
# 3. Edit the generated portrait
config_builder.add_column(
dd.ImageColumnConfig(
name="edited_portrait",
prompt=(
"Edit this {{ animal }} portrait photo. "
"Add {{ accessory }} on the animal. "
"Place the {{ animal }} in {{ setting }}. "
"Render the result in {{ art_style }}. "
"Keep the animal's face, expression, and features faithful to the original photo."
),
model_alias=MODEL_ALIAS,
multi_modal_context=[dd.ImageContext(column_name="animal_portrait")],
)
)
data_designer.validate(config_builder)
[12:19:07] [INFO] ✅ Validation passed
🔁 Preview: quick iteration¶
In preview mode, generated images are stored as base64 strings in the dataframe. Use this to iterate on your prompts, accessories, and sampler values before scaling up.
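Because preview mode stores each image as a base64 string, getting back to raw bytes is a single call to the standard library. A minimal round-trip sketch (the bytes below are just the PNG magic number, standing in for a real generated image):

```python
import base64

# Stand-in for a real generated image: PNG magic bytes only.
fake_png = b"\x89PNG\r\n\x1a\n"

encoded = base64.b64encode(fake_png).decode()  # what the preview dataframe holds
decoded = base64.b64decode(encoded)            # what IPython.display.Image(data=...) needs

assert decoded == fake_png
```

The display helper later in this notebook does exactly this decode step for preview records.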
preview = data_designer.preview(config_builder, num_records=2)
[12:19:07] [INFO] 👁️ Preview generation in progress
[12:19:07] [INFO] ✅ Validation passed
[12:19:08] [INFO] ⛓️ Sorting column configs into a Directed Acyclic Graph
[12:19:08] [INFO] 🩺 Running health checks for models...
[12:19:08] [INFO] |-- 👀 Checking 'black-forest-labs/flux.2-pro' in provider named 'openrouter' for model alias 'image-model'...
[12:19:17] [INFO] |-- ✅ Passed!
[12:19:17] [INFO] 🎲 Preparing samplers to generate 2 records across 4 columns
[12:19:17] [INFO] 🖼️ image model config for column 'animal_portrait'
[12:19:17] [INFO] |-- model: 'black-forest-labs/flux.2-pro'
[12:19:17] [INFO] |-- model alias: 'image-model'
[12:19:17] [INFO] |-- model provider: 'openrouter'
[12:19:17] [INFO] |-- inference parameters:
[12:19:17] [INFO] | |-- generation_type=image
[12:19:17] [INFO] | |-- max_parallel_requests=4
[12:19:17] [INFO] | |-- extra_body={'height': 512, 'width': 512}
[12:19:17] [INFO] ⚡️ Processing image column 'animal_portrait' with 4 concurrent workers
[12:19:17] [INFO] ⏱️ image column 'animal_portrait' will report progress after each record
[12:19:25] [INFO] |-- 😐 image column 'animal_portrait' progress: 1/2 (50%) complete, 1 ok, 0 failed, 0.13 rec/s, eta 7.9s
[12:19:27] [INFO] |-- 🤩 image column 'animal_portrait' progress: 2/2 (100%) complete, 2 ok, 0 failed, 0.20 rec/s, eta 0.0s
[12:19:27] [INFO] 🖼️ image model config for column 'edited_portrait'
[12:19:27] [INFO] |-- model: 'black-forest-labs/flux.2-pro'
[12:19:27] [INFO] |-- model alias: 'image-model'
[12:19:27] [INFO] |-- model provider: 'openrouter'
[12:19:27] [INFO] |-- inference parameters:
[12:19:27] [INFO] | |-- generation_type=image
[12:19:27] [INFO] | |-- max_parallel_requests=4
[12:19:27] [INFO] | |-- extra_body={'height': 512, 'width': 512}
[12:19:27] [INFO] ⚡️ Processing image column 'edited_portrait' with 4 concurrent workers
[12:19:27] [INFO] ⏱️ image column 'edited_portrait' will report progress after each record
[12:19:41] [INFO] |-- ⛅ image column 'edited_portrait' progress: 1/2 (50%) complete, 1 ok, 0 failed, 0.08 rec/s, eta 13.3s
[12:19:42] [INFO] |-- ☀️ image column 'edited_portrait' progress: 2/2 (100%) complete, 2 ok, 0 failed, 0.14 rec/s, eta 0.0s
[12:19:42] [INFO] 📊 Model usage summary:
[12:19:42] [INFO] |-- model: black-forest-labs/flux.2-pro
[12:19:42] [INFO] |-- tokens: input=0, output=0, total=0, tps=0
[12:19:42] [INFO] |-- requests: success=4, failed=0, total=4, rpm=9
[12:19:42] [INFO] |-- images: total=4
[12:19:42] [INFO] 📐 Measuring dataset column statistics:
[12:19:42] [INFO] |-- 🎲 column: 'animal'
[12:19:42] [INFO] |-- 🎲 column: 'accessory'
[12:19:42] [INFO] |-- 🎲 column: 'setting'
[12:19:42] [INFO] |-- 🎲 column: 'art_style'
[12:19:42] [INFO] |-- 🖼️ column: 'animal_portrait'
[12:19:42] [INFO] |-- 🖼️ column: 'edited_portrait'
[12:19:42] [INFO] 🎆 Preview complete!
for i in range(len(preview.dataset)):
preview.display_sample_record()
Generated Columns ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Name ┃ Value ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ animal │ owl │ ├──────────────────────────────────────┼─────────────────────────────────────────────────────────────────────┤ │ accessory │ a tiny top hat │ ├──────────────────────────────────────┼─────────────────────────────────────────────────────────────────────┤ │ setting │ a red carpet event │ ├──────────────────────────────────────┼─────────────────────────────────────────────────────────────────────┤ │ art_style │ a pop art poster │ └──────────────────────────────────────┴─────────────────────────────────────────────────────────────────────┘ Images ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Name ┃ Preview ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ animal_portrait │ [0] <base64, 1713836 chars> │ ├────────────────────────────────────────┼───────────────────────────────────────────────────────────────────┤ │ edited_portrait │ [0] <base64, 2175740 chars> │ └────────────────────────────────────────┴───────────────────────────────────────────────────────────────────┘
Generated Columns ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Name ┃ Value ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ animal │ fox │ ├────────────────────────────────┼───────────────────────────────────────────────────────────────────────────┤ │ accessory │ a tiny top hat │ ├────────────────────────────────┼───────────────────────────────────────────────────────────────────────────┤ │ setting │ a red carpet event │ ├────────────────────────────────┼───────────────────────────────────────────────────────────────────────────┤ │ art_style │ a Disney Pixar 3D render │ └────────────────────────────────┴───────────────────────────────────────────────────────────────────────────┘ Images ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Name ┃ Preview ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ animal_portrait │ [0] <base64, 1822088 chars> │ ├────────────────────────────────────────┼───────────────────────────────────────────────────────────────────┤ │ edited_portrait │ [0] <base64, 1800368 chars> │ └────────────────────────────────────────┴───────────────────────────────────────────────────────────────────┘
preview.dataset
| | animal | accessory | setting | art_style | animal_portrait | edited_portrait |
|---|---|---|---|---|---|---|
| 0 | owl | a tiny top hat | a red carpet event | a pop art poster | [iVBORw0KGgoAAAANSUhEUgAABAAAAAMACAIAAAA12IJaA... | [iVBORw0KGgoAAAANSUhEUgAABAAAAAMACAIAAAA12IJaA... |
| 1 | fox | a tiny top hat | a red carpet event | a Disney Pixar 3D render | [iVBORw0KGgoAAAANSUhEUgAABAAAAAMACAIAAAA12IJaA... | [iVBORw0KGgoAAAANSUhEUgAABAAAAAMACAIAAAA12IJaA... |
🔎 Compare original vs edited¶
Let's display the generated animal portraits next to their edited versions.
def display_image(image_value, base_path: Path | None = None) -> None:
"""Display an image from base64 (preview mode) or file path (create mode)."""
values = [image_value] if isinstance(image_value, str) else list(image_value)
for value in values:
if base_path is not None:
display(IPImage(filename=str(base_path / value)))
else:
display(IPImage(data=base64.b64decode(value)))
def display_before_after(row, index: int, base_path: Path | None = None) -> None:
"""Display original portrait vs edited version for a single record."""
print(f"\n{'=' * 60}")
print(f"Record {index}: {row['animal']} wearing {row['accessory']}")
print(f"Setting: {row['setting']}, Style: {row['art_style']}")
print(f"{'=' * 60}")
print("\n📷 Generated portrait:")
display_image(row["animal_portrait"], base_path)
print("\n🎨 Edited version:")
display_image(row["edited_portrait"], base_path)
for index, row in preview.dataset.iterrows():
display_before_after(row, index)
============================================================ Record 0: owl wearing a tiny top hat Setting: a red carpet event, Style: a pop art poster ============================================================ 📷 Generated portrait:
🎨 Edited version:
============================================================ Record 1: fox wearing a tiny top hat Setting: a red carpet event, Style: a Disney Pixar 3D render ============================================================ 📷 Generated portrait:
🎨 Edited version:
🆙 Create at scale¶
In create mode, images are saved to disk in images/<column_name>/ folders with UUID filenames. The dataframe stores relative paths. ImageContext auto-detection handles this transparently—generated file paths are resolved to base64 before being sent to the model for editing.
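The relative paths stored in the dataframe resolve against the dataset's base directory, which is exactly what the display helper below does with base_path / value. A hypothetical example of that resolution (both the base directory and the filename here are made up for illustration; in practice the base comes from results.artifact_storage.base_dataset_path and the relative path from the dataframe):

```python
from pathlib import Path

# Hypothetical values, for illustration only.
base_path = Path("artifacts/tutorial-6-edited-images")
relative_path = "images/animal_portrait/example-uuid.png"  # made-up filename

# Joining the two yields the on-disk location of the saved image.
full_path = base_path / relative_path
```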
results = data_designer.create(config_builder, num_records=5, dataset_name="tutorial-6-edited-images")
[12:19:42] [INFO] 🎨 Creating Data Designer dataset
[12:19:42] [INFO] ✅ Validation passed
[12:19:42] [INFO] ⛓️ Sorting column configs into a Directed Acyclic Graph
[12:19:42] [INFO] 🩺 Running health checks for models...
[12:19:42] [INFO] |-- 👀 Checking 'black-forest-labs/flux.2-pro' in provider named 'openrouter' for model alias 'image-model'...
[12:19:51] [INFO] |-- ✅ Passed!
[12:19:51] [INFO] ⏳ Processing batch 1 of 1
[12:19:51] [INFO] 🎲 Preparing samplers to generate 5 records across 4 columns
[12:19:51] [INFO] 🖼️ image model config for column 'animal_portrait'
[12:19:51] [INFO] |-- model: 'black-forest-labs/flux.2-pro'
[12:19:51] [INFO] |-- model alias: 'image-model'
[12:19:51] [INFO] |-- model provider: 'openrouter'
[12:19:51] [INFO] |-- inference parameters:
[12:19:51] [INFO] | |-- generation_type=image
[12:19:51] [INFO] | |-- max_parallel_requests=4
[12:19:51] [INFO] | |-- extra_body={'height': 512, 'width': 512}
[12:19:51] [INFO] ⚡️ Processing image column 'animal_portrait' with 4 concurrent workers
[12:19:51] [INFO] ⏱️ image column 'animal_portrait' will report progress after each record
[12:20:00] [INFO] |-- 🥚 image column 'animal_portrait' progress: 1/5 (20%) complete, 1 ok, 0 failed, 0.11 rec/s, eta 35.1s
[12:20:00] [INFO] |-- 🐣 image column 'animal_portrait' progress: 2/5 (40%) complete, 2 ok, 0 failed, 0.21 rec/s, eta 14.4s
[12:20:00] [INFO] |-- 🐥 image column 'animal_portrait' progress: 3/5 (60%) complete, 3 ok, 0 failed, 0.31 rec/s, eta 6.4s
[12:20:01] [INFO] |-- 🐤 image column 'animal_portrait' progress: 4/5 (80%) complete, 4 ok, 0 failed, 0.41 rec/s, eta 2.5s
[12:20:10] [INFO] |-- 🐔 image column 'animal_portrait' progress: 5/5 (100%) complete, 5 ok, 0 failed, 0.27 rec/s, eta 0.0s
[12:20:10] [INFO] 🖼️ image model config for column 'edited_portrait'
[12:20:10] [INFO] |-- model: 'black-forest-labs/flux.2-pro'
[12:20:10] [INFO] |-- model alias: 'image-model'
[12:20:10] [INFO] |-- model provider: 'openrouter'
[12:20:10] [INFO] |-- inference parameters:
[12:20:10] [INFO] | |-- generation_type=image
[12:20:10] [INFO] | |-- max_parallel_requests=4
[12:20:10] [INFO] | |-- extra_body={'height': 512, 'width': 512}
[12:20:10] [INFO] ⚡️ Processing image column 'edited_portrait' with 4 concurrent workers
[12:20:10] [INFO] ⏱️ image column 'edited_portrait' will report progress after each record
[12:20:22] [INFO] |-- 😴 image column 'edited_portrait' progress: 1/5 (20%) complete, 1 ok, 0 failed, 0.08 rec/s, eta 50.2s
[12:20:25] [INFO] |-- 🥱 image column 'edited_portrait' progress: 2/5 (40%) complete, 2 ok, 0 failed, 0.13 rec/s, eta 23.4s
[12:20:25] [INFO] |-- 😐 image column 'edited_portrait' progress: 3/5 (60%) complete, 3 ok, 0 failed, 0.19 rec/s, eta 10.6s
[12:20:27] [INFO] |-- 😊 image column 'edited_portrait' progress: 4/5 (80%) complete, 4 ok, 0 failed, 0.24 rec/s, eta 4.2s
[12:20:45] [INFO] |-- 🤩 image column 'edited_portrait' progress: 5/5 (100%) complete, 5 ok, 0 failed, 0.14 rec/s, eta 0.0s
[12:20:45] [INFO] 📊 Model usage summary:
[12:20:45] [INFO] |-- model: black-forest-labs/flux.2-pro
[12:20:45] [INFO] |-- tokens: input=0, output=0, total=0, tps=0
[12:20:45] [INFO] |-- requests: success=10, failed=0, total=10, rpm=11
[12:20:45] [INFO] |-- images: total=10
[12:20:45] [INFO] 📐 Measuring dataset column statistics:
[12:20:45] [INFO] |-- 🎲 column: 'animal'
[12:20:45] [INFO] |-- 🎲 column: 'accessory'
[12:20:45] [INFO] |-- 🎲 column: 'setting'
[12:20:45] [INFO] |-- 🎲 column: 'art_style'
[12:20:45] [INFO] |-- 🖼️ column: 'animal_portrait'
[12:20:45] [INFO] |-- 🖼️ column: 'edited_portrait'
dataset = results.load_dataset()
dataset.head()
| | animal | accessory | setting | art_style | animal_portrait | edited_portrait |
|---|---|---|---|---|---|---|
| 0 | owl | a red bow tie | a red carpet event | a watercolor painting | ['images/animal_portrait/dff93e50-2905-4774-a3... | ['images/edited_portrait/0d9bb828-1a63-4b89-8f... |
| 1 | owl | a red bow tie | a sunny park | a photorealistic style | ['images/animal_portrait/342ca4af-8fe3-473e-98... | ['images/edited_portrait/85846f4f-a41b-4b96-b6... |
| 2 | cat | a monocle and mustache | a cozy living room | a watercolor painting | ['images/animal_portrait/61aaab70-5da8-4222-b2... | ['images/edited_portrait/42f96497-3e8d-42c3-91... |
| 3 | cat | a knitted beanie | a photo studio with soft lighting | a Disney Pixar 3D render | ['images/animal_portrait/01731a2d-7994-457c-90... | ['images/edited_portrait/81f38091-0b45-450e-9a... |
| 4 | fox | a red bow tie | a red carpet event | a photorealistic style | ['images/animal_portrait/fc983f1f-7158-4f03-bb... | ['images/edited_portrait/9443ffe4-bcd2-4191-8a... |
for index, row in dataset.head(10).iterrows():
display_before_after(row, index, base_path=results.artifact_storage.base_dataset_path)
============================================================ Record 0: owl wearing a red bow tie Setting: a red carpet event, Style: a watercolor painting ============================================================ 📷 Generated portrait:
🎨 Edited version:
============================================================ Record 1: owl wearing a red bow tie Setting: a sunny park, Style: a photorealistic style ============================================================ 📷 Generated portrait:
🎨 Edited version:
============================================================ Record 2: cat wearing a monocle and mustache Setting: a cozy living room, Style: a watercolor painting ============================================================ 📷 Generated portrait:
🎨 Edited version:
============================================================ Record 3: cat wearing a knitted beanie Setting: a photo studio with soft lighting, Style: a Disney Pixar 3D render ============================================================ 📷 Generated portrait:
🎨 Edited version:
============================================================ Record 4: fox wearing a red bow tie Setting: a red carpet event, Style: a photorealistic style ============================================================ 📷 Generated portrait:
🎨 Edited version:
⏭️ Next steps¶
- Experiment with different autoregressive models for image generation and editing
- Try more creative editing prompts (style transfer, background replacement, artistic filters)
- Combine image generation with text generation (e.g., generate captions using an LLM-Text column with ImageContext)
- Chain more than two image columns for multi-step editing pipelines
Related tutorials:
- The basics: samplers and LLM text columns
- Providing images as context: image-to-text with VLMs
- Generating images: text-to-image generation with diffusion models