# iBlueprint node types

Blueprints are built from nodes, and each node type is designed for a specific kind of work. Choosing the right node type keeps your Blueprint readable and ensures you get the right behaviour from the execution engine. The sections below group node types by purpose so you can find the right one quickly.

## Documentation index

Fetch the complete documentation index at: https://docs.iblueprint.ai/llms.txt
Use this file to discover all available pages before exploring further.
- AI / LLM
- Data
- Logic
- Integrations
- Human
- Agent
## AI / LLM

These nodes call AI models to generate, transform, or analyse content. They are the core of most Blueprints.
### prompt — Text generation

The prompt node sends a message to a language model and returns the generated text. Use it any time you need the model to write, summarise, classify, extract, translate, or reason over text.

Example configuration:

| Field | Description |
|---|---|
| model | The model to use, e.g. gpt-4o, claude-3-5-sonnet, gemini-1.5-pro |
| systemPrompt | Sets the model’s persona and constraints |
| userPrompt | The actual request sent to the model; supports {{variables}} |
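Putting those fields together, a prompt node definition might look like the sketch below. The `type`/`id` wrapper and the specific prompt text are hypothetical; only the three config fields come from the table above.

```json
{
  "type": "prompt",
  "id": "summarise-ticket",
  "config": {
    "model": "gpt-4o",
    "systemPrompt": "You are a concise support analyst.",
    "userPrompt": "Summarise this ticket in two sentences:\n\n{{ticket_body}}"
  }
}
```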
### image_generation — Generate images from text

The image_generation node sends a text prompt to an image model (such as DALL·E or Stable Diffusion) and returns a generated image. Use it to create illustrations, product mockups, or visual assets on the fly.

Key config fields: prompt, model, size, quality, n (number of images).
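A hypothetical configuration using those fields (the model name, size, and wrapper structure are illustrative, not prescribed):

```json
{
  "type": "image_generation",
  "config": {
    "model": "dall-e-3",
    "prompt": "A flat-style illustration of a {{product_name}} on a white background",
    "size": "1024x1024",
    "quality": "standard",
    "n": 2
  }
}
```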
### image_editing — Edit or transform existing images

The image_editing node takes an input image and applies model-guided edits based on a text instruction. Use it to retouch photos, apply styles, or modify regions of an image.

Key config fields: image (URL or base64), instruction, model, mask.
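As a sketch, an image_editing node might be configured like this; the URL, instruction, and model name are placeholders, and a mask would typically be a second image marking the editable region:

```json
{
  "type": "image_editing",
  "config": {
    "image": "https://example.com/photos/team.jpg",
    "instruction": "Brighten the photo and blur the background",
    "model": "dall-e-2",
    "mask": "https://example.com/photos/team-mask.png"
  }
}
```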
### image_variations — Generate variants of an image

The image_variations node produces alternative versions of a source image while preserving its overall composition. Use it to explore creative directions or generate multiple options.

Key config fields: image, n, size, model.
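A minimal illustrative configuration (all values are placeholders):

```json
{
  "type": "image_variations",
  "config": {
    "image": "https://example.com/assets/logo-draft.png",
    "n": 4,
    "size": "512x512",
    "model": "dall-e-2"
  }
}
```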
### vision — Analyse images with a multimodal model

The vision node passes one or more images to a multimodal model alongside a text prompt. Use it to describe images, extract text from screenshots, or answer questions about visual content.

Key config fields: images (array of URLs or base64), prompt, model.
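For example, a vision node answering a question about a screenshot might be configured as below (the image URL and prompt are hypothetical):

```json
{
  "type": "vision",
  "config": {
    "model": "gpt-4o",
    "images": ["https://example.com/screens/dashboard.png"],
    "prompt": "List every metric shown on this dashboard and its current value."
  }
}
```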
### ocr — Extract text from images

The ocr node runs optical character recognition on an image and returns the extracted text. Use it to digitise scanned documents, receipts, or screenshots before feeding the text into a prompt node.

Key config fields: image, language.
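A sketch of an ocr node whose output would then feed a downstream prompt node (the URL is a placeholder, and the language code is assumed to be ISO 639-1):

```json
{
  "type": "ocr",
  "config": {
    "image": "https://example.com/scans/receipt-0172.png",
    "language": "en"
  }
}
```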
### speech_to_text — Transcribe audio

The speech_to_text node transcribes an audio file into text. Use it to process voice recordings, meeting audio, or podcast episodes before running downstream analysis.

Key config fields: audio (URL or base64), model, language.
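An illustrative configuration; the model name and audio URL are assumptions, not fixed values:

```json
{
  "type": "speech_to_text",
  "config": {
    "audio": "https://example.com/recordings/standup-monday.mp3",
    "model": "whisper-1",
    "language": "en"
  }
}
```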
### text_to_speech — Convert text to audio

The text_to_speech node synthesises spoken audio from a text string. Use it to build audio summaries, voice interfaces, or accessibility features.

Key config fields: text, voice, model, speed.
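As a hypothetical sketch, a node that reads out an upstream summary might look like this (the voice and model names are illustrative, and {{daily_summary}} assumes variable interpolation works here as it does for prompt nodes):

```json
{
  "type": "text_to_speech",
  "config": {
    "text": "{{daily_summary}}",
    "voice": "alloy",
    "model": "tts-1",
    "speed": 1.0
  }
}
```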
### video_analysis — Understand video content

The video_analysis node sends a video (or keyframes) to a multimodal model for analysis. Use it to summarise video content, detect events, or extract structured information from recordings.

Key config fields: video (URL), prompt, model.
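A sketch using the fields above; the video URL and prompt are placeholders (gemini-1.5-pro is one of the models named earlier in this page):

```json
{
  "type": "video_analysis",
  "config": {
    "video": "https://example.com/recordings/demo-call.mp4",
    "prompt": "Summarise the demo and list any feature requests mentioned.",
    "model": "gemini-1.5-pro"
  }
}
```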
### data_extraction — Pull structured data from text

The data_extraction node instructs a model to extract structured fields from unstructured text and return them as JSON. Use it to parse emails, reports, or any free-form document into a schema you define.

Key config fields: input, schema (JSON Schema), model.
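As an illustration, a data_extraction node that parses an incoming email into a small schema might be configured as follows. The schema shown is an ordinary JSON Schema fragment; the field names and the {{email_body}} variable are hypothetical.

```json
{
  "type": "data_extraction",
  "config": {
    "input": "{{email_body}}",
    "model": "gpt-4o",
    "schema": {
      "type": "object",
      "properties": {
        "sender_intent": { "type": "string" },
        "requested_date": { "type": "string" },
        "urgency": { "type": "string", "enum": ["low", "medium", "high"] }
      },
      "required": ["sender_intent"]
    }
  }
}
```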