This guide teaches you how to build powerful automated workflows that chain AI operations together. Learn to create pipelines for content generation, data processing, and complex multi-step tasks.
Understanding Workflows
What is a Workflow?
A workflow is a sequence of automated steps that process data:
```
Input → Step 1 → Step 2 → Step 3 → Output
```
Each step can:
- Call AI models
- Generate content
- Transform data
- Make API calls
- Apply conditions
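Conceptually, each step is a function whose output becomes the next step's input. A minimal sketch in JavaScript (illustrative only, not the platform's actual engine):

```javascript
// Minimal sequential pipeline: each step receives the previous step's output.
// Hypothetical sketch, not the platform's real execution engine.
function runPipeline(input, steps) {
  return steps.reduce((data, step) => step(data), input);
}

// Example: trim, uppercase, then wrap the input.
const result = runPipeline("  hello  ", [
  (s) => s.trim(),
  (s) => s.toUpperCase(),
  (s) => `[${s}]`,
]);
// result === "[HELLO]"
```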
When to Use Workflows
| Use Case | Example |
|---|---|
| Batch Processing | Generate descriptions for 100 products |
| Multi-Step Tasks | Research → Outline → Write → Edit |
| Scheduled Jobs | Daily report generation |
| API Integration | Trigger on webhook, return results |
| Complex Pipelines | Multi-model content creation |
Creating Your First Workflow
Step 1: Plan Your Workflow
Before building, map out:
- Input: What data starts the workflow?
- Steps: What operations are needed?
- Dependencies: Which steps depend on others?
- Output: What's the final result?
Example Planning:
```
Goal: Generate blog post from topic
Input: topic (string), tone (string)
Steps:
  1. Generate outline from topic
  2. Write introduction from outline
  3. Write body sections from outline
  4. Write conclusion
  5. Combine all sections
Output: Complete blog post
```
Step 2: Create the Workflow
- Navigate to Workflows in the sidebar
- Click New Workflow
- Enter name and description:
- Name: "Blog Post Generator"
- Description: "Creates complete blog posts from topics"
Step 3: Define Inputs
Configure workflow inputs:
```yaml
inputs:
  - name: topic
    type: string
    required: true
    description: "The blog post topic"
  - name: tone
    type: string
    required: false
    default: "professional"
    description: "Writing tone (professional, casual, technical)"
  - name: wordCount
    type: number
    required: false
    default: 1000
    description: "Target word count"
```
Step 4: Add Steps
Step 1: Generate Outline
```yaml
step: generate_outline
type: ai_chat
config:
  model: gemini-2.0-flash
  temperature: 0.5
  prompt: |
    Create a detailed outline for a blog post about: {{input.topic}}

    Tone: {{input.tone}}
    Target length: {{input.wordCount}} words

    Include:
    - Compelling title
    - Introduction hook
    - 3-5 main sections with key points
    - Conclusion summary

    Format as structured markdown.
```
Step 2: Write Introduction
```yaml
step: write_intro
type: ai_chat
depends_on: generate_outline
config:
  model: gemini-2.0-flash
  temperature: 0.7
  prompt: |
    Based on this outline:
    {{steps.generate_outline.output}}

    Write an engaging introduction (150-200 words) that:
    - Hooks the reader immediately
    - Introduces the topic
    - Previews what they'll learn

    Tone: {{input.tone}}
```
Step 3: Write Body
```yaml
step: write_body
type: ai_chat
depends_on: generate_outline
config:
  model: gemini-1.5-pro
  temperature: 0.6
  prompt: |
    Based on this outline:
    {{steps.generate_outline.output}}

    Write the main body sections ({{input.wordCount * 0.7}} words).

    For each section:
    - Use clear subheadings
    - Include practical examples
    - Add relevant details

    Tone: {{input.tone}}
```
Step 4: Write Conclusion
```yaml
step: write_conclusion
type: ai_chat
depends_on: [write_intro, write_body]
config:
  model: gemini-2.0-flash
  temperature: 0.6
  prompt: |
    Based on the introduction:
    {{steps.write_intro.output}}

    And body:
    {{steps.write_body.output}}

    Write a strong conclusion (100-150 words) that:
    - Summarizes key points
    - Provides a call to action
    - Leaves a lasting impression
```
Step 5: Combine
```yaml
step: combine
type: transform
depends_on: [write_intro, write_body, write_conclusion]
config:
  template: |
    {{steps.write_intro.output}}

    {{steps.write_body.output}}

    {{steps.write_conclusion.output}}
```
Step 5: Test and Save
- Click Test Workflow
- Enter sample inputs:
  ```json
  {
    "topic": "Remote Work Best Practices",
    "tone": "professional",
    "wordCount": 1000
  }
  ```
- Review the output
- Click Save
Workflow Patterns
Linear Pipeline
Steps execute in sequence:
```
[Input] → [Step 1] → [Step 2] → [Step 3] → [Output]
```
Use for: Simple transformations, content generation
Parallel Processing
Independent steps run simultaneously:
```
          ┌→ [Step 2a] ─┐
[Step 1] ─┼→ [Step 2b] ─┼→ [Step 3]
          └→ [Step 2c] ─┘
```
Use for: Multi-format output, independent generations
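In code terms, this fan-out/fan-in shape means starting the independent steps together and waiting for all of them before the join step runs. A sketch using `Promise.all` (the step functions here are placeholders, not platform APIs):

```javascript
// Fan-out: run independent steps concurrently on the same input.
// Fan-in: wait for all results before the joining step.
async function fanOut(input, steps) {
  return Promise.all(steps.map((step) => step(input)));
}

async function main() {
  const [a, b, c] = await fanOut(5, [
    async (n) => n + 1,  // step 2a
    async (n) => n * 2,  // step 2b
    async (n) => n ** 2, // step 2c
  ]);
  return a + b + c; // step 3 joins: 6 + 10 + 25 = 41
}
```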
Conditional Branching
Different paths based on conditions:
```
          ┌→ [Path A] ─┐
[Step 1] ─┤            ├→ [Output]
          └→ [Path B] ─┘
```
Use for: Different handling based on input/results
Loop Processing
Iterate over items:
```
[Input List] → [Loop] → [Process Item] → [Collect Results]
```
Use for: Batch processing, list transformations
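The loop pattern maps one processing step over every item and collects the results in order. A sketch (`processItem` stands in for the per-item step):

```javascript
// Run a per-item step over a list and collect the results, in order.
// `processItem` is a placeholder for whatever step the loop wraps.
async function runLoop(items, processItem) {
  const results = [];
  for (const [index, item] of items.entries()) {
    // index plays the role of {{loop.index}}, item of {{loop.item}}
    results.push(await processItem(item, index));
  }
  return results;
}
```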
Step Types
AI Chat Step
Send a message to an AI model:
```yaml
type: ai_chat
config:
  model: gemini-2.0-flash  # or gemini-1.5-pro
  temperature: 0.7
  systemPrompt: "You are a helpful assistant"
  prompt: "User message here"
```
Image Generation Step
Generate an image:
```yaml
type: image_generation
config:
  model: flux-pro-1.1
  prompt: "Image description"
  aspectRatio: "16:9"
  quality: "high"
```
Transform Step
Modify data:
```yaml
type: transform
config:
  template: "Combined: {{steps.step1.output}} + {{steps.step2.output}}"
  # Or use a JavaScript expression
  expression: "steps.data.items.map(i => i.name).join(', ')"
```
HTTP Request Step
Call external APIs:
```yaml
type: http_request
config:
  method: POST
  url: "https://api.example.com/webhook"
  headers:
    Authorization: "Bearer {{secrets.API_KEY}}"
    Content-Type: "application/json"
  body:
    data: "{{steps.previousStep.output}}"
```
Condition Step
Branch based on logic:
```yaml
type: condition
config:
  expression: "input.priority === 'high'"
  trueBranch: high_priority_step
  falseBranch: normal_step
```
Loop Step
Process arrays:
```yaml
type: loop
config:
  items: "{{input.products}}"
  itemVariable: "product"
  step:
    type: ai_chat
    prompt: "Describe {{product.name}}"
```
Variables and Data Flow
Accessing Variables
| Syntax | Description | Example |
|---|---|---|
| `{{input.name}}` | Workflow input | `{{input.topic}}` |
| `{{steps.stepName.output}}` | Step output | `{{steps.outline.output}}` |
| `{{secrets.KEY}}` | Secret value | `{{secrets.API_KEY}}` |
| `{{loop.item}}` | Current loop item | `{{loop.item.name}}` |
| `{{loop.index}}` | Current loop index | `{{loop.index}}` |
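Under the hood, each `{{...}}` placeholder resolves a dotted path against a context object holding `input`, `steps`, and so on. A simplified resolver, for intuition only (the platform's actual template engine may differ):

```javascript
// Resolve "{{path.to.value}}" placeholders against a context object.
// Simplified sketch of template interpolation; missing paths become "".
function interpolate(template, context) {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (_, path) =>
    path.split(".").reduce((obj, key) => obj?.[key], context) ?? ""
  );
}

const context = {
  input: { topic: "AI" },
  steps: { outline: { output: "1. Intro" } },
};
const text = interpolate(
  "Topic: {{input.topic}}. Outline: {{steps.outline.output}}",
  context
);
// text === "Topic: AI. Outline: 1. Intro"
```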
Transform Expressions
Use JavaScript expressions for complex transformations:
```yaml
# String manipulation
expression: "input.name.toUpperCase()"

# Array operations
expression: "steps.data.output.items.filter(i => i.active)"

# Conditional
expression: "input.premium ? 'VIP' : 'Standard'"

# JSON parsing
expression: "JSON.parse(steps.api.output).results"
```
Data Passing Best Practices
- Keep data minimal - Only pass what's needed
- Use clear names - Descriptive variable names
- Handle null - Check for missing data
- Format consistently - Standardize data formats
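As an example of the "handle null" rule, optional chaining and a default value keep a transform from throwing when an upstream output is missing (a sketch; `steps` stands in for the workflow's step outputs):

```javascript
// Guard against missing upstream data before using it in a transform.
// `steps` is a stand-in for the workflow's step-output context.
function safeItems(steps) {
  const items = steps?.data?.output?.items ?? [];
  return items.filter((i) => i.active).map((i) => i.name);
}

// With data present:
safeItems({
  data: {
    output: {
      items: [
        { name: "a", active: true },
        { name: "b", active: false },
      ],
    },
  },
});
// → ["a"]

// With data missing, it returns [] instead of throwing:
safeItems({});
// → []
```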
Workflow Examples
Product Description Generator
```yaml
name: Product Description Generator
description: Generates product descriptions for e-commerce

inputs:
  - name: products
    type: array
    required: true
    description: Array of product objects

steps:
  - id: generate_descriptions
    type: loop
    config:
      items: "{{input.products}}"
      itemVariable: product
      step:
        type: ai_chat
        config:
          model: gemini-2.0-flash
          temperature: 0.7
          prompt: |
            Write a compelling product description for:

            Name: {{product.name}}
            Category: {{product.category}}
            Features: {{product.features}}
            Price: ${{product.price}}

            Include:
            - Attention-grabbing headline
            - Key benefits (not just features)
            - Call to action

  - id: format_output
    type: transform
    depends_on: generate_descriptions
    config:
      expression: |
        steps.generate_descriptions.output.map((desc, i) => ({
          productId: input.products[i].id,
          description: desc
        }))
```
Social Media Content Pipeline
```yaml
name: Social Media Pipeline
description: Creates multi-platform content from a single brief

inputs:
  - name: brief
    type: string
    required: true
  - name: brand_voice
    type: string
    default: "friendly and professional"

steps:
  - id: core_message
    type: ai_chat
    config:
      prompt: |
        Extract the core marketing message from this brief:
        {{input.brief}}

        Brand voice: {{input.brand_voice}}
        Return a 1-2 sentence core message.

  - id: twitter_post
    type: ai_chat
    depends_on: core_message
    config:
      prompt: |
        Create a Twitter post (max 280 chars) for:
        {{steps.core_message.output}}
        Include relevant hashtags.

  - id: linkedin_post
    type: ai_chat
    depends_on: core_message
    config:
      prompt: |
        Create a LinkedIn post (300-500 words) for:
        {{steps.core_message.output}}
        Professional tone, include call to action.

  - id: instagram_caption
    type: ai_chat
    depends_on: core_message
    config:
      prompt: |
        Create an Instagram caption for:
        {{steps.core_message.output}}
        Engaging, include emojis, relevant hashtags.

  - id: compile_output
    type: transform
    depends_on: [twitter_post, linkedin_post, instagram_caption]
    config:
      template: |
        {
          "twitter": "{{steps.twitter_post.output}}",
          "linkedin": "{{steps.linkedin_post.output}}",
          "instagram": "{{steps.instagram_caption.output}}"
        }
```
Customer Support Auto-Response
```yaml
name: Support Ticket Auto-Response
description: Automatically categorizes and responds to support tickets

inputs:
  - name: ticket
    type: object
    required: true
    # { subject, body, customerId }

steps:
  - id: categorize
    type: ai_chat
    config:
      temperature: 0.2
      prompt: |
        Categorize this support ticket:
        Subject: {{input.ticket.subject}}
        Body: {{input.ticket.body}}

        Categories: billing, technical, feature_request, general
        Return only the category name.

  - id: check_category
    type: condition
    depends_on: categorize
    config:
      expression: "steps.categorize.output.trim() === 'billing'"
      trueBranch: billing_response
      falseBranch: general_response

  - id: billing_response
    type: ai_chat
    config:
      systemPrompt: "You are a billing support specialist"
      prompt: |
        Write a helpful response to this billing inquiry:
        {{input.ticket.body}}

        Be empathetic and solution-focused.

  - id: general_response
    type: ai_chat
    config:
      systemPrompt: "You are a friendly support agent"
      prompt: |
        Write a helpful response to:
        {{input.ticket.body}}
        Category: {{steps.categorize.output}}
```
Execution and Monitoring
Triggering Workflows
Via API:
```bash
curl -X POST https://www.girardai.com/api/v1/workflows/{id}/execute \
  -H "Authorization: Bearer sk_live_xxx" \
  -H "Content-Type: application/json" \
  -d '{"inputs": {"topic": "AI"}}'
```
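The same trigger from JavaScript: the helper below assembles the request shown in the curl example so its shape can be inspected before sending (the workflow id and API key are placeholders; send it with `fetch` or any HTTP client):

```javascript
// Build the HTTP request that triggers a workflow run, mirroring the
// curl example above. Ids and keys here are placeholders.
function buildExecuteRequest(workflowId, apiKey, inputs) {
  return {
    url: `https://www.girardai.com/api/v1/workflows/${workflowId}/execute`,
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ inputs }),
  };
}

const req = buildExecuteRequest("wf_123", "sk_live_xxx", { topic: "AI" });
// req.body === '{"inputs":{"topic":"AI"}}'
```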
Via Webhook:
```
POST https://www.girardai.com/api/v1/webhooks/{workflowId}
```
Via Dashboard:
- Open workflow
- Click "Run"
- Enter inputs
- Execute
Monitoring Runs
Check execution status:
- Go to workflow Runs tab
- See all executions
- Click run for details:
- Step-by-step progress
- Inputs/outputs
- Errors and logs
- Duration
Handling Errors
Build in error handling:
```yaml
steps:
  - id: risky_step
    type: http_request
    config:
      url: "{{input.apiUrl}}"
    onError:
      retry:
        attempts: 3
        delay: 1000
      fallback: error_handler_step

  - id: error_handler_step
    type: transform
    config:
      template: |
        {
          "success": false,
          "error": "API call failed",
          "fallbackData": "Default response"
        }
```
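The retry-then-fallback behavior configured above can be pictured as follows (a sketch mirroring `attempts`, `delay`, and `fallback`; not the platform's implementation):

```javascript
// Retry a step up to `attempts` times, waiting `delay` ms between tries,
// then hand the last error to a fallback, mirroring the onError config.
async function withRetry(step, { attempts = 3, delay = 1000, fallback }) {
  for (let i = 1; i <= attempts; i++) {
    try {
      return await step();
    } catch (err) {
      if (i === attempts) return fallback(err);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```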
Best Practices
Performance
- Parallelize independent steps - Run together when possible
- Use appropriate models - Flash for speed, Pro for quality
- Limit data passing - Only pass necessary data
- Cache when possible - Avoid redundant operations
Reliability
- Add error handling - Retry and fallback logic
- Validate inputs - Check data before processing
- Log important steps - Debugging and auditing
- Test thoroughly - Cover edge cases
Maintainability
- Use clear names - Descriptive step IDs
- Add comments - Document complex logic
- Modularize - Break into smaller workflows
- Version control - Track changes
Previous: MCP Integration | Next: Prompt Engineering