Integrating Services

This guide is for developers building HTTP services that Dispatched calls as workflow steps. It covers the request headers your service receives, the response headers you can send back, and best practices for reliability.

Request headers from Dispatched

Every HTTP call from a workflow step includes these headers:

Header Example Description
Dispatched-Run run_WxEu4lDvF9 The run ID. Use for logging and correlation.
Dispatched-Step charge-payment The step ID within the workflow.
Dispatched-Attempt 1 Attempt number (starts at 1, increments on retry).
Idempotency-Key run_WxEu4l/charge-payment Unique key for this step execution. Use to prevent duplicate processing.

Using the idempotency key

The Idempotency-Key header ensures that retried requests don’t cause duplicate side effects. Your service should:

  1. On first request: process normally, store the result keyed by the idempotency key
  2. On subsequent requests with the same key: return the stored result without reprocessing

# Example: idempotent payment processing
@app.post("/charge")
def charge(request):
    idem_key = request.headers.get("Idempotency-Key")
    
    # Check if we already processed this step execution.
    # (In production, prefer an atomic set-if-absent here so two
    # concurrent retries can't both pass this check.)
    existing = db.get(f"idem:{idem_key}")
    if existing:
        return existing  # Return stored result, don't charge again
    
    # Process the charge
    result = stripe.charges.create(...)
    
    # Store result for idempotency
    db.set(f"idem:{idem_key}", result, ttl=86400)
    return result

Using the attempt number

Dispatched-Attempt tells your service how many times this step has been tried. You can use it to:

  • Log retry context: “Processing charge (attempt 3 of 3)”
  • Adjust behavior on retries (e.g., skip non-critical side effects)
  • Set different timeouts for retries
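A sketch of that logic as a plain function (the helper name and the idea of pulling max attempts from your retry policy are this example's assumptions, not a Dispatched SDK):

```python
def plan_attempt(headers, max_attempts=3):
    """Decide retry-aware behavior from the Dispatched-Attempt header.

    A sketch only: max_attempts would come from your step's retry policy.
    """
    attempt = int(headers.get("Dispatched-Attempt", "1"))
    return {
        "log_line": f"Processing charge (attempt {attempt} of {max_attempts})",
        # Skip non-critical side effects (e.g. courtesy emails) on retries
        "send_notifications": attempt == 1,
        # Tighten downstream timeouts on later attempts so the step
        # fails fast instead of burning the whole step timeout
        "timeout_seconds": 30 if attempt == 1 else 10,
    }
```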

Response status codes

Dispatched interprets HTTP status codes to decide what happens next:

Status Dispatched behavior
200-299 Success. Step completes, response body stored in context for downstream steps.
400, 401, 403, 404, 422 Permanent failure. Not retried. Step fails immediately.
408 Timeout. Retried if retry policy is configured.
429 Rate limited. Retried with backoff. Respect Retry-After if your service sends it.
500, 502, 503, 504 Transient failure. Retried according to the step’s retry policy.

Best practice: return the right status code

Don’t return 500 for validation errors — use 422. This prevents unnecessary retries:

# Good: 422 for bad input (won't be retried)
if not valid_input(data):
    return {"error": "Invalid amount"}, 422

# Good: 503 for temporary unavailability (will be retried)
if database_overloaded():
    return {"error": "Service temporarily unavailable"}, 503

Response headers your service can send

Directives

Your service can send Dispatched-* response headers to control the workflow engine:

Header Example Effect
Dispatched-Spawn fulfill-order; input=eyJ...; mode=detach Start a child workflow. input is base64-encoded JSON.
Dispatched-Signal run=run_abc; name=payment-ready; data=eyJ... Send a signal to a waiting step in another run.
Dispatched-Cancel run_xyz Cancel another run.
Dispatched-Delay 30s Tell the engine to wait before proceeding to the next step.
Dispatched-Log Processed 42 items Add a log message to the run’s event stream.

Spawning a child workflow

import base64, json

@app.post("/process-order")
def process_order(request):
    order = process(request.json)
    
    # Spawn a fulfillment workflow as a child
    child_input = base64.b64encode(json.dumps({"order_id": order["id"]}).encode()).decode()
    
    return order, 200, {
        "Dispatched-Spawn": f"fulfill-order; input={child_input}; mode=detach"
    }

Sending a signal

@app.post("/webhook/payment-confirmed")
def payment_webhook(request):
    run_id = request.json["metadata"]["run_id"]
    
    return {"ok": True}, 200, {
        "Dispatched-Signal": f"run={run_id}; name=payment-confirmed; data=eyJ..."
    }
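The Dispatched-Log and Dispatched-Delay directives work the same way: set them as response headers. A minimal sketch of building such a response (the helper name is illustrative; only the header names come from the table above):

```python
def progress_response(processed_count, delay="30s"):
    """Build a (body, status, headers) tuple that logs progress to the
    run's event stream and asks the engine to pause before the next step."""
    headers = {
        "Dispatched-Log": f"Processed {processed_count} items",
        "Dispatched-Delay": delay,
    }
    return {"processed": processed_count}, 200, headers
```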

Retry-After

If your service returns 429 or 503, include a Retry-After header to suggest when the engine should retry:

HTTP/1.1 429 Too Many Requests
Retry-After: 60
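In application code the header is just a number of seconds. A sketch of deriving it from a rate-limit window (the reset-time bookkeeping is your service's, not something Dispatched provides):

```python
import time

def rate_limited_response(window_resets_at, now=None):
    """Return a 429 whose Retry-After points at the rate-limit window reset."""
    now = time.time() if now is None else now
    retry_after = max(1, int(window_resets_at - now))  # never suggest 0 seconds
    return {"error": "rate limited"}, 429, {"Retry-After": str(retry_after)}
```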

Response body

The response body is parsed as JSON (if the content type is application/json) and stored in the run context. Downstream steps can access it via expressions:

{{ steps.charge-payment.response.body.transaction_id }}
{{ steps.charge-payment.response.status }}
{{ steps.charge-payment.response.headers.x-request-id }}

Keep responses lean

Only return the data that downstream steps need. Large response bodies are encrypted and stored as events — keep them focused.

# Good: return only what the workflow needs
return {"transaction_id": "txn_123", "status": "captured", "amount": 4999}

# Bad: return the entire Stripe charge object (100+ fields)
return stripe_charge.to_dict()
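One way to enforce this is a small projection helper at the response boundary. A sketch; the helper name and default field list are illustrative:

```python
def lean_response(charge, fields=("transaction_id", "status", "amount")):
    """Project a large provider object down to the fields downstream steps read."""
    return {k: charge[k] for k in fields if k in charge}
```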

Compensation (saga rollback)

If your step has a compensate config, Dispatched will call your rollback endpoint when a later step fails. The compensation request receives the same Idempotency-Key with /compensate appended:

Idempotency-Key: run_WxEu4l/charge-payment/compensate

Design your compensation endpoint to be idempotent — it may be called more than once:

@app.post("/refund")
def refund(request):
    idem_key = request.headers.get("Idempotency-Key")
    
    if already_refunded(idem_key):
        return {"status": "already_refunded"}
    
    result = stripe.refunds.create(charge=request.json["charge_id"])
    mark_refunded(idem_key)
    return result

Timeout handling

Steps have a configurable timeout (default 30 seconds). If your service takes longer:

  • Return 202 Accepted immediately with a reference ID
  • Use a signal to notify the workflow when processing is complete

@app.post("/generate-report")
def generate_report(request):
    run_id = request.headers.get("Dispatched-Run")
    job_id = enqueue_report_generation(request.json, run_id)
    return {"job_id": job_id, "status": "processing"}, 202

Then configure the step with a wait:

"generate-report": {
  "request": {"method": "POST", "url": "https://api.example.com/generate-report"},
  "wait": {"signal": "report-ready", "timeout": "10m"}
}

And send the signal when the job completes:

# Background worker
import requests

def on_report_complete(job):
    requests.post(
        f"https://dispatched.work/api/runs/{job.run_id}/signal/report-ready",
        headers={"Dispatched-Session": session_token},
        json={"report_url": job.result_url}
    )

Security considerations

  • Verify the caller. Check Dispatched-Run and Dispatched-Step headers to confirm the request is from a legitimate workflow. Consider adding a shared secret or HMAC signature.
  • Don’t trust request bodies blindly. The body comes from the workflow definition’s expression evaluation — validate it like any other API input.
  • Log the run ID. Include Dispatched-Run in your logs for tracing across services.
  • Redacted headers. Dispatched automatically redacts Authorization and X-Api-Key headers from event data, so your API keys won’t appear in the run’s event stream.
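A sketch of the shared-secret approach: configure the workflow step to send a signature in a custom header, then verify it against an HMAC of the raw body. The header name and signing scheme are assumptions made for this example; Dispatched does not sign requests for you.

```python
import hashlib
import hmac

def verify_signature(secret, raw_body, signature_header):
    """Check an HMAC-SHA256 hex digest of the raw request body against the
    signature the caller sent. compare_digest avoids timing side channels."""
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header or "")
```

Reject the request with a 401 when verification fails; as a permanent failure, it will not be retried.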

Checklist for a well-integrated service

  • [ ] Handle Idempotency-Key to prevent duplicate processing
  • [ ] Return appropriate status codes (422 for bad input, 503 for transient issues)
  • [ ] Keep response bodies focused on what downstream steps need
  • [ ] Make compensation endpoints idempotent
  • [ ] Log Dispatched-Run and Dispatched-Step for correlation
  • [ ] For long operations, return 202 and use signals
  • [ ] Validate incoming request bodies