Executions

An execution is a single firing of a cue. It tracks delivery, retries, and outcomes.

What is an execution?

Every time a cue fires, CueAPI creates an execution — a record of that specific delivery attempt. Executions track:

  • When it was scheduled
  • Whether delivery succeeded
  • How many attempts were made
  • The handler's reported outcome

Execution lifecycle

```
pending → delivering → success
                     ↘ retrying → retry_ready → delivering → success
                                                           ↘ failed (max attempts)
```
| Status | Meaning |
| --- | --- |
| `pending` | Created, waiting for delivery |
| `delivering` | Currently being delivered (webhook POST or worker claim) |
| `success` | Webhook returned 2xx, or worker reported success |
| `retrying` | Failed, scheduled for retry |
| `retry_ready` | Retry is due, waiting for a worker to pick it up |
| `failed` | All retry attempts exhausted |
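The lifecycle above can be modeled as a small transition map. This is an illustrative sketch only — the status names come from the table, but the `TRANSITIONS` structure and `can_transition` helper are hypothetical, not part of the CueAPI client:

```python
# Hypothetical model of the execution state machine described above.
# Terminal states (success, failed) allow no further transitions.
TRANSITIONS = {
    "pending": {"delivering"},
    "delivering": {"success", "retrying", "failed"},
    "retrying": {"retry_ready"},
    "retry_ready": {"delivering"},
    "success": set(),  # terminal
    "failed": set(),   # terminal
}

def can_transition(current: str, target: str) -> bool:
    """Return True if the lifecycle allows moving from current to target."""
    return target in TRANSITIONS.get(current, set())
```

For example, `can_transition("retrying", "retry_ready")` is true, while nothing can leave `success` or `failed`.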

Retry logic

When a webhook delivery fails (non-2xx response or timeout), CueAPI retries with exponential backoff:

| Attempt | Delay |
| --- | --- |
| 1st retry | 5 minutes |
| 2nd retry | 10 minutes |
| 3rd retry | 20 minutes |

After 3 failed retries (4 attempts in total, including the original delivery), the execution is marked `failed`.
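The delays in the table follow a doubling pattern from a 5-minute base. A minimal sketch of that calculation (the constants mirror the documented defaults; the helper itself is illustrative):

```python
# Illustrative reproduction of the documented backoff schedule:
# a 5-minute base delay that doubles with each subsequent retry.
BASE_DELAY_MINUTES = 5
MAX_RETRIES = 3

def retry_delay_minutes(retry_number: int) -> int:
    """Delay before the Nth retry (1-based): 5, 10, 20 minutes."""
    if not 1 <= retry_number <= MAX_RETRIES:
        raise ValueError("retry_number must be between 1 and MAX_RETRIES")
    return BASE_DELAY_MINUTES * 2 ** (retry_number - 1)
```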

Note

Retry configuration is per-cue. The default is 3 retries with a 5-minute initial backoff that doubles on each attempt.

Viewing executions

Executions are returned when you fetch a cue's details:

```bash
curl https://api.cueapi.ai/v1/cues/{cue_id} \
  -H "Authorization: Bearer cue_sk_..."
```

The response includes the last 10 executions:

```json
{
  "id": "cue_abc123def456",
  "name": "daily-sync",
  "executions": [
    {
      "id": "exec-uuid-here",
      "status": "success",
      "scheduled_for": "2026-03-13T09:00:00Z",
      "attempts": 1,
      "http_status": 200,
      "outcome": {
        "success": true,
        "result": "Synced 142 records"
      }
    }
  ]
}
```
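One way to work with that payload is to tally the `executions` array by status, e.g. to spot failures at a glance. The field names follow the example response above; the `summarize_executions` helper is a hypothetical sketch, not a CueAPI SDK function:

```python
import json

def summarize_executions(response_body: str) -> dict:
    """Count the executions in a cue payload, grouped by status."""
    cue = json.loads(response_body)
    counts: dict = {}
    for execution in cue.get("executions", []):
        status = execution["status"]
        counts[status] = counts.get(status, 0) + 1
    return counts
```

Given a cue whose last executions were two successes and one failure, this returns `{"success": 2, "failed": 1}`.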

Deduplication

Executions are deduplicated on (cue_id, scheduled_for). If the poller processes the same cue twice for the same scheduled time, only one execution is created.
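The dedup rule amounts to a uniqueness check on the `(cue_id, scheduled_for)` pair. This in-memory sketch is purely illustrative (a real service would enforce it with something like a database unique constraint):

```python
# Illustrative dedup on (cue_id, scheduled_for): a second poll for the
# same cue and scheduled time does not create a second execution.
_seen: set = set()

def create_execution(cue_id: str, scheduled_for: str) -> bool:
    """Return True if a new execution was created, False if deduplicated."""
    key = (cue_id, scheduled_for)
    if key in _seen:
        return False  # duplicate poll for the same scheduled time
    _seen.add(key)
    return True
```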

Stale execution recovery

If an execution gets stuck in delivering for more than 5 minutes (e.g., worker crashed), the poller automatically recovers it:

  • If retries remain → moves to retrying
  • If retries exhausted → marks failed
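The recovery decision above can be sketched as a pure function of the execution's state. The 5-minute staleness threshold comes from the text; the function signature and field names are assumptions for illustration:

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(minutes=5)

def recover_stale(status: str, delivering_since: datetime,
                  attempts: int, max_attempts: int,
                  now: datetime) -> str:
    """Return the new status for an execution stuck in delivering."""
    if status != "delivering" or now - delivering_since < STALE_AFTER:
        return status  # not stale: leave it alone
    if attempts < max_attempts:
        return "retrying"  # retries remain
    return "failed"        # retries exhausted
```

A worker that crashed ten minutes into a delivery would thus have its execution moved to `retrying` if attempts remain, or `failed` otherwise.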