# Agent Turn Scheduling

Schedule AI agent invocations with CueAPI. Full pattern with LLM integration.

## What is an agent turn?

An "agent turn" is a scheduled invocation of an AI agent. CueAPI fires on time, your handler invokes the agent with a prompt and context, and the outcome is reported back.

## CueAPI is the clock. The handler is the brain.

CueAPI doesn't know about LLMs, prompts, or agents. It fires on schedule and delivers a payload. Your handler does the rest.

## The pattern

```
CueAPI fires on schedule
    → Worker receives execution
    → Handler reads instruction + context_ref
    → Handler fetches live context
    → Handler invokes LLM with instruction + context
    → Handler does something with the result
    → Worker auto-reports outcome to CueAPI
```
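The flow above can be sketched as a minimal handler skeleton. Names like `fetch_context` and `invoke_agent` are illustrative, not part of CueAPI, and the `/data/context/` path is an assumed storage location:

```python
import json


def fetch_context(context_ref: str, context_mode: str) -> dict:
    """Resolve context_ref to data; "live" means fetch fresh at execution time."""
    if context_mode != "live":
        return {}
    with open(f"/data/context/{context_ref}.json") as f:  # assumed location
        return json.load(f)


def invoke_agent(instruction: str, context: dict) -> str:
    """Stand-in for the real LLM call (shown in the full example)."""
    return f"(agent output for: {instruction})"


def handle(payload: dict) -> None:
    instruction = payload["instruction"]
    context = fetch_context(payload["context_ref"], payload["context_mode"])
    result = invoke_agent(instruction, context)
    # Anything printed to stdout becomes part of the reported outcome.
    print(result)
```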

## Agent turn payload

```json
{
  "task": "draft-linkedin",
  "kind": "agent_turn",
  "agent": "socrates",
  "instruction": "Draft a LinkedIn post about today's top 3 AI developments",
  "context_ref": "daily-trends",
  "context_mode": "live"
}
```

| Field | Description |
| --- | --- |
| `task` | Handler name for worker routing |
| `kind` | `"agent_turn"` signals this requires LLM invocation |
| `agent` | Which agent should handle this |
| `instruction` | The prompt or task description |
| `context_ref` | Where to find live context |
| `context_mode` | `"live"` = fetch fresh context at execution time |

## Full example: LinkedIn post drafting

### 1. Create the cue

```bash
cueapi create \
  --name "linkedin-daily" \
  --cron "0 9 * * 1-5" \
  --transport worker \
  --payload '{
    "task": "draft-linkedin",
    "kind": "agent_turn",
    "agent": "socrates",
    "instruction": "Draft a LinkedIn post about the top 3 AI developments today",
    "context_ref": "daily-trends",
    "context_mode": "live"
  }'
```

### 2. Worker config

```yaml
handlers:
  draft-linkedin:
    cmd: "python3 linkedin_draft.py"
    cwd: "/home/user/agents/content-pipeline"
    timeout: 300
    env:
      INSTRUCTION: "{{ payload.instruction }}"
      CONTEXT_REF: "{{ payload.context_ref }}"
      CONTEXT_MODE: "{{ payload.context_mode }}"
      AGENT: "{{ payload.agent }}"
```
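The `{{ payload.* }}` placeholders are filled from the cue's payload before the command runs. A sketch of how that expansion might work (the worker's actual templating engine may differ):

```python
import re

# Matches "{{ payload.<key> }}" with optional surrounding whitespace.
PLACEHOLDER = re.compile(r"\{\{\s*payload\.(\w+)\s*\}\}")


def render_env(env_template: dict, payload: dict) -> dict:
    """Substitute payload values into each environment variable template."""
    return {
        name: PLACEHOLDER.sub(lambda m: str(payload.get(m.group(1), "")), template)
        for name, template in env_template.items()
    }


env = render_env(
    {"INSTRUCTION": "{{ payload.instruction }}", "AGENT": "{{ payload.agent }}"},
    {"instruction": "Draft a post", "agent": "socrates"},
)
# env == {"INSTRUCTION": "Draft a post", "AGENT": "socrates"}
```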

### 3. Handler script

```python
# linkedin_draft.py
import json
import os

from anthropic import Anthropic

instruction = os.environ["INSTRUCTION"]
context_ref = os.environ["CONTEXT_REF"]
context_mode = os.environ["CONTEXT_MODE"]

# Fetch live context (handler's responsibility)
if context_mode == "live":
    with open(f"/data/context/{context_ref}.json") as f:
        context = json.load(f)
else:
    context = {}

# Invoke LLM (handler's responsibility)
client = Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1000,
    messages=[{
        "role": "user",
        "content": f"{instruction}\n\nContext:\n{json.dumps(context, indent=2)}"
    }]
)

draft = response.content[0].text
print(f"Draft generated: {len(draft)} chars")

# Save for review; stdout becomes the execution's reported result
with open("/data/drafts/linkedin-today.md", "w") as f:
    f.write(draft)

print("Draft saved for review.")
```

### 4. What CueAPI sees

```
Execution: draft-linkedin
Scheduled: 2026-03-13 09:00:00 UTC
Status: success

Outcome:
  success: true
  result: "Draft generated: 847 chars\nDraft saved for review."
```

## More examples

### Daily standup summary

```json
{
  "task": "standup-summary",
  "kind": "agent_turn",
  "agent": "hermes",
  "instruction": "Summarize yesterday's commits and open PRs into a standup update",
  "context_ref": "github-activity",
  "context_mode": "live"
}
```

### Weekly report

```json
{
  "task": "weekly-report",
  "kind": "agent_turn",
  "agent": "athena",
  "instruction": "Draft the weekly progress report from this week's completed tasks",
  "context_ref": "task-tracker",
  "context_mode": "live"
}
```

### Scheduled code review

```json
{
  "task": "code-review",
  "kind": "agent_turn",
  "agent": "socrates",
  "instruction": "Review open PRs and leave feedback on code quality and test coverage",
  "context_ref": "open-prs",
  "context_mode": "live"
}
```
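These payloads differ only in a few fields, so a small builder keeps them consistent. This is a hypothetical helper, not part of CueAPI:

```python
def agent_turn(task: str, agent: str, instruction: str, context_ref: str) -> dict:
    """Build an agent_turn payload following the convention above."""
    return {
        "task": task,
        "kind": "agent_turn",
        "agent": agent,
        "instruction": instruction,
        "context_ref": context_ref,
        "context_mode": "live",
    }


standup = agent_turn(
    "standup-summary",
    "hermes",
    "Summarize yesterday's commits and open PRs into a standup update",
    "github-activity",
)
```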

## Why this pattern works

**Separation of concerns.** CueAPI handles scheduling, delivery guarantees, retries, and outcome tracking. Your handler handles context resolution, LLM invocation, and business logic.

**Observable.** Every agent turn is an execution with a status, outcome, and result. If the agent fails, you see it. If it succeeds, you see what it produced.

**Transport-agnostic.** The same payload convention works with both webhook and worker transports.
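With webhook transport, the same payload arrives as an HTTP POST instead of a worker execution. A minimal receiver sketch, assuming CueAPI delivers the payload as the JSON request body (the port and endpoint here are arbitrary):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def route(payload: dict) -> str:
    """Decide what to do with a delivered payload; agent_turn goes to the LLM path."""
    if payload.get("kind") == "agent_turn":
        return f"agent turn for {payload['agent']}: {payload['task']}"
    return "ignored"


class CueWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        print(route(payload))
        self.send_response(200)
        self.end_headers()


# To run the receiver:
# HTTPServer(("", 8080), CueWebhook).serve_forever()
```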