Signals

Signals are events that trigger jobs in Atlas. They’re the starting point for every agent workflow, whether the trigger is a CLI command, a webhook, a scheduled task, or an internal system event.

What are Signals?

Think of signals as the “when” in “when this happens, do that”:
  • When a CLI command is run → execute a job
  • When a webhook is received → process the data
  • When it’s 2 AM → run maintenance tasks
  • When a file changes → analyze the updates
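
For example, a minimal configuration wires a CLI signal to a job (the names here are illustrative, not built-in):
signals:
  say-hello:
    provider: "cli"
    description: "Greet the user on demand"

jobs:
  greet:
    triggers:
      - signal: "say-hello"
    execution:
      agents:
        - id: "greeter"
          input_source: "signal"  # the agent receives the signal's payload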

Signal Providers

Atlas supports several signal providers out of the box:

CLI Provider 🖥️

The most common provider for interactive use:
signals:
  analyze:
    provider: "cli"
    description: "Analyze data on demand"
    
  chat:
    provider: "cli"
    description: "Start an interactive chat session"
Trigger CLI signals:
# Simple trigger
atlas signal trigger analyze

# With data
atlas signal trigger analyze --data '{"file": "data.csv"}'

# In interactive mode
/signal trigger chat --data '{"message": "Hello!"}'

Webhook Provider 🔗

Receive signals from external systems:
signals:
  github-push:
    provider: "webhook"
    description: "Triggered by GitHub push events"
    config:
      path: "/webhooks/github"
      method: "POST"
      auth:
        type: "secret"
        header: "X-Hub-Signature"
The Atlas daemon exposes this webhook at:
http://localhost:8080/webhooks/github
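
Assuming the daemon is running on its default port, you can exercise the endpoint with a plain POST; the signature value below is a placeholder that Atlas checks against your configured secret:
# Send a test payload to the webhook endpoint
curl -X POST http://localhost:8080/webhooks/github \
  -H "Content-Type: application/json" \
  -H "X-Hub-Signature: sha1=<hmac-of-request-body>" \
  -d '{"action": "push", "repository": "tempestdx/atlas"}'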

Schedule Provider ⏰

Run jobs on a schedule using cron syntax:
signals:
  daily-report:
    provider: "schedule"
    description: "Generate daily reports"
    config:
      schedule: "0 9 * * *"  # Every day at 9 AM
      timezone: "America/New_York"
  
  cleanup:
    provider: "schedule"
    description: "Clean up old data"
    config:
      schedule: "0 2 * * SUN"  # Sundays at 2 AM
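
For reference, the five cron fields read left to right:
# ┌───────── minute (0-59)
# │ ┌─────── hour (0-23)
# │ │ ┌───── day of month (1-31)
# │ │ │ ┌─── month (1-12)
# │ │ │ │ ┌─ day of week (0-6 or SUN-SAT)
# │ │ │ │ │
# 0 9 * * *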

System Provider 🔧

Internal Atlas events:
signals:
  session-complete:
    provider: "system"
    description: "Triggered when any session completes"
    
  memory-threshold:
    provider: "system"
    description: "Memory usage above threshold"
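
System signals plug into jobs like any other trigger. A sketch (the job and agent names are illustrative):
jobs:
  archive-session:
    triggers:
      - signal: "session-complete"
    execution:
      agents:
        - id: "archiver"
          input_source: "signal"  # receives the completed session's event data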

File Provider 📁

Watch for file system changes:
signals:
  config-changed:
    provider: "file"
    description: "Configuration file updated"
    config:
      path: "./config"
      patterns: ["*.yml", "*.yaml"]
      events: ["create", "modify", "delete"]
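
A job can then react to those change events. A sketch, assuming the provider includes the changed path and event type in the payload:
jobs:
  revalidate-config:
    triggers:
      - signal: "config-changed"
    execution:
      agents:
        - id: "config-checker"
          input_source: "signal"  # assumed payload: changed path + event type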

Signal Data

Signals can carry data (a payload) that jobs can use:

CLI Data

atlas signal trigger analyze --data '{
  "file": "sales_data.csv",
  "period": "Q4",
  "format": "summary"
}'

Webhook Data

Webhook providers automatically include the request body:
{
  "repository": "tempestdx/atlas",
  "action": "push",
  "commits": [...]
}

Accessing Signal Data

In your job configuration:
jobs:
  process-data:
    triggers:
      - signal: "analyze"
    execution:
      agents:
        - id: "analyst"
          input_source: "signal"  # Agent receives signal data
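
Putting the pieces together, triggering the signal hands its payload straight to the agent:
atlas signal trigger analyze --data '{"file": "data.csv"}'
# The "analyst" agent's input is the signal payload: {"file": "data.csv"}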

Signal Conditions

Control when jobs run with conditions:
jobs:
  handle-pr:
    triggers:
      - signal: "github-webhook"
        condition:
          type: "json_logic"
          logic:
            "and": [
              {"==": [{"var": "action"}, "opened"]},
              {"in": ["bug", {"var": "labels"}]}
            ]
This job only runs when:
  • The action is “opened”
  • The PR has a “bug” label
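
For example (payloads simplified to the fields the condition reads):
{"action": "opened", "labels": ["bug", "ui"]}   # matches: job runs
{"action": "opened", "labels": ["feature"]}     # no "bug" label: skipped
{"action": "closed", "labels": ["bug"]}         # wrong action: skipped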

Signal Schemas

Define expected data structure:
signals:
  create-task:
    provider: "cli"
    description: "Create a new task"
    schema:
      type: "object"
      properties:
        title:
          type: "string"
          minLength: 1
        priority:
          type: "string"
          enum: ["low", "medium", "high"]
        due_date:
          type: "string"
          format: "date"
      required: ["title", "priority"]
Atlas validates signal data against the schema before triggering jobs.
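
With the schema above, for example:
# Passes validation
atlas signal trigger create-task --data '{"title": "Fix login flow", "priority": "high"}'

# Should be rejected: "priority" is not one of low/medium/high
atlas signal trigger create-task --data '{"title": "Fix login flow", "priority": "urgent"}'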

Advanced Signal Patterns

Signal Chaining

One job can trigger another signal:
jobs:
  analyze-and-report:
    triggers:
      - signal: "analyze-data"
    execution:
      agents:
        - id: "analyzer"
      on_success:
        trigger_signal: "send-report"
        with_data: 
          from: "previous_output"
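
For the chain to complete, send-report needs its own signal definition and a listening job. A sketch; treating the chained signal as a system event is an assumption:
signals:
  send-report:
    provider: "system"
    description: "Deliver a finished analysis report"

jobs:
  deliver-report:
    triggers:
      - signal: "send-report"
    execution:
      agents:
        - id: "reporter"
          input_source: "signal"  # receives the analyzer's output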

Multiple Triggers

Jobs can respond to multiple signals:
jobs:
  emergency-response:
    triggers:
      - signal: "alert-high"
      - signal: "alert-critical"
      - signal: "manual-emergency"
    execution:
      agents:
        - id: "incident-handler"

Signal Transformation

Transform signal data before job execution:
signals:
  raw-webhook:
    provider: "webhook"
    transform:
      # Extract just what we need
      user_id: "$.data.user.id"
      action: "$.data.action"
      timestamp: "$.metadata.received_at"
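
Given a raw webhook body like this (values illustrative):
{
  "data": {"user": {"id": "u-123"}, "action": "login"},
  "metadata": {"received_at": "2024-01-15T09:30:00Z"}
}
the job receives only the extracted fields:
{
  "user_id": "u-123",
  "action": "login",
  "timestamp": "2024-01-15T09:30:00Z"
}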

Monitoring Signals

List Available Signals

# In current workspace
atlas signal list

# In specific workspace
atlas signal list --workspace my-workspace

View Signal History

# See recent signal triggers
atlas signal history

# Filter by signal name
atlas signal history --signal analyze-data

Debug Signal Processing

# Dry run - see what would happen
atlas signal trigger analyze --dry-run

# Verbose output
atlas signal trigger analyze --verbose

Best Practices

1. Descriptive Names

Use clear, action-oriented names:
  • ✅ analyze-customer-data
  • ❌ signal1

2. Document Purpose

Always include descriptions:
signals:
  deploy-preview:
    provider: "webhook"
    description: "Deploy PR preview when checks pass"

3. Validate Input

Use schemas for reliability:
schema:
  type: "object"
  properties:
    required_field:
      type: "string"
  required: ["required_field"]

4. Handle Errors

Plan for invalid data:
jobs:
  safe-processor:
    triggers:
      - signal: "process"
        condition:
          type: "json_logic"
          logic:
            "!=": [{"var": "data"}, null]

5. Security

For webhooks, always use authentication:
config:
  auth:
    type: "secret"
    secret_env: "WEBHOOK_SECRET"

Next Steps