Creating Signals

Signals are events that trigger jobs in Atlas. They're the starting point for automation, whether triggered manually, by webhooks, on schedules, or by system events. This guide covers how to define, configure, and test each type of signal.

Signal Anatomy

Every signal has a unique name, a provider, and a description; provider-specific config and a data schema are optional:
signals:
  my-signal:                    # Unique identifier
    provider: "cli"            # How it's triggered
    description: "What it does" # Human-readable description
    config:                    # Provider-specific configuration
      # Optional configuration
    schema:                    # Optional data validation
      # JSON Schema for payload

CLI Signals

The most common signal type for manual triggers:

Basic CLI Signal

signals:
  analyze:
    provider: "cli"
    description: "Analyze data on demand"
Trigger it:
atlas signal trigger analyze

With Data Schema

Define expected data structure:
signals:
  process-file:
    provider: "cli"
    description: "Process a specific file"
    schema:
      type: "object"
      properties:
        filename:
          type: "string"
          pattern: "^[\\w\\-. ]+$"  # Safe filename pattern
        format:
          type: "string"
          enum: ["csv", "json", "xml"]
        options:
          type: "object"
          properties:
            skip_header:
              type: "boolean"
            encoding:
              type: "string"
              default: "utf-8"
      required: ["filename", "format"]
Trigger with data:
atlas signal trigger process-file --data '{
  "filename": "sales_data.csv",
  "format": "csv",
  "options": {
    "skip_header": true
  }
}'

Webhook Signals

Receive triggers from external systems:

Basic Webhook

signals:
  external-trigger:
    provider: "webhook"
    description: "Triggered by external system"
    config:
      path: "/webhooks/external"
      method: "POST"  # POST, GET, PUT, etc.
This creates an endpoint at:
http://localhost:8080/webhooks/external
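You can exercise the endpoint right away with curl (assuming the daemon is running locally on the default port shown above):
curl -X POST http://localhost:8080/webhooks/external \
  -H "Content-Type: application/json" \
  -d '{"source": "external-system"}'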

Authenticated Webhook

Add security to your webhooks:
signals:
  github-webhook:
    provider: "webhook"
    description: "GitHub repository events"
    config:
      path: "/webhooks/github"
      method: "POST"
      auth:
        type: "secret"
        header: "X-Hub-Signature-256"
        secret_env: "GITHUB_WEBHOOK_SECRET"
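For local testing you need a correctly signed request. GitHub computes the X-Hub-Signature-256 header as an HMAC-SHA256 of the raw request body, keyed with the shared secret and prefixed with "sha256=". The sketch below builds the same signature with openssl and curl, assuming Atlas verifies the header against that scheme and that GITHUB_WEBHOOK_SECRET is exported in your shell:
BODY='{"action": "opened"}'
SIG="sha256=$(printf '%s' "$BODY" | openssl dgst -sha256 -hmac "$GITHUB_WEBHOOK_SECRET" | sed 's/^.* //')"

curl -X POST http://localhost:8080/webhooks/github \
  -H "Content-Type: application/json" \
  -H "X-Hub-Signature-256: $SIG" \
  -d "$BODY"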

Webhook with Validation

Validate incoming data:
signals:
  order-webhook:
    provider: "webhook"
    description: "New order notifications"
    config:
      path: "/webhooks/orders"
      method: "POST"
      headers:
        required:
          - "X-API-Key"
          - "Content-Type"
    schema:
      type: "object"
      properties:
        order_id:
          type: "string"
        customer:
          type: "object"
          properties:
            id: { type: "string" }
            email: { type: "string", format: "email" }
        items:
          type: "array"
          minItems: 1
        total:
          type: "number"
          minimum: 0
      required: ["order_id", "customer", "items", "total"]

Scheduled Signals

Run jobs on a schedule:

Cron-based Schedules

signals:
  # Every day at 9 AM
  daily-report:
    provider: "schedule"
    description: "Generate daily reports"
    config:
      schedule: "0 9 * * *"
      timezone: "America/New_York"
  
  # Every Monday at 8 AM
  weekly-summary:
    provider: "schedule"
    description: "Weekly team summary"
    config:
      schedule: "0 8 * * 1"
      timezone: "UTC"
  
  # Every hour on the hour
  hourly-check:
    provider: "schedule"
    description: "Hourly system check"
    config:
      schedule: "0 * * * *"
  
  # Complex schedule - weekdays at 6 PM
  weekday-digest:
    provider: "schedule"
    description: "Weekday end-of-day digest"
    config:
      schedule: "0 18 * * 1-5"
      timezone: "America/Los_Angeles"

Schedule Patterns

Common cron patterns:
# ┌───────────── minute (0 - 59)
# │ ┌───────────── hour (0 - 23)
# │ │ ┌───────────── day of month (1 - 31)
# │ │ │ ┌───────────── month (1 - 12)
# │ │ │ │ ┌───────────── day of week (0 - 6, Sunday = 0)
# │ │ │ │ │
# * * * * *

"0 0 * * *"      # Daily at midnight
"0 */4 * * *"    # Every 4 hours
"0 9-17 * * *"   # Every hour 9 AM - 5 PM
"*/15 * * * *"   # Every 15 minutes
"0 0 1 * *"      # First day of month
"0 0 * * 0"      # Every Sunday

Scheduled Signal with Data

Include data with scheduled triggers:
signals:
  scheduled-backup:
    provider: "schedule"
    description: "Scheduled backup task"
    config:
      schedule: "0 2 * * *"  # 2 AM daily
      timezone: "UTC"
      data:
        backup_type: "incremental"
        retention_days: 30
        notify: ["ops@example.com"]
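To preview the payload this signal would hand to its job without waiting for the 2 AM run, a manual dry run can help (this assumes scheduled signals accept manual triggers the same way CLI signals do):
atlas signal trigger scheduled-backup --dry-run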

File System Signals

React to file system changes:

Basic File Watching

signals:
  config-changed:
    provider: "file"
    description: "Configuration file updated"
    config:
      path: "./config"
      patterns: ["*.yml", "*.yaml"]
      events: ["create", "modify", "delete"]

Advanced File Monitoring

signals:
  data-uploaded:
    provider: "file"
    description: "New data file uploaded"
    config:
      path: "./uploads"
      patterns: 
        - "*.csv"
        - "*.xlsx"
        - "data_*.json"
      events: ["create"]
      recursive: true  # Watch subdirectories
      ignore_patterns:
        - "*.tmp"
        - "~*"  # Temporary files
      debounce: "5s"  # Wait for file to stabilize
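A quick way to confirm the watcher reacts to the right files is to drop one matching and one ignored file into the watched directory, then check the logs:
# Matches "data_*.json" under ./uploads, so it should fire the signal
echo '{"rows": []}' > uploads/data_test.json

# Matches the "*.tmp" ignore pattern, so it should be skipped
touch uploads/data_test.tmp

# See what the daemon picked up
atlas logs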

System Signals

Internal Atlas events:

Session Events

signals:
  session-completed:
    provider: "system"
    description: "Any session completes"
    config:
      event: "session.complete"
  
  session-failed:
    provider: "system"
    description: "Session fails"
    config:
      event: "session.failed"
      filters:
        job_pattern: "critical-*"  # Only critical jobs

Memory Events

signals:
  memory-threshold:
    provider: "system"
    description: "Memory usage high"
    config:
      event: "memory.threshold"
      threshold: 0.8  # 80% of limit

Advanced Signal Features

Signal Composition

Combine multiple trigger conditions:
signals:
  multi-trigger:
    provider: "composite"
    description: "Multiple trigger conditions"
    config:
      mode: "any"  # or "all"
      signals:
        - scheduled-check
        - manual-override
        - alert-received

Signal Transformation

Transform signal data before job execution:
signals:
  normalized-webhook:
    provider: "webhook"
    description: "Webhook with data transformation"
    config:
      path: "/webhooks/normalized"
    transform:
      # JSONPath transformations
      order_id: "$.data.order.id"
      customer_email: "$.data.customer.email"
      items: "$.data.order.line_items[*].sku"
      total: "$.data.order.grand_total"

Conditional Signals

Signals that filter their own triggers:
signals:
  business-hours-only:
    provider: "webhook"
    description: "Only during business hours"
    config:
      path: "/webhooks/business"
    conditions:
      - type: "time_window"
        start: "09:00"
        end: "17:00"
        timezone: "America/New_York"
        days: ["Mon", "Tue", "Wed", "Thu", "Fri"]

Rate-Limited Signals

Prevent signal flooding:
signals:
  rate-limited:
    provider: "webhook"
    description: "Rate-limited webhook"
    config:
      path: "/webhooks/limited"
      rate_limit:
        max_per_minute: 10
        max_per_hour: 100
        burst_size: 20
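To observe the limiter in action, send a burst larger than burst_size and watch the response codes change once requests start being rejected (the exact rejection status depends on the webhook provider):
for i in $(seq 1 25); do
  curl -s -o /dev/null -w "%{http_code}\n" \
    -X POST http://localhost:8080/webhooks/limited \
    -H "Content-Type: application/json" \
    -d "{\"attempt\": $i}"
done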

Signal Patterns

1. Event-Driven Pattern

React to external events:
signals:
  # Customer action
  customer-signup:
    provider: "webhook"
    description: "New customer signup"
    config:
      path: "/events/signup"
  
  # Order lifecycle
  order-placed:
    provider: "webhook"
    description: "New order placed"
    config:
      path: "/events/order-placed"
  
  order-shipped:
    provider: "webhook"
    description: "Order shipped"
    config:
      path: "/events/order-shipped"

2. Monitoring Pattern

System monitoring signals:
signals:
  # Health checks
  health-check:
    provider: "schedule"
    description: "Regular health check"
    config:
      schedule: "*/5 * * * *"  # Every 5 minutes
  
  # Metrics collection
  collect-metrics:
    provider: "schedule"
    description: "Collect system metrics"
    config:
      schedule: "* * * * *"  # Every minute
      data:
        metrics: ["cpu", "memory", "disk", "network"]

3. Batch Processing Pattern

Process data in batches:
signals:
  # Hourly batch
  hourly-batch:
    provider: "schedule"
    description: "Process hourly batch"
    config:
      schedule: "0 * * * *"
      data:
        batch_type: "incremental"
        source: "queue"
  
  # Nightly batch
  nightly-batch:
    provider: "schedule"
    description: "Nightly full processing"
    config:
      schedule: "0 2 * * *"
      data:
        batch_type: "full"
        cleanup: true

4. Manual Override Pattern

Allow manual intervention:
signals:
  # Automated trigger
  auto-deploy:
    provider: "webhook"
    description: "Automated deployment"
    config:
      path: "/deploy/auto"
  
  # Manual override
  manual-deploy:
    provider: "cli"
    description: "Manual deployment override"
    schema:
      type: "object"
      properties:
        environment:
          type: "string"
          enum: ["dev", "staging", "prod"]
        version:
          type: "string"
        skip_tests:
          type: "boolean"
          default: false
      required: ["environment", "version"]

Testing Signals

Dry Run Testing

Test without executing jobs:
# Test signal trigger
atlas signal trigger my-signal --dry-run

# Test with data
atlas signal trigger my-signal --data '{"test": true}' --dry-run

Signal Validation

Validate signal configuration:
# Validate all signals
atlas signal validate

# Validate specific signal
atlas signal validate my-signal

Webhook Testing

Test webhooks locally:
# Send test webhook
curl -X POST http://localhost:8080/webhooks/my-webhook \
  -H "Content-Type: application/json" \
  -d '{"test": "data"}'

# Test with authentication
curl -X POST http://localhost:8080/webhooks/secure \
  -H "X-API-Key: my-secret-key" \
  -H "Content-Type: application/json" \
  -d '{"test": "data"}'

Best Practices

1. Clear Naming

Use descriptive, action-oriented names:
# Good
signals:
  process-customer-order:
  generate-monthly-report:
  sync-inventory-data:

# Avoid
signals:
  signal1:
  webhook:
  thing:

2. Comprehensive Descriptions

Write descriptions that tell users what triggers the signal and what it does:
signals:
  deploy-preview:
    provider: "webhook"
    description: |
      Triggered by PR comments containing "/deploy-preview".
      Creates a preview deployment for the PR branch.

3. Schema Validation

Always validate input data:
schema:
  type: "object"
  properties:
    # Be specific about types
    user_id:
      type: "string"
      pattern: "^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$"
    
    # Use enums for fixed values
    action:
      type: "string"
      enum: ["create", "update", "delete"]
    
    # Set boundaries
    priority:
      type: "integer"
      minimum: 1
      maximum: 5
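Attached to a signal, a schema like this rejects bad input before any job starts. With a hypothetical CLI signal named update-user that uses the schema above, a disallowed action value is refused at trigger time:
# Rejected: "archive" is not in the action enum
atlas signal trigger update-user --data '{
  "user_id": "123e4567-e89b-12d3-a456-426614174000",
  "action": "archive",
  "priority": 3
}'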

4. Security First

Secure your signals:
# Use authentication
config:
  auth:
    type: "bearer"
    token_env: "WEBHOOK_TOKEN"

# Validate sources
config:
  allowed_ips: ["192.168.1.0/24"]
  
# Rate limit
config:
  rate_limit:
    max_per_minute: 100
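For the bearer-token example above, a matching test request might look like this (assuming a webhook defined at /webhooks/secure and that Atlas checks the standard Authorization header):
curl -X POST http://localhost:8080/webhooks/secure \
  -H "Authorization: Bearer $WEBHOOK_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"test": "data"}'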

5. Error Handling

Plan for failure:
signals:
  resilient-webhook:
    provider: "webhook"
    config:
      path: "/webhooks/resilient"
      timeout: "30s"
      retry_policy:
        max_attempts: 3
        backoff: "exponential"
      error_response:
        status: 503
        body: "Service temporarily unavailable"

Troubleshooting

Signal Not Triggering

  1. Check signal exists: atlas signal list
  2. Verify provider configuration
  3. Check logs: atlas logs
  4. Test with dry run

Webhook Not Receiving

  1. Verify daemon is running: atlas daemon status
  2. Check port availability
  3. Test with curl
  4. Check firewall rules

Schedule Not Running

  1. Verify cron syntax
  2. Check timezone settings
  3. Ensure daemon is running continuously
  4. Check system time

Next Steps