ConversationAgent

The ConversationAgent is Atlas’s built-in system agent for creating interactive, context-aware conversations. It’s well suited to chatbots, interactive assistants, and any scenario that requires back-and-forth dialogue.

Quick Start

Add a conversation agent to your workspace:
agents:
  assistant:
    type: "system"
    agent: "conversation"
    config:
      model: "claude-3-5-sonnet-20241022"
      system_prompt: |
        You are a helpful AI assistant. Be friendly, clear, and concise.
        Help users with their questions and tasks.

Features

🧠 Advanced Reasoning

Enable step-by-step reasoning for complex problems:
config:
  use_reasoning: true
  max_reasoning_steps: 10
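
The effect of `max_reasoning_steps` can be sketched as a bounded reasoning loop. This is an illustrative model only; `solve_step` is a hypothetical stand-in for a single model call and is not part of Atlas.

```python
def reason(problem, max_reasoning_steps=10):
    """Run a bounded step-by-step reasoning loop.

    Each iteration asks the model for one more thought; the loop stops
    when the model signals it is done or the step budget runs out.
    """
    steps = []
    for _ in range(max_reasoning_steps):
        thought, done = solve_step(problem, steps)
        steps.append(thought)
        if done:
            break
    return steps

# Hypothetical single step: declare the problem solved after three thoughts.
def solve_step(problem, steps):
    thought = f"step {len(steps) + 1} on {problem!r}"
    return thought, len(steps) + 1 >= 3

steps = reason("plan a migration", max_reasoning_steps=10)
```

The budget caps cost and latency: even if the model never signals completion, the loop terminates after `max_reasoning_steps` iterations.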

💾 Conversation Memory

Maintains context across messages:
config:
  memory_enabled: true
  max_conversation_length: 100
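
A minimal sketch of what `max_conversation_length` implies, assuming a sliding-window policy that drops the oldest messages first (the class and its method names are illustrative, not Atlas APIs):

```python
from collections import deque

class ConversationMemory:
    """Keep at most `max_conversation_length` messages, dropping the oldest."""

    def __init__(self, max_conversation_length=100):
        self.messages = deque(maxlen=max_conversation_length)

    def add(self, role, content):
        self.messages.append({"role": role, "content": content})

    def context(self):
        # The context sent to the model on the next turn.
        return list(self.messages)

memory = ConversationMemory(max_conversation_length=3)
for i in range(5):
    memory.add("user", f"message {i}")
```

With a limit of 3, only the three most recent messages survive; older ones are evicted automatically.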

🛠️ Tool Integration

Give your conversation agent capabilities:
config:
  tools: ["web-search", "calculator", "file-system"]

📡 Streaming Responses

Real-time streaming for better UX:
config:
  streaming: true
  stream_delimiter: "\n"

Configuration Options

Basic Configuration

agents:
  chat-bot:
    type: "system"
    agent: "conversation"
    config:
      # Model selection
      model: "claude-3-5-sonnet-20241022"
      
      # Personality and behavior
      system_prompt: "Your agent's personality and instructions"
      
      # Temperature (0.0-1.0)
      temperature: 0.7
      
      # Max response length
      max_tokens: 2000

Advanced Configuration

config:
  # Reasoning capabilities
  use_reasoning: true
  max_reasoning_steps: 15
  
  # Memory settings
  memory_enabled: true
  max_conversation_length: 50
  context_window: 10
  
  # Tool configuration
  tools: ["web-search", "code-interpreter"]
  tool_choice: "auto"  # or "none", "required"
  
  # Streaming
  streaming: true
  stream_delimiter: "\n"
  
  # Safety
  content_filter: true
  max_retries: 3
  timeout: "120s"

Usage Patterns

1. Simple Chatbot

Basic conversational assistant:
agents:
  chatbot:
    type: "system"
    agent: "conversation"
    config:
      model: "claude-3-5-haiku-20241022"
      system_prompt: |
        You are a friendly chatbot. Answer questions,
        provide information, and engage in casual conversation.

signals:
  chat:
    provider: "cli"
    description: "Start a chat session"

jobs:
  chat-session:
    triggers:
      - signal: "chat"
    execution:
      agents:
        - id: "chatbot"
          input_source: "signal"

2. Technical Support Agent

Specialized support with tools:
agents:
  support-agent:
    type: "system"
    agent: "conversation"
    config:
      model: "claude-3-5-sonnet-20241022"
      system_prompt: |
        You are a technical support specialist for Atlas.
        Help users with:
        - Installation issues
        - Configuration problems
        - Debugging errors
        - Best practices
        
        Always be patient and provide step-by-step solutions.
      
      # Enable reasoning for complex issues
      use_reasoning: true
      
      # Give access to documentation
      tools: ["documentation-search", "code-examples"]

3. Interactive Tutorial Guide

Educational assistant with memory:
agents:
  tutor:
    type: "system"
    agent: "conversation"
    config:
      model: "claude-3-5-sonnet-20241022"
      system_prompt: |
        You are an Atlas tutorial guide. Lead users through
        interactive tutorials, remembering their progress.
        
        Track what they've learned and adapt your teaching.
      
      memory_enabled: true
      context_window: 20  # Remember more history
      
      # Structured responses
      response_format: |
        📚 **Lesson**: {topic}
        🎯 **Objective**: {goal}
        📝 **Instructions**: {steps}
        ✅ **Next**: {next_action}

4. Code Assistant with Reasoning

Advanced coding helper:
agents:
  code-assistant:
    type: "system" 
    agent: "conversation"
    config:
      model: "claude-3-5-sonnet-20241022"
      system_prompt: |
        You are an expert programming assistant.
        Help with coding tasks, debugging, and architecture.
        
        Always:
        - Think through problems step-by-step
        - Provide working code examples
        - Explain your reasoning
        - Suggest best practices
      
      use_reasoning: true
      max_reasoning_steps: 20
      
      tools: ["code-interpreter", "file-system", "web-search"]
      
      # Code-specific formatting
      code_block_style: "```language"
      syntax_highlighting: true

Working with Conversations

Starting a Conversation

Trigger a session with an initial message:
atlas signal trigger chat --data '{
  "message": "Hello! I need help with my project",
  "conversation_id": "project-help-123"
}'

Continuing a Conversation

Include the same conversation ID to maintain context:
atlas signal trigger chat --data '{
  "message": "Can you explain that in more detail?",
  "conversation_id": "project-help-123"
}'

Conversation Management

The agent handles:
  • Message history tracking
  • Context preservation
  • Memory updates
  • Session continuity
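
The continuity behavior above can be sketched as an in-memory store keyed by `conversation_id`. This is a toy model, assuming history is persisted between signals; the echo reply stands in for the real model call.

```python
# Conversation histories keyed by conversation_id.
conversations = {}

def handle_signal(data):
    """Append the incoming message to its conversation and reply."""
    cid = data.get("conversation_id", "default")
    history = conversations.setdefault(cid, [])
    history.append({"role": "user", "content": data["message"]})
    reply = f"echo: {data['message']}"  # stand-in for the model call
    history.append({"role": "assistant", "content": reply})
    return reply

handle_signal({"message": "Hello!", "conversation_id": "project-help-123"})
handle_signal({"message": "More detail?", "conversation_id": "project-help-123"})
```

Reusing the same `conversation_id` accumulates history in one place, which is why a changed or missing ID shows up as "lost context".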

Advanced Features

1. Reasoning Mode

When enabled, the agent thinks through problems:
config:
  use_reasoning: true
  reasoning_style: "analytical"  # or "creative", "systematic"
  show_reasoning: false  # Hide internal thoughts
Example Output with Reasoning:
🤔 Thinking through your question...

I need to:
1. Understand the requirements
2. Consider different approaches
3. Recommend the best solution

Based on my analysis, here's what I suggest...

2. Tool Usage

Extend capabilities with tools:
config:
  tools: 
    - "web-search"      # Search the internet
    - "calculator"      # Perform calculations
    - "code-runner"     # Execute code
    - "file-manager"    # Read/write files
    - "api-caller"      # Make API requests
Tool Execution Flow:
  1. Agent recognizes need for tool
  2. Prepares tool arguments
  3. Executes tool call
  4. Integrates results into response
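
The four steps above can be sketched as a dispatch loop. The `calc:` prefix heuristic and the tool table are illustrative assumptions; a real agent would let the model decide when to call a tool.

```python
# Available tools, by name. The calculator evaluates a bare arithmetic
# expression with builtins disabled.
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def respond(message):
    # 1. Recognize the need for a tool (toy heuristic).
    if message.startswith("calc:"):
        # 2. Prepare tool arguments.
        args = message.removeprefix("calc:").strip()
        # 3. Execute the tool call.
        result = TOOLS["calculator"](args)
        # 4. Integrate results into the response.
        return f"The answer is {result}."
    return "No tool needed."

answer = respond("calc: 6 * 7")
```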

3. Streaming Responses

Enable for real-time interaction:
config:
  streaming: true
  stream_chunk_size: 10  # Characters per chunk
  stream_delimiter: " "   # Word boundaries
Benefits:
  • Immediate feedback
  • Better perceived performance
  • Interruptible responses
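
What `stream_chunk_size` means in practice can be sketched with a generator that yields the response in fixed-size pieces, as a streaming transport might deliver them (the function is illustrative, not an Atlas API):

```python
def stream(text, stream_chunk_size=10):
    """Yield the response in chunks of up to `stream_chunk_size` characters."""
    for i in range(0, len(text), stream_chunk_size):
        yield text[i:i + stream_chunk_size]

chunks = list(stream("Streaming gives immediate feedback.", stream_chunk_size=10))
reassembled = "".join(chunks)
```

The consumer can render each chunk as it arrives and stop iterating early to interrupt a response; concatenating all chunks recovers the full text.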

4. Custom Prompting

Fine-tune behavior with detailed prompts:
config:
  system_prompt: |
    Core Identity: {agent_role}
    
    Communication Style:
    - {tone_guidelines}
    - {language_preferences}
    
    Capabilities:
    - {skill_list}
    
    Constraints:
    - {limitations}
    
    Response Format:
    - {structure_rules}
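
One way to render a placeholder template like the one above, assuming `str.format`-style braces (the rendering mechanism is an assumption; the field names mirror the template):

```python
# Abbreviated version of the prompt template above.
TEMPLATE = """Core Identity: {agent_role}

Communication Style:
- {tone_guidelines}

Constraints:
- {limitations}
"""

prompt = TEMPLATE.format(
    agent_role="Python tutor for beginners",
    tone_guidelines="encouraging and concise",
    limitations="never assume prior knowledge",
)
```

Keeping the structure in a template and the specifics in named fields makes it easy to stamp out several agents that share a format but differ in role.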

Integration Examples

With Web Interface

// Frontend WebSocket connection
const ws = new WebSocket('ws://localhost:8080/conversation');

ws.send(JSON.stringify({
  signal: 'chat',
  data: {
    message: userInput,
    conversation_id: sessionId
  }
}));

ws.onmessage = (event) => {
  const response = JSON.parse(event.data);
  displayMessage(response.content);
};

With Slack

signals:
  slack-message:
    provider: "webhook"
    config:
      path: "/slack/events"

agents:
  slack-bot:
    type: "system"
    agent: "conversation"
    config:
      model: "claude-3-5-haiku-20241022"
      system_prompt: "You are a helpful Slack bot..."

With Voice Interfaces

agents:
  voice-assistant:
    type: "system"
    agent: "conversation"
    config:
      # Optimize for voice
      system_prompt: |
        Provide concise, spoken-friendly responses.
        Avoid complex formatting or long lists.
      
      max_tokens: 150  # Shorter responses
      response_style: "conversational"

Best Practices

1. Clear System Prompts

Be specific about the agent’s role:
# Good
system_prompt: |
  You are a Python tutor specializing in beginners.
  Focus on simple explanations and practical examples.
  Always encourage and never assume prior knowledge.

# Too vague
system_prompt: "You are a helpful assistant."

2. Appropriate Model Selection

Match the model to the use case:
  • Haiku: Quick responses, simple queries
  • Sonnet: General purpose, balanced performance
  • Opus: Complex reasoning, critical accuracy

3. Memory Management

Configure memory based on needs:
# Short interactions
config:
  max_conversation_length: 10
  
# Long-term support
config:
  max_conversation_length: 100
  memory_cleanup_strategy: "sliding_window"

4. Error Handling

Plan for edge cases:
config:
  fallback_responses:
    timeout: "I'm taking too long to think. Let me try again."
    error: "I encountered an issue. Could you rephrase that?"
    no_context: "I don't have enough context. Can you provide more details?"
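
Selecting a canned response by failure kind can be sketched as a simple lookup with a safe default. The mapping mirrors the `fallback_responses` config above; the function name is illustrative.

```python
# Fallback messages, mirroring the config keys above.
FALLBACKS = {
    "timeout": "I'm taking too long to think. Let me try again.",
    "error": "I encountered an issue. Could you rephrase that?",
    "no_context": "I don't have enough context. Can you provide more details?",
}

def fallback_for(kind):
    """Return the canned message for a failure kind, defaulting to 'error'."""
    return FALLBACKS.get(kind, FALLBACKS["error"])

msg = fallback_for("timeout")
```

Defaulting unknown failure kinds to the generic error message guarantees the user always gets some reply rather than silence.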

5. Testing Conversations

Test different scenarios:
# Test error handling
atlas signal trigger chat --data '{"message": null}'

# Test long input
atlas signal trigger chat --data '{"message": "... very long text ..."}'

# Test conversation recovery
atlas signal trigger chat --data '{"conversation_id": "old-session"}'

Troubleshooting

Common Issues

  1. No response: Check model API key and network
  2. Truncated responses: Increase max_tokens
  3. Lost context: Verify conversation_id is consistent
  4. Slow responses: Consider using a faster model
  5. Tool errors: Validate tool permissions and availability

Debug Mode

Enable detailed logging:
config:
  debug: true
  log_level: "verbose"
  trace_tool_calls: true

Next Steps