Requirements

Minimum requirements for a single-server deployment:

Resource   Minimum      Recommended
CPU        2 cores      4+ cores
RAM        4 GB         8+ GB
Disk       20 GB        50+ GB SSD
Network    Broadband    Low-latency connection

Browserless (headless Chrome) is the most resource-intensive component. Each concurrent browser session uses roughly 200-500 MB of RAM. The default configuration allows 10 concurrent sessions.
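As a quick capacity check, you can estimate worst-case browser memory from those numbers (10 sessions is the stated default; 500 MB is the upper end of the per-session range):

```shell
# Back-of-envelope RAM estimate for concurrent headless Chrome sessions.
SESSIONS=10          # default concurrent session limit
MB_PER_SESSION=500   # upper end of the 200-500 MB per-session range
TOTAL_MB=$((SESSIONS * MB_PER_SESSION))
echo "Worst-case browser RAM: ${TOTAL_MB} MB (~$((TOTAL_MB / 1024)) GB)"
```

At the default session limit, browser sessions alone can approach 5 GB, which is one reason the 8+ GB recommendation exists alongside the 4 GB minimum.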

The host itself needs:

  • Docker 24 or later
  • Docker Compose v2
  • Linux (recommended for production; Landlock sandboxing only works on Linux)
    • macOS works for development, but CLI sandboxing is limited
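A preflight check for these prerequisites might look like the following sketch (it assumes `docker version` reports a `MAJOR.MINOR.PATCH` version string):

```shell
# Preflight: verify Docker >= 24 and the Compose v2 plugin are available.
major_version() {
  # Extract the leading major number from a version string like "24.0.7".
  echo "$1" | cut -d. -f1 | tr -dc '0-9'
}

DOCKER_VERSION=$(docker version --format '{{.Server.Version}}' 2>/dev/null || echo "0")
if [ "$(major_version "$DOCKER_VERSION")" -ge 24 ]; then
  echo "Docker $DOCKER_VERSION: OK"
else
  echo "Docker 24+ required (found: $DOCKER_VERSION)" >&2
fi

# Compose v2 is the `docker compose` plugin, not the legacy docker-compose binary.
docker compose version >/dev/null 2>&1 && echo "Compose v2: OK" || echo "Compose v2 missing" >&2
```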

The engine needs outbound access to:

  • Your LLM provider’s API (Anthropic, OpenAI, etc.)
  • The internet (for web search and browsing tools)
  • Twilio API (if using voice features)
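One way to sanity-check outbound reachability before first boot is a small TCP probe (a sketch using bash's `/dev/tcp`; the hostnames are examples for the providers listed above, not a definitive list):

```shell
# Check that TLS port 443 is reachable on each required external endpoint.
check_https() {
  if timeout 5 bash -c "exec 3<>/dev/tcp/$1/443" 2>/dev/null; then
    echo "$1: reachable"
  else
    echo "$1: unreachable"
  fi
}

check_https api.anthropic.com   # LLM provider
check_https api.twilio.com      # only needed for voice features
```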

Inbound, you need to expose:

  • Port 3000 (frontend) or your reverse proxy port
  • Port 3001 (backend API), or proxy it through the same domain as the frontend
  • A public URL (e.g., via ngrok) if using Twilio voice callbacks
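In Compose terms, the inbound side might look like this hypothetical excerpt (service names and port mappings are assumptions to adapt, not the project's actual compose file):

```yaml
# Hypothetical excerpt - adjust service names and mappings to your deployment.
services:
  frontend:
    ports:
      - "3000:3000"   # or publish only your reverse proxy and keep this internal
  backend:
    ports:
      - "3001:3001"   # backend API; alternatively proxy it through the frontend domain
```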

You need an API key from at least one supported LLM provider:

  • Anthropic (Claude), recommended
  • OpenAI (GPT models)
  • Any OpenAI-compatible API endpoint

The API key is set via the ANTHROPIC_API_KEY environment variable (or your provider's equivalent). Model groups in the configuration map to specific models and providers.
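For example, with Anthropic as the provider (the key value below is a placeholder; substitute your real key):

```shell
# Set the provider key in the environment (or in your .env file).
export ANTHROPIC_API_KEY="sk-ant-..."   # placeholder - use your real key

# Fail fast if the key is missing before starting the stack.
if [ -z "${ANTHROPIC_API_KEY:-}" ]; then
  echo "ANTHROPIC_API_KEY is not set" >&2
  exit 1
fi
echo "ANTHROPIC_API_KEY is set"
```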

Optional services:

  • Twilio account: required only for voice call features
  • ngrok: useful for exposing your local instance to the internet (e.g., for Twilio callbacks during development)
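During development, `ngrok http 3000` gives you a public forwarding URL; the Twilio callback URL is that base plus your webhook path. A sketch (the hostname and the `/voice` path are hypothetical examples, not routes the engine is known to expose):

```shell
# Start a tunnel to the frontend during development:
#   ngrok http 3000
# Then derive the Twilio webhook URL from the forwarding URL ngrok prints.
webhook_url() {
  # $1: public base URL from ngrok, $2: callback path
  echo "${1%/}${2}"
}
webhook_url "https://abc123.ngrok-free.app/" "/voice"
```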