# Requirements
## Hardware

Minimum requirements for a single-server deployment:
| Resource | Minimum | Recommended |
|---|---|---|
| CPU | 2 cores | 4+ cores |
| RAM | 4 GB | 8+ GB |
| Disk | 20 GB | 50+ GB SSD |
| Network | Broadband | Low-latency connection |
Browserless (headless Chrome) is the most resource-intensive component. Each concurrent browser session uses roughly 200-500 MB of RAM. The default configuration allows 10 concurrent sessions.
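If you need to stay within the RAM minimum, you can lower the session cap. A sketch for a Compose override, assuming a browserless v2 image and its `CONCURRENT` environment variable (older v1 images use `MAX_CONCURRENT_SESSIONS` instead; check the image's docs for the exact name):

```yaml
services:
  browserless:
    image: ghcr.io/browserless/chromium
    environment:
      # 5 sessions × ~500 MB ≈ 2.5 GB worst case, which fits
      # comfortably within the 4 GB RAM minimum above.
      - CONCURRENT=5
```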
## Software

- Docker 24 or later
- Docker Compose v2
- Linux (recommended for production; Landlock sandboxing only works on Linux)
- macOS (works for development, but CLI sandboxing is limited)
## Network

The engine needs outbound access to:
- Your LLM provider’s API (Anthropic, OpenAI, etc.)
- The internet (for web search and browsing tools)
- Twilio API (if using voice features)
Inbound, you need to expose:
- Port 3000 (frontend) or your reverse proxy port
- Port 3001 (backend API), or proxy it through the same domain as the frontend
- A public URL if you use Twilio voice callbacks (ngrok can provide one during development)
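A common way to satisfy the inbound requirements is to proxy both ports through a single domain. A minimal nginx sketch, assuming the default ports above and an `/api/` path prefix for the backend (the prefix is illustrative; match it to your deployment's routing):

```
server {
    listen 80;
    server_name example.com;

    # Backend API on port 3001, served under /api/
    location /api/ {
        proxy_pass http://127.0.0.1:3001/;
    }

    # Frontend on port 3000, served at the root
    location / {
        proxy_pass http://127.0.0.1:3000/;
    }
}
```

With this in place, only the proxy's port needs to be exposed publicly.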
## LLM provider

You need an API key from at least one supported LLM provider:
- Anthropic (Claude), recommended
- OpenAI (GPT models)
- Any OpenAI-compatible API endpoint
The API key is set via the `ANTHROPIC_API_KEY` environment variable (or the equivalent variable for your provider). Model groups in the configuration map to specific models and providers.
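For example, the key can be exported before starting the stack (the values below are placeholders; `OPENAI_API_KEY` is the common convention for OpenAI-compatible endpoints, but check your deployment's `.env` for the exact names it reads):

```shell
# Anthropic (Claude):
export ANTHROPIC_API_KEY="sk-ant-your-key-here"

# Or, for OpenAI / an OpenAI-compatible endpoint:
export OPENAI_API_KEY="sk-your-key-here"
```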
## Optional services

- Twilio account: required only for voice call features
- ngrok: useful for exposing your local instance to the internet (e.g., for Twilio callbacks during development)