# Architecture
Frona AI is a full-stack application with a Rust backend and a Next.js frontend, backed by an embedded SurrealDB database.
## Engine (Rust backend)

The engine is the core of the platform. It handles:
- HTTP API: built on Axum, serves REST endpoints and SSE streams
- Agent Orchestration: loads agent configs, manages tool loops, coordinates delegation
- Tool Execution: runs tools server-side, manages browser sessions, sandboxes CLI commands
- Authentication: JWT-based auth with cookie and header support, optional SSO
- Database: SurrealDB embedded with RocksDB storage
- Scheduling: background task runner for cron jobs, compaction, and maintenance
- Inference: multi-provider LLM abstraction with fallback support (via rig-core)
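The multi-provider fallback in the last bullet can be sketched as a provider trait tried in order until one succeeds. This is an illustrative stdlib-only sketch, not Frona's actual rig-core integration; `Provider`, `FallbackChain`, and the stub providers are hypothetical names.

```rust
// Hypothetical sketch: try each configured provider in order and
// return the first successful completion.
trait Provider {
    fn name(&self) -> &str;
    fn complete(&self, prompt: &str) -> Result<String, String>;
}

struct FallbackChain {
    providers: Vec<Box<dyn Provider>>,
}

impl FallbackChain {
    fn complete(&self, prompt: &str) -> Result<String, String> {
        let mut last_err = String::from("no providers configured");
        for p in &self.providers {
            match p.complete(prompt) {
                Ok(text) => return Ok(text),
                // Remember the failure and fall through to the next provider.
                Err(e) => last_err = format!("{}: {}", p.name(), e),
            }
        }
        Err(last_err)
    }
}

// Stub providers for demonstration only.
struct Failing;
impl Provider for Failing {
    fn name(&self) -> &str { "primary" }
    fn complete(&self, _: &str) -> Result<String, String> { Err("rate limited".into()) }
}

struct Echo;
impl Provider for Echo {
    fn name(&self) -> &str { "fallback" }
    fn complete(&self, prompt: &str) -> Result<String, String> { Ok(format!("echo: {prompt}")) }
}

fn main() {
    let chain = FallbackChain { providers: vec![Box::new(Failing), Box::new(Echo)] };
    // The first provider fails, so the second one answers.
    println!("{:?}", chain.complete("hi"));
}
```

The point of the abstraction is that callers see one `complete` call while provider outages are absorbed by the chain.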
### Key modules

| Module | Purpose |
|---|---|
| `api/` | HTTP routes, middleware, request/response types |
| `agent/` | Agent models, service, tasks, skills |
| `chat/` | Chat sessions, messages, streaming |
| `tool/` | Tool implementations and registry |
| `inference/` | LLM provider abstraction, tool loop |
| `memory/` | Fact storage, compaction, insights |
| `auth/` | Authentication, JWT, SSO |
| `scheduler/` | Background task execution |
| `space/` | Workspace and space management |
## Frontend (Next.js)

The frontend is a Next.js application using the App Router with static export. It provides:
- Chat interface with real-time streaming
- Agent management UI
- Workspace and space navigation
- File attachments and tool result rendering
- Authentication flows (login, register, SSO)
### Key libraries

- React 19 with TypeScript
- Tailwind CSS for styling
- React Markdown for rendering agent responses
- SSE client for real-time message streaming
## Database (SurrealDB)

SurrealDB runs embedded inside the engine process. It uses RocksDB as the storage backend, so there's no separate database server to manage.
### Tables

| Table | Stores |
|---|---|
| `user` | User accounts |
| `agent` | Agent definitions |
| `chat` | Chat sessions |
| `message` | Chat messages |
| `task` | Tasks (direct, delegated, scheduled) |
| `space` | Workspaces and spaces |
| `credential` | Stored credentials |
| `memory` / `fact` | Memory facts and insights |
| `prompt` | Custom prompt overrides |
| `contact` | Contacts |
| `call` | Voice call records |
## Data flow

A typical request flows like this:
1. User sends a message through the frontend
2. Frontend makes a POST to the engine API
3. Engine loads the agent config, conversation history, and memory
4. Engine sends the prompt to the configured LLM provider
5. LLM response streams back; if it includes tool calls, the engine executes them
6. Tool results feed back into the LLM for the next iteration
7. Final response tokens stream to the frontend via SSE
8. Messages are persisted to SurrealDB
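Steps 5–6 form the tool loop: the model either requests a tool call, whose result is appended to the conversation, or emits the final answer. A minimal stdlib-only sketch of that iteration, with a stubbed model and tool (all names here are hypothetical, not the engine's real types):

```rust
// What the stub "model" can return on each turn.
enum ModelReply {
    ToolCall { name: String, args: String },
    Final(String),
}

// Stub model: requests a tool call first, then answers once it
// sees a tool result in the history.
fn call_model(history: &[String]) -> ModelReply {
    if history.iter().any(|m| m.starts_with("tool:")) {
        ModelReply::Final("It is 18:00 in Berlin.".into())
    } else {
        ModelReply::ToolCall { name: "clock".into(), args: "Berlin".into() }
    }
}

// Stub tool executor.
fn run_tool(name: &str, args: &str) -> String {
    format!("tool:{name}({args}) -> 18:00")
}

fn tool_loop(user_msg: &str) -> String {
    let mut history = vec![format!("user:{user_msg}")];
    loop {
        match call_model(&history) {
            ModelReply::ToolCall { name, args } => {
                // Tool results feed back into the conversation
                // for the model's next iteration.
                history.push(run_tool(&name, &args));
            }
            ModelReply::Final(text) => return text,
        }
    }
}

fn main() {
    println!("{}", tool_loop("What time is it in Berlin?"));
}
```

The real engine additionally streams partial tokens over SSE and persists each turn; the loop structure, however, is the same: iterate until the model stops requesting tools.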
## Configuration system

The engine loads configuration from a YAML file with environment variable overrides. Environment variables use the `FRONA_` prefix with underscore-separated paths (e.g., `FRONA_SERVER_PORT` maps to `server.port`).
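The prefix convention can be sketched as a small mapping function. This assumes a simple strip-prefix, lowercase, underscore-to-dot rule; the real engine's handling of multi-word config keys (e.g., keys that themselves contain underscores) may differ.

```rust
// Illustrative mapping from a FRONA_-prefixed environment variable
// to a dotted config path. Returns None for non-FRONA_ variables.
fn env_var_to_config_path(var: &str) -> Option<String> {
    let rest = var.strip_prefix("FRONA_")?;
    // Assumption: every underscore separates one path segment.
    Some(rest.to_lowercase().replace('_', "."))
}

fn main() {
    // FRONA_SERVER_PORT -> server.port
    println!("{:?}", env_var_to_config_path("FRONA_SERVER_PORT"));
}
```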
Prompts are loaded from `.md` files in the config directory rather than hardcoded, so agent behavior can be customized without modifying code.