
Architecture

Frona AI is a full-stack application with a Rust backend and a Next.js frontend, backed by an embedded SurrealDB database.

The engine is the core of the platform. It handles:

  • HTTP API: built on Axum, serves REST endpoints and SSE streams
  • Agent Orchestration: loads agent configs, manages tool loops, coordinates delegation
  • Tool Execution: runs tools server-side, manages browser sessions, sandboxes CLI commands
  • Authentication: JWT-based auth with cookie and header support, optional SSO
  • Database: SurrealDB embedded with RocksDB storage
  • Scheduling: background task runner for cron jobs, compaction, and maintenance
  • Inference: multi-provider LLM abstraction with fallback support (via rig-core)

Module      Purpose
api/        HTTP routes, middleware, request/response types
agent/      Agent models, service, tasks, skills
chat/       Chat sessions, messages, streaming
tool/       Tool implementations and registry
inference/  LLM provider abstraction, tool loop
memory/     Fact storage, compaction, insights
auth/       Authentication, JWT, SSO
scheduler/  Background task execution
space/      Workspace and space management
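The tool/ module pairs implementations with a registry. A minimal sketch of that pattern, using std only; the names here (Tool, ToolRegistry, EchoTool) are illustrative assumptions, and the engine's real trait is likely async and richer:

```rust
use std::collections::HashMap;

// Illustrative trait: each tool exposes a name and an execute method.
trait Tool {
    fn name(&self) -> &'static str;
    fn execute(&self, input: &str) -> String;
}

// Hypothetical example tool that just echoes its input.
struct EchoTool;

impl Tool for EchoTool {
    fn name(&self) -> &'static str {
        "echo"
    }
    fn execute(&self, input: &str) -> String {
        format!("echo: {input}")
    }
}

// The registry maps tool names to boxed implementations, so the
// tool loop can dispatch by the name the LLM asked for.
struct ToolRegistry {
    tools: HashMap<&'static str, Box<dyn Tool>>,
}

impl ToolRegistry {
    fn new() -> Self {
        Self { tools: HashMap::new() }
    }

    fn register(&mut self, tool: Box<dyn Tool>) {
        self.tools.insert(tool.name(), tool);
    }

    fn call(&self, name: &str, input: &str) -> Option<String> {
        self.tools.get(name).map(|t| t.execute(input))
    }
}

fn main() {
    let mut registry = ToolRegistry::new();
    registry.register(Box::new(EchoTool));
    println!("{}", registry.call("echo", "hello").unwrap());
}
```

Returning Option from call lets the caller report an unknown tool name back to the LLM instead of panicking.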

The frontend is a Next.js application using the App Router with static export. It provides:

  • Chat interface with real-time streaming
  • Agent management UI
  • Workspace and space navigation
  • File attachments and tool result rendering
  • Authentication flows (login, register, SSO)

The stack:

  • React 19 with TypeScript
  • Tailwind CSS for styling
  • React Markdown for rendering agent responses
  • SSE client for real-time message streaming

SurrealDB runs embedded inside the engine process. It uses RocksDB as the storage backend, so there’s no separate database server to manage.

Table          Stores
user           User accounts
agent          Agent definitions
chat           Chat sessions
message        Chat messages
task           Tasks (direct, delegated, scheduled)
space          Workspaces and spaces
credential     Stored credentials
memory / fact  Memory facts and insights
prompt         Custom prompt overrides
contact        Contacts
call           Voice call records

A typical request flows like this:

  1. User sends a message through the frontend
  2. Frontend makes a POST to the engine API
  3. Engine loads the agent config, conversation history, and memory
  4. Engine sends the prompt to the configured LLM provider
  5. LLM response streams back; if it includes tool calls, the engine executes them
  6. Tool results feed back into the LLM for the next iteration
  7. Final response tokens stream to the frontend via SSE
  8. Messages are persisted to SurrealDB
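Steps 4–6 form the tool loop. A minimal synchronous sketch with a mock provider (the real loop is async, streams tokens, and calls an actual LLM; all names here are illustrative):

```rust
// Illustrative model of one LLM turn: either a final answer or a tool call.
enum LlmTurn {
    Final(String),
    ToolCall { name: String, input: String },
}

// Mock provider: requests a tool on the first turn, then answers.
// A real provider call would hit the configured LLM over HTTP.
fn mock_llm(history: &[String]) -> LlmTurn {
    if history.iter().any(|m| m.starts_with("tool:")) {
        LlmTurn::Final("done".to_string())
    } else {
        LlmTurn::ToolCall { name: "echo".into(), input: "hi".into() }
    }
}

// Mock tool execution standing in for the registry dispatch.
fn run_tool(name: &str, input: &str) -> String {
    format!("{name} returned {input}")
}

// The loop: call the LLM, execute any tool call, feed the result back,
// and repeat until the LLM produces a final answer (with an iteration cap).
fn tool_loop(prompt: &str) -> String {
    let mut history = vec![format!("user: {prompt}")];
    for _ in 0..8 {
        match mock_llm(&history) {
            LlmTurn::Final(text) => return text,
            LlmTurn::ToolCall { name, input } => {
                let result = run_tool(&name, &input);
                history.push(format!("tool: {result}"));
            }
        }
    }
    "max iterations reached".to_string()
}

fn main() {
    println!("{}", tool_loop("hello"));
}
```

The iteration cap matters in practice: without it, a model that keeps requesting tools would loop forever.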

The engine loads configuration from a YAML file, with environment variable overrides. Environment variables use the FRONA_ prefix and underscore-separated paths (e.g., FRONA_SERVER_PORT maps to server.port).
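The prefix-to-path mapping can be sketched as follows, assuming a simple lowercase, underscore-to-dot convention (a config key that itself contains an underscore would need extra handling):

```rust
// Map an environment variable like FRONA_SERVER_PORT to a config path
// like server.port. Assumes single-word path segments.
fn env_key_to_path(key: &str) -> Option<String> {
    let rest = key.strip_prefix("FRONA_")?;
    Some(rest.to_lowercase().replace('_', "."))
}

fn main() {
    assert_eq!(env_key_to_path("FRONA_SERVER_PORT").as_deref(), Some("server.port"));
    assert_eq!(env_key_to_path("HOME"), None); // no FRONA_ prefix: ignored
    println!("ok");
}
```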

Prompts are loaded from .md files in the config directory, not hardcoded. This makes it easy to customize agent behavior without modifying code.
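Loading a prompt override might look like the sketch below; the file layout (one .md file per prompt name in the config directory) and the function name are assumptions for illustration:

```rust
use std::fs;
use std::path::Path;

// Read a prompt from <config_dir>/<name>.md, falling back to a
// built-in default when no override file exists.
fn load_prompt(config_dir: &Path, name: &str, default: &str) -> String {
    let path = config_dir.join(format!("{name}.md"));
    fs::read_to_string(&path).unwrap_or_else(|_| default.to_string())
}

fn main() {
    let dir = std::env::temp_dir();
    // No override file in the temp dir, so the default is returned.
    println!("{}", load_prompt(&dir, "missing-agent", "You are a helpful assistant."));
}
```

Falling back to a compiled-in default keeps the engine bootable even when an override file is absent.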