What is Frona AI
Frona is a self-hosted AI agent platform. You create autonomous agents, give them tools, and talk to them through a chat interface. Agents act on their own. They browse the web, run code, develop applications, search the internet, make phone calls, delegate work to each other, and remember context across conversations. You give them a task and they figure out how to get it done.
You deploy Frona on your own infrastructure and keep full control of your data. The platform is built from the ground up with security in mind. The engine is written in Rust, so it’s fast, lightweight, and runs everything in a single process.
Security First
AI agents are powerful. They can execute code, browse websites, and access your data. No platform can make LLMs perfectly safe. They will make mistakes. The goal is to isolate those mistakes and reduce the blast radius when they happen. Frona follows security best practices to contain what agents can access and limit the damage when something goes wrong.
- Sandboxed Execution: When agents run shell commands, they execute inside a sandbox. The sandbox restricts filesystem access to the agent’s workspace directory, controls network access per agent, and enforces execution timeouts. An agent can’t read files outside its workspace or make network calls you didn’t allow.
- Agent Isolation: Creating agents is cheap and easy, so you can create narrow, purpose-built agents instead of one agent that can do everything. Each agent gets its own set of tools, its own workspace directory, and its own credentials. An agent that writes code doesn’t need access to your browser sessions. An agent that searches the web doesn’t need access to your files. The less an agent can reach, the less damage it can do if something goes wrong.
- Isolated Browser Sessions: Each user gets separate browser profiles. Different credentials get separate browser states. One user’s cookies and sessions are never visible to another.
- Self-hosted by design: Your data lives on your servers. You choose which LLM provider to use, and traffic goes directly from your instance to that provider.
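To make the filesystem restriction concrete, here is a minimal sketch of the kind of check a sandbox can apply before touching a path an agent asked for. This is illustrative only, not Frona’s actual implementation; the function and directory names are assumptions.

```rust
use std::path::{Component, Path, PathBuf};

/// Illustrative sketch: resolve a path requested by an agent against its
/// workspace root, refusing anything that would escape the sandbox.
fn resolve_in_workspace(workspace: &Path, requested: &str) -> Option<PathBuf> {
    let requested = Path::new(requested);
    // Absolute paths and `..` components would let the agent reach
    // outside its workspace, so reject them outright.
    if requested.is_absolute()
        || requested.components().any(|c| matches!(c, Component::ParentDir))
    {
        return None;
    }
    Some(workspace.join(requested))
}

fn main() {
    let ws = Path::new("/data/agents/researcher"); // hypothetical workspace
    assert!(resolve_in_workspace(ws, "notes/todo.md").is_some());
    assert!(resolve_in_workspace(ws, "../other-agent/secrets").is_none());
    assert!(resolve_in_workspace(ws, "/etc/passwd").is_none());
    println!("workspace checks passed");
}
```

A production sandbox would also canonicalize symlinks and enforce the per-agent network and timeout policies described above; this sketch shows only the lexical path check.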
Fast and lightweight
The engine is written in Rust. That matters for a platform that manages long-running agent sessions, concurrent tool executions, and real-time streaming.
- Low resource usage: The engine binary is compact and runs with minimal memory overhead. The database runs embedded in the same process, so there is no separate database server eating resources.
- Concurrent by default: The engine handles many concurrent agent sessions, SSE streams, and tool executions without breaking a sweat. Each agent can run multiple tasks in parallel.
- Fast streaming: Responses stream token by token over SSE with negligible latency added by the engine. The bottleneck is the LLM provider, not Frona.
- Minimal deployment: A single Docker container runs the entire backend: API server, database, scheduler, and tool execution. No microservice sprawl, no message queues, no external cache layers.
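As an illustration of the Server-Sent Events format that streamed responses arrive in, here is a small client-side sketch that extracts the `data:` fields from an event stream. The payload shape is an assumption for clarity; real events may carry JSON envelopes rather than raw tokens.

```rust
/// Minimal sketch: pull the `data:` payload out of each event in an
/// SSE stream (events are separated by a blank line per the SSE spec).
fn sse_data_lines(raw: &str) -> Vec<String> {
    raw.lines()
        .filter_map(|line| line.strip_prefix("data: "))
        .map(|s| s.to_string())
        .collect()
}

fn main() {
    // Two SSE events, each carrying one token of a streamed response.
    let stream = "data: Hel\n\ndata: lo\n\n";
    assert_eq!(sse_data_lines(stream), vec!["Hel", "lo"]);
    println!("ok");
}
```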
Core Concepts
- Agents are the main building blocks. Each agent has a name, a system prompt that defines its behavior, a model group that determines which LLM it uses, and a list of tools it can access.
- Memory lets agents remember things across conversations. There are user-scoped facts (shared across all agents) and agent-scoped facts (private to one agent). The platform automatically compacts and deduplicates memories over time.
- Tools are capabilities you give to agents. Browser automation, web search, file operations, shell commands, voice calls, task scheduling, and more. Tools run server-side and return results to the agent.
- Tasks represent units of work. They can be direct (run immediately), delegated (from one agent to another), or scheduled (recurring via cron expressions).
- Chat is how you interact with agents. Each conversation belongs to one agent, but multiple agents can contribute to it through delegation. Messages stream in real-time over Server-Sent Events. When an agent uses a tool during a conversation, you see the tool calls and results inline.
- Spaces are groups of chats that share the same context. When you link conversations to a space, the platform summarizes those conversations and feeds the context back into new chats. Spaces and workspaces are completely separate concepts.
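The three task kinds can be sketched as a small data model. Variant and field names here are hypothetical, chosen to mirror the description above, and are not Frona’s actual schema.

```rust
/// Hedged sketch of the task kinds described above (illustrative names).
#[derive(Debug, PartialEq)]
enum TaskKind {
    Direct,                     // run immediately
    Delegated { from: String }, // handed off by another agent
    Scheduled { cron: String }, // recurring, driven by a cron expression
}

struct Task {
    agent: String, // which agent executes the work
    kind: TaskKind,
}

fn main() {
    // A recurring task: hypothetical agent, every Monday at 09:00.
    let weekly_report = Task {
        agent: "researcher".into(),
        kind: TaskKind::Scheduled { cron: "0 9 * * MON".into() },
    };
    assert!(matches!(weekly_report.kind, TaskKind::Scheduled { .. }));
    println!("agent: {}", weekly_report.agent);
}
```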
Architecture at a glance
Frona has two main components:
- Engine: a Rust backend that handles agents, chat, tools, authentication, and the database. It uses SurrealDB with a RocksDB storage engine.
- Frontend: a Next.js application that provides the chat interface, agent management, and workspace UI.
External services plug in for specific capabilities:
- Browserless for headless Chrome (browser automation)
- SearXNG for web search
- Twilio for voice calls (optional)
Everything runs in Docker containers. A typical deployment is a single docker-compose.yml that brings up the engine, frontend, and supporting services.
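As a rough sketch of such a file, a deployment might look like the following. Image names, ports, and volume paths are illustrative assumptions, not Frona’s published configuration.

```yaml
services:
  engine:                        # Rust backend: API, embedded SurrealDB, scheduler
    image: frona/engine          # illustrative image name
    ports: ["8080:8080"]
    volumes:
      - ./data:/data             # agent workspaces and RocksDB storage
  frontend:                      # Next.js chat and management UI
    image: frona/frontend        # illustrative image name
    ports: ["3000:3000"]
  browserless:                   # headless Chrome for browser automation
    image: browserless/chrome
  searxng:                       # web search backend
    image: searxng/searxng
```

Twilio, being a hosted service, is configured with credentials rather than run as a container.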