
AI Features — Agents, Commands, and Threads

Synap's AI is not a chatbot bolted on — it is a co-founder woven into every layer of the system. Two types of agents, persistent threads, reusable commands, and a secure protocol that ensures AI never touches your database directly.

Two-layer agent architecture

Synap uses a two-layer agent system designed for both breadth and depth:

OrchestratorAgent

The OrchestratorAgent is your co-founder. It has access to the full tool suite — entity creation, view management, search, document editing, profile management, and more. When you open a new AI thread and start talking, the OrchestratorAgent handles the conversation. It can create tasks, organize your data, build dashboards, search across your entire workspace, and chain multiple operations together in a single response.

The OrchestratorAgent operates workspace-wide. It sees your profiles, understands your data structure, and can reason across entities, views, and channels. It discovers your workspace schema at runtime by querying profiles and bento configurations — nothing is hardcoded.
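Runtime schema discovery can be sketched as follows. This is a minimal illustration, not Synap's actual implementation: the profile record shape (`entityType`, `fields`) is assumed for the example.

```python
def discover_schema(profiles):
    """Build a runtime schema map from profile records.

    Nothing is hardcoded: the agent learns which entity types exist,
    and which fields they carry, from whatever profiles the workspace
    actually contains at the moment of the query.
    """
    schema = {}
    for profile in profiles:
        # Hypothetical record shape: {"entityType": ..., "fields": [...]}
        schema[profile["entityType"]] = sorted(profile.get("fields", []))
    return schema
```

A workspace with `task` and `note` profiles would yield a two-entry map; adding a new profile later changes the agent's view on the next query, with no code change.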

PersonaAgent

PersonaAgents are domain experts that operate in branches. When a conversation goes in a specialized direction — deep research, technical writing, code analysis — you branch the thread and a PersonaAgent takes over. PersonaAgents have narrower scope (branch-only) but deeper expertise in their domain. They can be configured with custom instructions, specific tool access, and fine-tuned behavior for their specialty.
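A PersonaAgent configuration might look like the sketch below. The field names (`instructions`, `allowed_tools`, `scope`) are illustrative assumptions; the point is the contrast with the orchestrator: narrower tool access, branch-only scope, custom instructions.

```python
from dataclasses import dataclass

@dataclass
class PersonaConfig:
    """Hypothetical shape of a PersonaAgent's configuration."""
    name: str
    instructions: str
    allowed_tools: list       # narrower than the orchestrator's full suite
    scope: str = "branch"     # PersonaAgents operate branch-only

research = PersonaConfig(
    name="deep-research",
    instructions="Cite sources; prefer primary documents over summaries.",
    allowed_tools=["search", "document.read"],
)
```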

Threads — persistent AI conversations

Every AI conversation in Synap is a thread — a persistent, context-aware dialogue that you can return to at any time. Unlike chat interfaces that lose context when you close the window, Synap threads maintain session-scoped memory. The AI remembers what you discussed, what actions it took, and what decisions were made.

Session memory uses an automatic compaction engine. For your personal thread, context persists for 4 hours. For other threads, it persists for 30 minutes. When the session expires, the AI summarizes the key points so that the next conversation starts with relevant context rather than a blank slate. Mid-session, if the conversation gets long, the compaction engine summarizes older messages to stay within token budgets while preserving the essential information.
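The two TTLs and the mid-session compaction step can be sketched roughly like this. The function names and the fold-two-oldest strategy are assumptions for illustration; only the 4-hour/30-minute split comes from the description above.

```python
PERSONAL_TTL_SECONDS = 4 * 60 * 60   # personal thread: 4 hours
DEFAULT_TTL_SECONDS = 30 * 60        # all other threads: 30 minutes

def session_ttl(thread_kind: str) -> int:
    return PERSONAL_TTL_SECONDS if thread_kind == "personal" else DEFAULT_TTL_SECONDS

def compact(messages, token_budget, count_tokens, summarize):
    """Summarize older messages when the thread exceeds its token budget,
    keeping the most recent messages verbatim."""
    while sum(count_tokens(m) for m in messages) > token_budget and len(messages) > 2:
        # Fold the two oldest messages into a single summary message.
        messages = [summarize(messages[0], messages[1])] + messages[2:]
    return messages
```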

Commands — reusable AI shortcuts

Commands are saved prompt templates with configurable arguments. Instead of typing the same complex instruction every time, you save it as a command and run it with a single action. Commands can include template variables that get filled in at execution time.

Examples of commands: "Summarize this document in 3 bullet points," "Create a task from this email with appropriate priority," "Generate a weekly report of completed tasks grouped by project." Commands run in the context of the current thread, so they have access to the full conversation history and workspace data.
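A command with a template variable might be modeled as below; this is a sketch using Python's standard `string.Template`, not Synap's actual template syntax.

```python
from string import Template

class Command:
    """A saved prompt template with configurable arguments."""

    def __init__(self, name, template):
        self.name = name
        self.template = Template(template)

    def render(self, **args):
        # Fill template variables at execution time.
        return self.template.substitute(**args)

summarize = Command("summarize", "Summarize this document in $count bullet points.")
```

Running `summarize.render(count=3)` produces the full instruction, which then executes in the current thread with the conversation's context.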

Model-agnostic design

Synap does not lock you into a single AI provider. Through OpenRouter integration, you can use Claude, GPT-4, Gemini, Mistral, Llama, or any other model. Each agent can be configured with its own model selection and temperature setting via AgentConfig.modelId and temperature. Switch models freely — your conversation history, commands, and workspace context stay intact regardless of which model powers the responses.

Model resolution follows a priority chain: agent-specific override, then workspace default, then environment fallback. This means you can set a default model for your workspace while letting specific agents use specialized models better suited to their domain.
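The priority chain reduces to a short fallthrough. The config key names and the environment variable are assumptions for illustration; the ordering is the one described above.

```python
import os

def resolve_model(agent_config: dict, workspace_config: dict) -> str:
    """Priority chain: agent-specific override, then workspace default,
    then environment fallback."""
    return (
        agent_config.get("modelId")                      # per-agent override
        or workspace_config.get("defaultModelId")        # workspace default
        or os.environ.get("DEFAULT_MODEL_ID", "anthropic/claude-sonnet")  # hypothetical fallback
    )
```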

The Hub Protocol — secure data access

AI agents never touch the database directly. Every data operation goes through the Hub Protocol — a REST interface between the intelligence service and the backend. The Hub Protocol has 20 sub-routers covering entities, views, channels, documents, search, profiles, and more. Bearer token authentication ensures that every request is authorized.
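In shape, every Hub Protocol call looks something like the request built below. The base URL and path are hypothetical; the essentials are the REST surface and the bearer token on every request.

```python
def hub_request(method, path, token, body=None):
    """Build a Hub Protocol request: agents never see the database,
    only this authenticated REST surface."""
    return {
        "method": method,
        "url": f"https://hub.example.com/api{path}",  # hypothetical base URL
        "headers": {"Authorization": f"Bearer {token}"},
        "body": body,
    }
```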

This architectural boundary is intentional. It means you can run the intelligence service on a different server than your data pod, use a third-party intelligence service, or swap out the AI layer entirely — your data stays protected behind the same API that any external client would use. The Hub Protocol also enables the proposal system: when an agent tries to modify data, the backend's checkPermissionOrPropose() gate decides whether the action executes immediately or becomes a reviewable proposal.
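The proposal gate's decision logic can be sketched as a single branch. This is an illustrative Python rendering of the `checkPermissionOrPropose()` idea, with assumed argument and return shapes.

```python
def check_permission_or_propose(actor, action, has_permission):
    """Backend gate: execute immediately when the actor holds the
    permission; otherwise park the action as a reviewable proposal."""
    if has_permission(actor, action):
        return {"status": "executed", "action": action}
    return {"status": "proposal", "action": action}
```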

Prompt architecture

Agent behavior is composed from three layers: the agent prompt (who the agent is and when it activates), skill files (how to perform specific tasks), and runtime context (what the current workspace looks like, conversation history, and user preferences). This composable architecture means agents learn new capabilities by adding skill files, not by rewriting their core prompts.
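The three-layer composition might be assembled like this; the section headers and join format are assumptions, the layering order is as described.

```python
def compose_prompt(agent_prompt, skill_files, runtime_context):
    """Layer the three sources: agent identity, then skills, then live context."""
    sections = [agent_prompt]
    # Each skill file adds a capability without touching the core prompt.
    sections += [f"## Skill: {name}\n{body}" for name, body in skill_files.items()]
    sections.append(f"## Context\n{runtime_context}")
    return "\n\n".join(sections)
```

Adding a new skill is then just one more entry in `skill_files`, which is what makes the architecture composable.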

Technical reference

For architecture details and implementation specifics, see the Multi-agent system and AI architecture pages in the technical docs.


Related guides

→ The Event Chain — Every Write, Recorded Forever
→ Channels — Where Data Meets Conversation
→ Commands — Reusable AI Shortcuts
© 2026 Synap Technologies. All rights reserved.