AI governance · transparency · engineering
The Proposal System: How Synap Keeps AI Transparent
Antoine Servant · March 26, 2026 · 8 min read

The default model for AI in productivity tools is: the AI does things, and you find out later. It rewrites your text. It moves your files. It categorizes your emails. Sometimes it gets it right. Sometimes it doesn't. You rarely know what it changed or why.

This is the wrong model. AI should be transparent, reviewable, and reversible. That is what Synap's proposal system does.

How proposals work

When an AI agent in Synap wants to modify your data — create an entity, update a property, add a relationship, change a view — it does not execute the change directly. Instead, it creates a proposal: a structured description of what it wants to do and why.

You see the proposal in your workspace. You can review the details: "Create a task entity with title 'Follow up with Sarah', due date March 28, linked to project 'Website Redesign'." You approve it, reject it, or modify it before it takes effect.
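To make that shape concrete, here is a minimal sketch of what such a proposal object could look like, using the task example above. The field names and types are illustrative assumptions, not Synap's actual schema:

```ts
// Hypothetical proposal shape; field names are illustrative,
// not Synap's documented schema.
type ProposalStatus = "pending" | "approved" | "rejected" | "modified";

interface Proposal {
  id: string;
  agent: string;                    // which agent proposed the change
  operation: "create" | "update" | "delete" | "link";
  target: string;                   // entity type the change applies to
  payload: Record<string, unknown>; // the proposed data
  reasoning: string;                // the agent's stated rationale
  status: ProposalStatus;
  createdAt: string;                // ISO-8601 timestamp
}

// The example from the text, expressed as a proposal:
const followUp: Proposal = {
  id: "prop_01",
  agent: "orchestrator",
  operation: "create",
  target: "task",
  payload: {
    title: "Follow up with Sarah",
    dueDate: "2026-03-28",
    linkedProject: "Website Redesign",
  },
  reasoning: "User asked to be reminded to follow up after the call.",
  status: "pending",
  createdAt: new Date().toISOString(),
};
```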

Some actions are auto-approved by default — low-risk operations like creating a note entity from your explicit input. Higher-risk operations — deleting entities, modifying relationships, changing view configurations — always require review. You can customize the auto-approve whitelist per workspace.
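The routing decision itself can be thought of as a whitelist lookup. A minimal sketch, assuming a simple set-based whitelist with per-workspace overrides (all names hypothetical):

```ts
// Hypothetical routing: whitelisted low-risk operations execute
// immediately; everything else becomes a pending proposal.
function route(
  op: string,
  autoApprove: ReadonlySet<string>,
): "execute" | "propose" {
  return autoApprove.has(op) ? "execute" : "propose";
}

// Per-workspace override of the defaults.
const workspaceWhitelist = new Set(["note.create"]);

route("note.create", workspaceWhitelist);   // "execute"
route("entity.delete", workspaceWhitelist); // "propose" (always reviewed)
```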

Why this matters

AI hallucination is not a theoretical risk. It happens daily. An AI agent that miscategorizes a contact, assigns the wrong priority to a task, or creates a duplicate entity can silently corrupt your knowledge base. If you don't notice the mistake immediately, it compounds: other AI actions build on the incorrect data, and your workspace drifts further from reality.

The proposal system creates a checkpoint. Every AI mutation is visible before it takes effect. If the AI misunderstands your intent, you catch it before the data changes. If it gets something wrong, you reject the proposal and the workspace stays clean. The error never propagates.

The event chain

Every proposal — approved or rejected — is logged in an immutable event chain. This is not just an audit log. It is a complete history of every AI interaction with your data: what was proposed, when, by which agent, what the reasoning was, and whether you approved it.
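The post doesn't specify how immutability is enforced. One common construction for a tamper-evident log is a hash chain, where each event embeds the hash of its predecessor; the sketch below is written under that assumption, not as Synap's documented design:

```ts
import { createHash } from "node:crypto";

// Each event records the hash of the previous event, forming a chain.
interface ChainEvent {
  proposalId: string;
  agent: string;
  reasoning: string;
  decision: "approved" | "rejected";
  timestamp: string;
  prevHash: string; // hash of the preceding event
  hash: string;     // hash of this event's contents plus prevHash
}

function appendEvent(
  chain: ChainEvent[],
  e: Omit<ChainEvent, "prevHash" | "hash">,
): ChainEvent[] {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "genesis";
  const hash = createHash("sha256")
    .update(JSON.stringify({ ...e, prevHash }))
    .digest("hex");
  return [...chain, { ...e, prevHash, hash }];
}
```

Under this construction, editing an earlier event changes its hash and breaks every link after it, which is what makes the history auditable rather than merely logged.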

This gives you something no other AI tool offers: a full accounting of what AI has done in your workspace. You can review the last week of proposals, see patterns in AI behavior, and adjust permissions based on actual performance. Trust is built on evidence, not promises.

Permission levels

The proposal system is backed by role-based access control. Different agent types have different default permissions. The Orchestrator agent (your primary AI co-pilot) has broader permissions than specialized Persona agents (domain experts that work in isolated branches).

A whitelist called DEFAULT_AUTO_APPROVE defines which operations execute immediately for each role. Everything else goes through the proposal flow. You can override these defaults per workspace — tightening permissions for sensitive data or loosening them for routine operations.
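The per-role structure of DEFAULT_AUTO_APPROVE isn't documented here; the sketch below is a guess at its shape, with role names and operation names invented for illustration:

```ts
// Illustrative per-role defaults. The DEFAULT_AUTO_APPROVE name
// comes from the post; the roles and operations are assumptions.
type Role = "orchestrator" | "persona";

const DEFAULT_AUTO_APPROVE: Record<Role, readonly string[]> = {
  orchestrator: ["note.create", "task.create"], // broader defaults
  persona: ["note.create"],                     // narrower, branch-scoped
};

// Per-workspace override: start from the role defaults,
// then tighten or loosen.
function effectiveWhitelist(
  role: Role,
  overrides: { add?: string[]; remove?: string[] } = {},
): Set<string> {
  const list = new Set(DEFAULT_AUTO_APPROVE[role]);
  overrides.add?.forEach((op) => list.add(op));
  overrides.remove?.forEach((op) => list.delete(op));
  return list;
}

// Tighten for a sensitive workspace: nothing auto-approves.
effectiveWhitelist("orchestrator", {
  remove: ["note.create", "task.create"],
});
```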

The broader principle

AI governance is not about restricting AI capability. It is about building trust through transparency. An AI agent that proposes changes and shows its reasoning is more useful than one that acts silently — because you can give it more responsibility over time.

When you trust the agent because you have seen it make correct proposals fifty times in a row, you expand its auto-approve whitelist. Trust scales with evidence. The proposal system is the mechanism that generates that evidence.

AI should work for you, not on you. Every change visible. Every action reversible. Every decision yours.

Try Synap

One plan, $50/month. Dedicated pod, any AI model, full sovereignty.
