Capacities pioneered object-based note-taking with a beautiful UX and a data model that treats everything as typed objects rather than flat documents. Synap shares that philosophy. The difference is what happens to your data and how much AI can do for you.
Both products build on the same insight: information should be typed, not flat. In Capacities, you create "objects" — a person, a book, a meeting — with custom properties. In Synap, the equivalent is "entities" — contacts, projects, tasks, articles — with typed profiles and JSONB property schemas.
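To make the data-model parallel concrete, here is a minimal sketch of a typed entity validated against a per-type property schema. All names here (the schema dict, the field names, `make_entity`) are illustrative assumptions, not Synap's actual schema or API.

```python
import json

# Hypothetical per-type property schemas, standing in for Synap's
# JSONB property schemas. Field names are made up for illustration.
PROPERTY_SCHEMAS = {
    "contact": {"name": str, "email": str},
    "article": {"title": str, "url": str},
}

def make_entity(entity_type, properties):
    """Create an entity dict, checking each property against the type's schema."""
    schema = PROPERTY_SCHEMAS[entity_type]
    for key, value in properties.items():
        expected = schema.get(key)
        if expected is None or not isinstance(value, expected):
            raise ValueError(f"invalid property {key!r} for type {entity_type!r}")
    return {"type": entity_type, "properties": properties}

entity = make_entity("contact", {"name": "Ada Lovelace", "email": "ada@example.com"})
print(json.dumps(entity))
```

The point of the sketch: the type carries the schema, so properties are structured and queryable rather than buried in free text.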
The difference is who does the work. In Capacities, you manually create objects, assign types, and fill in properties. In Synap, AI does this for you. Paste a link, forward an email, or type a messy note — AI detects the entity type, extracts properties, and creates relationships to existing entities. Same data model philosophy, radically different user experience.
Capacities is a cloud-only SaaS product. Your objects, properties, and relationships live on their servers. There's no self-hosting option and no way to access your data through a standard database protocol. If Capacities changes pricing, removes features, or shuts down, your data is accessible only through their export — which may or may not preserve the full graph.
Synap runs on a dedicated PostgreSQL pod. Your entities, relationships, and AI history live in a real database you control. Self-host it, connect any SQL client, export with pg_dump, or build custom applications on top of your own data. The infrastructure is yours.
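Because the database is ordinary PostgreSQL, a full export is one standard command away. A small sketch of building that `pg_dump` invocation follows; the host, database, and user names are placeholders, but the flags are standard PostgreSQL.

```python
import shlex

def pg_dump_command(host, db, user, outfile):
    """Build a pg_dump invocation producing a custom-format archive
    (restorable with pg_restore). Connection details are placeholders."""
    return [
        "pg_dump",
        "--host", host,
        "--username", user,
        "--format", "custom",
        "--file", outfile,
        db,
    ]

cmd = pg_dump_command("localhost", "synap", "synap_user", "synap.dump")
print(shlex.join(cmd))
```

The custom format (`--format custom`) is compressed and preserves everything needed to rebuild the schema and data elsewhere, which is the practical meaning of "the infrastructure is yours."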
Capacities has added some AI features — mainly search and summarization. But you're locked to whatever model they chose, with no way to switch providers or run a local model. The AI assists within the product; it doesn't transform how you work.
Synap's AI is structural. It doesn't just find things — it creates entities, proposes relationships, extracts properties, and builds views. Through the proposal system, every AI mutation is reviewable before it takes effect. And you choose the model: Claude, GPT-4, Gemini, Mistral, or any model on OpenRouter. Switch freely without losing context.
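The review step above can be sketched as a simple gate between an AI-proposed mutation and the store it would modify. The operation names and statuses here are assumptions for illustration, not Synap's actual proposal API.

```python
def apply_proposal(store, proposal, approved):
    """Apply an AI-proposed mutation only after human approval.

    A rejected proposal leaves the store untouched; an approved
    create_entity proposal appends the new entity.
    """
    if not approved:
        proposal["status"] = "rejected"
        return store
    if proposal["op"] == "create_entity":
        store.append(proposal["entity"])
    proposal["status"] = "applied"
    return store

entities = []
proposal = {
    "op": "create_entity",
    "entity": {"type": "task", "properties": {"title": "Review draft"}},
    "status": "pending",
}
apply_proposal(entities, proposal, approved=True)
print(len(entities), proposal["status"])  # → 1 applied
```

The design point is that the mutation and the decision are separate records: the AI can propose freely, but nothing lands in your graph until you say so.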