cyndra-agent
v1.2.67
Personal Cyndra AI assistant. Lightweight, secure, customizable.
Why We Built Cyndra Agent
We wanted an AI assistant that could actually run things on our behalf — manage messages, automate tasks, connect to the tools we use — without handing over the keys to a system we couldn't understand or trust. Existing solutions were either too complex to audit or too locked-down to be useful.
Cyndra Agent is built to be small enough to understand and secure enough to trust. One process, a handful of files, and every agent runs in its own isolated Linux container. No shared memory, no application-level permission checks — real OS-level isolation.
Quick Start
npm install cyndra-agent
cd cyndra-agent
cyndra

The cyndra wrapper detects a first run and launches setup in Cyndra-branded mode. On subsequent runs, cyndra drops you into the interactive session. If setup doesn't start automatically, run /setup once you're in.
Cyndra AI handles everything: dependencies, authentication, container setup, and service configuration.
Note: Commands prefixed with / (like /setup, /add-whatsapp) are skills that run inside the Cyndra AI session — type them in the prompt, not in your regular terminal. Cyndra AI runs on the Claude Agent SDK; if you don't have the underlying CLI installed, get it at claude.com/product/claude-code, then use cyndra to launch.
Philosophy
Small enough to understand. One process, a few source files and no microservices. If you want to understand the full Cyndra Agent codebase, just ask Cyndra AI to walk you through it.
Secure by isolation. Agents run in Linux containers (Apple Container on macOS, or Docker) and they can only see what's explicitly mounted. Bash access is safe because commands run inside the container, not on your host.
Built for the individual user. Cyndra Agent isn't a monolithic framework; it's software that fits each user's exact needs. Instead of becoming bloatware, Cyndra Agent is designed to be bespoke. You make your own fork and have Cyndra AI modify it to match your needs.
Customization = code changes. No configuration sprawl. Want different behavior? Modify the code. The codebase is small enough that it's safe to make changes.
AI-native.
- No installation wizard; Cyndra AI guides setup.
- No monitoring dashboard; ask Claude what's happening.
- No debugging tools; describe the problem and Claude fixes it.
Skills over features. Instead of adding features (e.g. support for Telegram) to the codebase, contributors submit Claude Code skills like /add-telegram that transform your fork. You end up with clean code that does exactly what you need.
Best harness, best model. Cyndra Agent runs on the Claude Agent SDK, which means you're running Cyndra AI directly. Cyndra AI is highly capable and its coding and problem-solving capabilities allow it to modify and expand Cyndra Agent and tailor it to each user.
What It Supports
- Multi-channel messaging - Talk to your assistant from WhatsApp, Telegram, Discord, Slack, or Gmail. Add channels with skills like /add-whatsapp or /add-telegram. Run one or many at the same time.
- Isolated group context - Each group has its own CLAUDE.md memory, isolated filesystem, and runs in its own container sandbox with only that filesystem mounted to it.
- Main channel - Your private channel (self-chat) for admin control; every group is completely isolated
- Scheduled tasks - Recurring jobs that run Claude and can message you back
- Web access - Search and fetch content from the Web
- Container isolation - Agents are sandboxed in Docker (macOS/Linux), Docker Sandboxes (micro VM isolation), or Apple Container (macOS)
- Credential security - Agents never hold raw API keys. Outbound requests route through OneCLI's Agent Vault, which injects credentials at request time and enforces per-agent policies and rate limits.
- Agent Swarms - Spin up teams of specialized agents that collaborate on complex tasks
- Optional integrations - Add Gmail (/add-gmail) and more via skills
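As a rough illustration of the scheduled-tasks capability above, here is a minimal in-memory sketch of a recurring-task loop. The `Task` shape and `tick` function are hypothetical; the real scheduler (src/task-scheduler.ts) persists tasks in SQLite and invokes Claude in a container.

```typescript
// Hypothetical sketch of a recurring-task loop: each tick, run any task whose
// next run time has passed, then push its schedule forward. Not the actual
// Cyndra Agent implementation.
interface Task {
  id: string;
  intervalMs: number; // e.g. a weekly briefing -> 7 * 24 * 3600 * 1000
  nextRunAt: number;  // epoch milliseconds
  run: () => void;    // would invoke the agent and message the user
}

function tick(tasks: Task[], now: number): Task[] {
  for (const task of tasks) {
    if (task.nextRunAt <= now) {
      task.run();
      // Skip past any missed intervals so a laptop waking from sleep
      // doesn't replay a backlog of runs.
      while (task.nextRunAt <= now) task.nextRunAt += task.intervalMs;
    }
  }
  return tasks;
}
```

A loop like this only needs to be driven by a timer (e.g. `setInterval(() => tick(tasks, Date.now()), 60_000)`), which keeps the single-process design simple.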
Usage
Talk to your assistant with the trigger word (default: @Andy):
@Andy send an overview of the sales pipeline every weekday morning at 9am (has access to my Obsidian vault folder)
@Andy review the git history for the past week each Friday and update the README if there's drift
@Andy every Monday at 8am, compile news on AI developments from Hacker News and TechCrunch and message me a briefing

From the main channel (your self-chat), you can manage groups and tasks:
@Andy list all scheduled tasks across groups
@Andy pause the Monday briefing task
@Andy join the Family Chat group

Customizing
Cyndra Agent doesn't use configuration files. To make changes, just tell Cyndra AI what you want:
- "Change the trigger word to @Bob"
- "Remember in the future to make responses shorter and more direct"
- "Add a custom greeting when I say good morning"
- "Store conversation summaries weekly"
Or run /customize for guided changes.
The codebase is small enough that Claude can safely modify it.
Contributing
Don't add features. Add skills.
If you want to add Telegram support, don't create a PR that adds Telegram to the core codebase. Instead, fork Cyndra Agent, make the code changes on a branch, and open a PR. We'll create a skill/telegram branch from your PR that other users can merge into their fork.
Users then run /add-telegram on their fork and get clean code that does exactly what they need, not a bloated system trying to support every use case.
RFS (Request for Skills)
Skills we'd like to see:
Communication Channels
/add-signal - Add Signal as a channel
Requirements
- macOS, Linux, or Windows (via WSL2)
- Node.js 20+
- Cyndra AI
- Apple Container (macOS) or Docker (macOS/Linux)
Architecture
Channels --> SQLite --> Polling loop --> Container (Claude Agent SDK) --> Response

Single Node.js process. Channels are added via skills and self-register at startup — the orchestrator connects whichever ones have credentials present. Agents execute in isolated Linux containers with filesystem isolation. Only mounted directories are accessible. Per-group message queue with concurrency control. IPC via filesystem.
For the full architecture details, see docs/SPEC.md.
Key files:
- src/index.ts - Orchestrator: state, message loop, agent invocation
- src/channels/registry.ts - Channel registry (self-registration at startup)
- src/ipc.ts - IPC watcher and task processing
- src/router.ts - Message formatting and outbound routing
- src/group-queue.ts - Per-group queue with global concurrency limit
- src/container-runner.ts - Spawns streaming agent containers
- src/task-scheduler.ts - Runs scheduled tasks
- src/db.ts - SQLite operations (messages, groups, sessions, state)
- groups/*/CLAUDE.md - Per-group memory
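The per-group queue with a global concurrency limit can be sketched as follows: messages within one group run strictly in order, while at most `limit` agent invocations run across all groups. The `GroupQueue` class and its methods are hypothetical names for illustration, not the actual src/group-queue.ts API.

```typescript
// Hypothetical sketch of a per-group queue with a global concurrency cap.
// Jobs in the same group never overlap; the total number of in-flight jobs
// across groups never exceeds `limit`.
class GroupQueue {
  private queues = new Map<string, Array<() => Promise<void>>>();
  private running = new Set<string>(); // groups with an in-flight job
  private active = 0;                  // total in-flight jobs

  constructor(private limit: number) {}

  enqueue(group: string, job: () => Promise<void>): void {
    if (!this.queues.has(group)) this.queues.set(group, []);
    this.queues.get(group)!.push(job);
    this.pump();
  }

  private pump(): void {
    for (const [group, jobs] of this.queues) {
      if (this.active >= this.limit) return; // global cap reached
      if (this.running.has(group) || jobs.length === 0) continue;
      const job = jobs.shift()!;
      this.running.add(group);
      this.active++;
      job().finally(() => {
        this.running.delete(group);
        this.active--;
        this.pump(); // a slot freed up; try to start the next job
      });
    }
  }
}
```

The same structure generalizes to any "ordered within a key, bounded overall" workload, which is why a chat assistant with many groups fits it well.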
FAQ
Why Docker?
Docker provides cross-platform support (macOS, Linux and even Windows via WSL2) and a mature ecosystem. On macOS, you can optionally switch to Apple Container via /convert-to-apple-container for a lighter-weight native runtime. For additional isolation, Docker Sandboxes run each container inside a micro VM.
Can I run this on Linux or Windows?
Yes. Docker is the default runtime and works on macOS, Linux, and Windows (via WSL2). Just run /setup.
Is this secure?
Agents run in containers, not behind application-level permission checks. They can only access explicitly mounted directories. Credentials never enter the container — outbound API requests route through OneCLI's Agent Vault, which injects authentication at the proxy level and supports rate limits and access policies. You should still review what you're running, but the codebase is small enough that you actually can. See docs/SECURITY.md for the full security model.
Why no configuration files?
We don't want configuration sprawl. Every user should customize Cyndra Agent so that the code does exactly what they want, rather than configuring a generic system. If you prefer having config files, you can tell Claude to add them.
Can I use third-party or open-source models?
Yes. Cyndra Agent supports any Claude API-compatible model endpoint. Set these environment variables in your .env file:
ANTHROPIC_BASE_URL=https://your-api-endpoint.com
ANTHROPIC_AUTH_TOKEN=your-token-here

This allows you to use:
- Local models via Ollama with an API proxy
- Open-source models hosted on Together AI, Fireworks, etc.
- Custom model deployments with Anthropic-compatible APIs
Note: The endpoint must speak the Anthropic API format; models behind other API shapes need a translating proxy.
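A sketch of how those two variables might be resolved at startup, with the official API as the fallback. The `resolveEndpoint` helper is hypothetical; only the variable names come from the .env example above.

```typescript
// Hypothetical helper: read the endpoint override from the environment,
// defaulting to the official Anthropic API when no override is set.
function resolveEndpoint(
  env: Record<string, string | undefined>,
): { baseUrl: string; token?: string } {
  return {
    baseUrl: env.ANTHROPIC_BASE_URL ?? "https://api.anthropic.com",
    token: env.ANTHROPIC_AUTH_TOKEN,
  };
}
```

In practice you would pass `process.env` in, so a local Ollama proxy and the hosted API are selected by the same code path.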
How do I debug issues?
Ask Cyndra AI. "Why isn't the scheduler running?" "What's in the recent logs?" "Why did this message not get a response?" That's the AI-native approach that underlies Cyndra Agent.
Why isn't the setup working for me?
If issues come up during setup, Claude will try to fix them dynamically. If that doesn't work, run claude, then run /debug. If Claude finds an issue that is likely affecting other users, open a PR to modify the setup SKILL.md.
What changes will be accepted into the codebase?
Only security fixes, bug fixes, and clear improvements will be accepted into the core codebase. That's all.
Everything else (new capabilities, OS compatibility, hardware support, enhancements) should be contributed as skills.
This keeps the base system minimal and lets every user customize their installation without inheriting features they don't want.
Changelog
See CHANGELOG.md for breaking changes.
License
MIT
