tangram-ai v0.1.13
Tangram (MVP)
Minimal Telegram chatbot built with **TypeScript + LangGraph**, with **multi-provider config** and **OpenAI Responses API** as the default provider.
Quick Start
- Install deps

```shell
npm i
```

- Create config

```shell
mkdir -p ~/.tangram && cp config.example.json ~/.tangram/config.json
```

Edit `~/.tangram/config.json` and set:

- `channels.telegram.token`
- `providers.<yourProviderKey>.apiKey`
- optionally `providers.<yourProviderKey>.baseUrl`
Supported providers:
- `openai` (Responses API)
- `openai-chat-completions` (Chat Completions API)
- `anthropic` (Messages API, supports custom `baseUrl`)
- Run

```shell
npm run gateway -- --verbose
npm run onboard
npm run gateway -- status
```

Deploy & Upgrade
Deployment bootstrap is part of onboard.
```shell
npm run onboard
```

During onboarding, the wizard can optionally install/start a user-level systemd service.
Gateway service operations:
```shell
npm run gateway -- status
npm run gateway -- stop
npm run gateway -- restart
```

Service stop/restart latency notes:

- the gateway now exits immediately on SIGTERM/SIGINT after local cleanup
- the generated user systemd unit sets `TimeoutStopSec=20` to avoid long stop hangs
Upgrade and rollback:
```shell
npm run upgrade -- --dry-run
npm run upgrade -- --version v0.0.1
npm run rollback -- --to v0.0.1
```

Notes:

- `upgrade` uses a global npm install (`npm install -g tangram-ai@...`) and auto-restarts the service
- use `--no-restart` to skip the restart
- if `systemd --user` is unavailable, run in foreground mode: `npm run gateway -- --verbose`
Release Workflow
This repo includes a baseline release pipeline:
- CI workflow: `.github/workflows/ci.yml`
  - runs on push/PR
  - executes `npm ci`, `npm run lint`, `npm test`, `npm run build`
- Release workflow: `.github/workflows/release.yml`
  - triggers on tag push `v*`
  - builds the project and uploads a tarball asset to the GitHub Release
- npm publish workflow: `.github/workflows/npm-publish.yml`
  - triggers on tag push `v*`
  - executes `npm ci`, `npm run build`, `npm publish`
One-time setup for npm CI publish
- Configure npm Trusted Publishing for this GitHub repository
- Ensure workflow permissions include `id-token: write` (already configured)
- No `NPM_TOKEN` secret is required

After this setup, pushing a version tag (for example `v0.0.2`) will publish `tangram-ai` to npm automatically.
Local release commands
- Create release commit + tag (pass through any npm version target):
```shell
npm run release -- 0.1.0
npm run release -- patch
npm run release -- minor
npm run release -- major
```

After `npm run release -- <target>` completes, push the branch and tag:

```shell
git push origin master
git push origin vX.Y.Z
```

Pushing the tag triggers GitHub Actions release creation automatically.
Onboard Wizard
Run `npm run onboard` for an interactive setup that:

- asks for provider/API/Telegram settings
- applies developer-default permissions (shell enabled but restricted)
- initializes `~/.tangram` directories and baseline files
- initializes runtime directories under `~/.tangram/app`
- can install/start a user-level `systemd` service
- handles existing files one by one (overwrite / skip / backup then overwrite)
Memory (Shared)
Shared memory lives under the configured workspace directory (default: `~/.tangram/workspace`):

- Long-term memory: `memory/memory.md`
- Daily notes: `memory/YYYY-MM-DD.md`
Telegram commands:
- `/stop` stop the current running request in this chat
- `/memory` show memory context
- `/remember <text>` append to today's daily memory
- `/remember_long <text>` append to long-term memory
- `/new` start a new chat session (clear stored session history for the current chat)
- `/whoami` show current Telegram user/chat identity
- `/skill` list installed skills currently discovered by the runtime
Telegram UX behaviors:
- the bot sends a `typing` action while processing
- during tool-calling loops, progress is shown via a single draft-like status message updated in place; new `xN` and explanation lines are appended in the same message (`⏳ 正在调用工具处理你的请求… xN` ("calling tools to handle your request") / `💬 ...`), controlled by `channels.telegram.progressUpdates` (default `true`)
Session Persistence
Gateway now persists per-thread conversation sessions to JSONL files, so restarts can restore recent context.
- Default directory: `~/.tangram/workspace/sessions`
- File format: one JSON record per line (user/assistant only)
- Restore policy: load the latest `restoreMessages` records for the current `threadId` before each invoke
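The restore policy above can be sketched roughly as follows; the record field names are illustrative, not the gateway's actual schema:

```typescript
import * as fs from "node:fs";

// Illustrative record shape: one JSON object per JSONL line.
interface SessionRecord {
  role: "user" | "assistant";
  content: string;
}

// Load the latest `restoreMessages` records from a per-thread JSONL file.
export function restoreSession(file: string, restoreMessages: number): SessionRecord[] {
  if (!fs.existsSync(file)) return [];
  const lines = fs
    .readFileSync(file, "utf8")
    .split("\n")
    .filter((l) => l.trim().length > 0);
  // Keep only the most recent N records.
  return lines.slice(-restoreMessages).map((l) => JSON.parse(l) as SessionRecord);
}
```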
Config:
```json
{
  "agents": {
    "defaults": {
      "session": {
        "enabled": true,
        "dir": "~/.tangram/workspace/sessions",
        "restoreMessages": 100,
        "persistAssistantEmpty": false
      }
    }
  }
}
```

Memory Tools (LLM)
The agent exposes function tools to the model (via OpenAI Responses API):
- `memory_search` search shared memory files
- `file_read` read local skill/content files
- `file_write` write local files
- `file_edit` edit files by targeted text replacement
- `bash` execute CLI commands when `agents.defaults.shell.enabled=true`
- `cron_schedule` schedule one-time/repeating callbacks
- `cron_list` list scheduled callbacks
- `cron_cancel` cancel scheduled callbacks
Memory writes should be done via `file_write` / `file_edit` directly to memory files in the workspace.
The LangGraph workflow also runs a post-reply "memory reflection" node that can automatically summarize the latest turn into memory using a strict JSON format prompt.
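The README does not spell out `file_edit`'s exact contract; a common design for targeted text replacement is to require exactly one match, sketched here with hypothetical names:

```typescript
// Hypothetical sketch in the spirit of file_edit's targeted replacement:
// replace exactly one occurrence of `oldText`, failing loudly when the
// target is missing or ambiguous. Names are illustrative only.
export function targetedReplace(content: string, oldText: string, newText: string): string {
  const first = content.indexOf(oldText);
  if (first === -1) throw new Error("target text not found");
  const second = content.indexOf(oldText, first + oldText.length);
  if (second !== -1) throw new Error("target text is ambiguous (multiple matches)");
  return content.slice(0, first) + newText + content.slice(first + oldText.length);
}
```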
Skills Metadata
The runtime discovers local skills and injects a compact skills list into the model instructions, so the model can decide which skill to open/use.
By default it scans `~/.tangram/skills`.
You can customize via agents.defaults.skills:
```json
{
  "agents": {
    "defaults": {
      "skills": {
        "enabled": true,
        "roots": [
          "~/.tangram/skills"
        ],
        "maxSkills": 40,
        "hotReload": {
          "enabled": true,
          "debounceMs": 800,
          "logDiff": true
        }
      }
    }
  }
}
```

Hot reload behavior:
- skill directory/file changes are detected with filesystem watchers
- reload is debounced (`hotReload.debounceMs`) to avoid noisy rapid rescans
- updates apply globally to the next LLM execution without restarting the gateway
- when `hotReload.logDiff=true`, the gateway logs added/removed/changed skills
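The debounced rescan can be sketched as a plain debounce wrapper, assuming the watcher simply calls a rescan function on every filesystem event (a sketch only, not the project's implementation):

```typescript
// Collapse many rapid filesystem events into one rescan, in the spirit of
// hotReload.debounceMs: the wrapped function only fires after `waitMs` of quiet.
export function debounce(fn: () => void, waitMs: number): () => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return () => {
    if (timer) clearTimeout(timer);
    timer = setTimeout(fn, waitMs);
  };
}
```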
`file_read` / `file_write` / `file_edit` use the `agents.defaults.files` config for access control.
```json
{
  "agents": {
    "defaults": {
      "files": {
        "enabled": true,
        "fullAccess": false,
        "roots": ["~/.tangram"]
      }
    }
  }
}
```

Set `files.fullAccess=true` to disable path restrictions and allow file access to any local path.
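The root-based restriction can be sketched as a path check; the helper and parameter names are illustrative, not the project's actual API:

```typescript
import * as path from "node:path";
import * as os from "node:os";

// Expand a leading "~" to the user's home directory.
function expandHome(p: string): string {
  return p.startsWith("~") ? path.join(os.homedir(), p.slice(1)) : p;
}

// A path is allowed when fullAccess is true, or when it resolves to one of
// the configured roots or somewhere beneath them.
export function isPathAllowed(target: string, roots: string[], fullAccess: boolean): boolean {
  if (fullAccess) return true;
  const resolved = path.resolve(expandHome(target));
  return roots.some((root) => {
    const r = path.resolve(expandHome(root));
    return resolved === r || resolved.startsWith(r + path.sep);
  });
}
```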
Shell Tool (Optional)
Enable shell execution only when needed:
```json
{
  "agents": {
    "defaults": {
      "shell": {
        "enabled": true,
        "fullAccess": false,
        "roots": ["~/.tangram"],
        "defaultCwd": "~/.tangram/workspace",
        "timeoutMs": 120000,
        "maxOutputChars": 12000
      }
    }
  }
}
```

When enabled, the model can call a `bash` tool with argv-form commands (e.g. `['bash','-lc','ls -la']`), constrained to the allowed roots.
The `bash` tool supports an optional `background: true` to run asynchronously and return the PID immediately. In background mode, stdout/stderr are not captured by the tool.
Example tool args:
```json
{
  "command": ["bash", "-lc", "sleep 30"],
  "cwd": "~/.tangram/workspace",
  "timeoutMs": 120000,
  "background": true
}
```

Set `shell.fullAccess=true` to disable cwd root restrictions and allow any local path.
Heartbeat (Optional)
Heartbeat periodically reads HEARTBEAT.md and triggers a model run with that content.
```json
{
  "agents": {
    "defaults": {
      "heartbeat": {
        "enabled": true,
        "intervalSeconds": 300,
        "filePath": "~/.tangram/workspace/HEARTBEAT.md",
        "threadId": "heartbeat"
      }
    }
  }
}
```

Cron Scheduler
Cron scheduler runs due tasks and sends their payload to the model at the scheduled time.
```json
{
  "agents": {
    "defaults": {
      "cron": {
        "enabled": true,
        "tickSeconds": 15,
        "storePath": "~/.tangram/workspace/cron-tasks.json",
        "defaultThreadId": "cron"
      }
    }
  }
}
```

Model-facing cron tools:
- `cron_schedule` set the run time, repeat mode, and `callbackPrompt` (sent to the model when due, not directly to the user)
- `cron_schedule_local` set local-timezone schedules (e.g. daily 09:00 Asia/Shanghai) and `callbackPrompt`
- `cron_list` inspect pending tasks
- `cron_cancel` remove a task by id
Compatibility note:
- the old `message` field is still accepted for backward compatibility, but `callbackPrompt` is recommended
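A tick under this scheduler can be sketched as follows; the task field names are illustrative and are not the actual `cron-tasks.json` schema:

```typescript
// Illustrative task shape: a next-run timestamp plus an optional repeat interval.
interface CronTask {
  id: string;
  nextRunAt: number;        // epoch ms
  repeatEveryMs?: number;   // undefined => one-time task
  callbackPrompt: string;   // sent to the model when the task is due
}

// Return the tasks due at `now`, and the task list as it should look after the
// tick: repeating tasks are rescheduled, one-time tasks are dropped.
export function tick(tasks: CronTask[], now: number): { due: CronTask[]; remaining: CronTask[] } {
  const due = tasks.filter((t) => t.nextRunAt <= now);
  const remaining = tasks
    .filter((t) => t.nextRunAt > now)
    .concat(
      due
        .filter((t) => t.repeatEveryMs !== undefined)
        .map((t) => ({ ...t, nextRunAt: t.nextRunAt + t.repeatEveryMs! }))
    );
  return { due, remaining };
}
```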
Config
This project supports multiple provider instances. Example:
```json
{
  "providers": {
    "openai": {
      "type": "openai",
      "apiKey": "sk-...",
      "baseUrl": "https://api.openai.com/v1",
      "defaultModel": "gpt-4.1-mini"
    },
    "anthropic": {
      "type": "anthropic",
      "apiKey": "sk-ant-...",
      "baseUrl": "https://api.anthropic.com",
      "defaultModel": "claude-3-5-sonnet-latest"
    },
    "openai_chat": {
      "type": "openai-chat-completions",
      "apiKey": "sk-...",
      "baseUrl": "https://api.openai.com/v1",
      "defaultModel": "gpt-4.1-mini"
    },
    "local": {
      "type": "openai",
      "apiKey": "dummy",
      "baseUrl": "http://localhost:8000/v1",
      "defaultModel": "meta-llama/Llama-3.1-8B-Instruct"
    }
  },
  "agents": {
    "defaults": {
      "provider": "openai",
      "recursionLimit": 25,
      "temperature": 0.7,
      "systemPrompt": "You are a helpful assistant."
    }
  },
  "channels": {
    "telegram": {
      "enabled": true,
      "token": "123456:ABCDEF...",
      "allowFrom": []
    }
  }
}
```

`agents.defaults.recursionLimit` controls LangGraph recursion depth (default 25).
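For reference, the config maps to roughly this TypeScript shape, inferred from the examples in this README; the project's real types may differ:

```typescript
// Provider types documented in this README.
type ProviderType = "openai" | "openai-chat-completions" | "anthropic";

interface ProviderConfig {
  type: ProviderType;
  apiKey: string;
  baseUrl?: string;
  defaultModel?: string;
}

// Rough overall shape; optional fields follow the README's examples.
interface TangramConfig {
  providers: Record<string, ProviderConfig>;
  agents: {
    defaults: {
      provider: string;
      recursionLimit?: number;
      temperature?: number;
      systemPrompt?: string;
    };
  };
  channels: {
    telegram: { enabled: boolean; token: string; allowFrom: string[] };
  };
}
```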
Config lookup order:
1. `--config <path>`
2. `TANGRAM_CONFIG`
3. `~/.tangram/config.json`
4. `./config.json` (local fallback)
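The lookup order can be sketched as a first-existing-path resolution (function name is illustrative):

```typescript
import * as fs from "node:fs";
import * as path from "node:path";
import * as os from "node:os";

// Try the documented candidates in order: --config flag, TANGRAM_CONFIG,
// ~/.tangram/config.json, then ./config.json; return the first that exists.
export function resolveConfigPath(cliPath?: string, envPath?: string): string | undefined {
  const candidates = [
    cliPath,
    envPath,
    path.join(os.homedir(), ".tangram", "config.json"),
    path.resolve("config.json"),
  ].filter((p): p is string => !!p);
  return candidates.find((p) => fs.existsSync(p));
}
```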
