
tangram-ai

v0.1.13


Tangram (MVP)

Minimal Telegram chatbot built with TypeScript + LangGraph, with multi-provider config and OpenAI Responses API as the default provider.

Quick Start

  1. Install deps

npm i

  2. Create config

mkdir -p ~/.tangram && cp config.example.json ~/.tangram/config.json

Edit ~/.tangram/config.json and set:

  • channels.telegram.token
  • providers.<yourProviderKey>.apiKey
  • optionally providers.<yourProviderKey>.baseUrl

Supported providers:

  • openai (Responses API)
  • openai-chat-completions (Chat Completions API)
  • anthropic (Messages API, supports custom baseUrl)

  3. Run

npm run gateway -- --verbose
npm run onboard
npm run gateway -- status

Deploy & Upgrade

Deployment bootstrap is part of onboard.

npm run onboard

During onboarding, the wizard can optionally install/start a user-level systemd service.

Gateway service operations:

npm run gateway -- status
npm run gateway -- stop
npm run gateway -- restart

Service stop/restart latency notes:

  • the gateway exits immediately on SIGTERM/SIGINT after local cleanup
  • the generated user systemd unit sets TimeoutStopSec=20 to avoid long stop hangs
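
For reference, the generated user unit is shaped roughly like this (unit name, paths, and most directives here are assumptions; the onboarding wizard writes the actual file):

```ini
[Unit]
Description=Tangram gateway (user service)

[Service]
; The gateway exits promptly on SIGTERM after local cleanup
ExecStart=/usr/bin/env npm run gateway
; Cap shutdown wait to avoid long stop hangs (see notes above)
TimeoutStopSec=20
Restart=on-failure

[Install]
WantedBy=default.target
```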

Upgrade and rollback:

npm run upgrade -- --dry-run
npm run upgrade -- --version v0.0.1
npm run rollback -- --to v0.0.1

Notes:

  • upgrade installs globally (npm install -g tangram-ai@...) and restarts the service automatically
  • use --no-restart to skip the restart
  • if systemd --user is unavailable, run in foreground mode: npm run gateway -- --verbose

Release Workflow

This repo includes a baseline release pipeline:

  • CI workflow: .github/workflows/ci.yml
    • runs on push/PR
    • executes npm ci, npm run lint, npm test, npm run build
  • Release workflow: .github/workflows/release.yml
    • triggers on tag push v*
    • builds project and uploads tarball asset to GitHub Release
  • npm publish workflow: .github/workflows/npm-publish.yml
    • triggers on tag push v*
    • executes npm ci, npm run build, npm publish

One-time setup for npm CI publish

  1. Configure npm Trusted Publishing for this GitHub repository
  2. Ensure workflow permission includes id-token: write (already configured)
  3. No NPM_TOKEN secret is required
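
Putting the setup above together, the npm publish workflow is shaped roughly like this (step details and action versions are assumptions; see .github/workflows/npm-publish.yml for the real definition):

```yaml
name: npm-publish
on:
  push:
    tags: ["v*"]          # triggers on version tag push
permissions:
  id-token: write          # required for npm Trusted Publishing (OIDC)
  contents: read
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          registry-url: https://registry.npmjs.org
      - run: npm ci
      - run: npm run build
      - run: npm publish   # no NPM_TOKEN needed with Trusted Publishing
```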

After this setup, pushing a version tag (for example v0.0.2) will publish tangram-ai to npm automatically.

Local release commands

  • Create release commit + tag (pass through any npm version target):
npm run release -- 0.1.0
npm run release -- patch
npm run release -- minor
npm run release -- major

After npm run release -- <target> completes, push branch and tag:

git push origin master
git push origin vX.Y.Z

Pushing the tag triggers GitHub Actions release creation automatically.

Onboard Wizard

Run npm run onboard for an interactive setup that:

  • asks for provider/API/Telegram settings
  • applies developer-default permissions (shell enabled but restricted)
  • initializes ~/.tangram directories and baseline files
  • initializes runtime directories under ~/.tangram/app
  • can install/start user-level systemd service
  • handles existing files one by one (overwrite / skip / backup then overwrite)

Memory (Shared)

Shared memory lives under the configured workspace directory (default: ~/.tangram/workspace):

  • Long-term memory: memory/memory.md
  • Daily notes: memory/YYYY-MM-DD.md

Telegram commands:

  • /stop stop the currently running request in this chat
  • /memory show the memory context
  • /remember <text> append to today's daily memory
  • /remember_long <text> append to long-term memory
  • /new start a new chat session (clears stored session history for the current chat)
  • /whoami show the current Telegram user/chat identity
  • /skill list installed skills currently discovered by the runtime

Telegram UX behaviors:

  • the bot sends a typing action while processing
  • during tool-calling loops, progress is shown via a single draft-style status message updated in place; the call counter (xN) and explanation lines are appended to the same message (⏳ calling tools to handle your request… xN / 💬 ...), controlled by channels.telegram.progressUpdates (default true)

Session Persistence

The gateway persists per-thread conversation sessions to JSONL files, so restarts can restore recent context.

  • Default directory: ~/.tangram/workspace/sessions
  • File format: one JSON record per line (user/assistant only)
  • Restore policy: load latest restoreMessages records for the current threadId before each invoke

Config:

{
  "agents": {
    "defaults": {
      "session": {
        "enabled": true,
        "dir": "~/.tangram/workspace/sessions",
        "restoreMessages": 100,
        "persistAssistantEmpty": false
      }
    }
  }
}

Memory Tools (LLM)

The agent exposes function tools to the model (via OpenAI Responses API):

  • memory_search search shared memory files
  • file_read read local skill/content files
  • file_write write local files
  • file_edit edit files by targeted text replacement
  • bash execute CLI commands when agents.defaults.shell.enabled=true
  • cron_schedule schedule one-time/repeating callbacks
  • cron_list list scheduled callbacks
  • cron_cancel cancel scheduled callbacks

Memory writes should be done via file_write / file_edit directly against the memory files in the workspace.

The LangGraph workflow also runs a post-reply "memory reflection" node that can automatically summarize the latest turn into memory using a strict JSON format prompt.

Skills Metadata

The runtime discovers local skills and injects a compact skills list into the model instructions, so the model can decide which skill to open/use.

By default it scans:

  • ~/.tangram/skills

You can customize via agents.defaults.skills:

{
  "agents": {
    "defaults": {
      "skills": {
        "enabled": true,
        "roots": [
          "~/.tangram/skills"
        ],
        "maxSkills": 40,
        "hotReload": {
          "enabled": true,
          "debounceMs": 800,
          "logDiff": true
        }
      }
    }
  }
}

Hot reload behavior:

  • skill directory/file changes are detected with filesystem watchers
  • reload is debounced (hotReload.debounceMs) to avoid noisy rapid rescans
  • updates apply globally to the next LLM execution without restarting the gateway
  • when hotReload.logDiff=true, gateway logs added/removed/changed skills
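
The debounce step can be sketched as follows (illustrative; the actual watcher wiring lives in the gateway):

```typescript
// Collapse bursts of filesystem events into a single rescan after
// `debounceMs` of quiet, mirroring hotReload.debounceMs.
function debounce(fn: () => void, debounceMs: number): () => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return () => {
    if (timer) clearTimeout(timer);
    timer = setTimeout(fn, debounceMs);
  };
}

// Usage sketch: fs.watch(root, rescan) where rescan = debounce(scanSkills, 800)
```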

file_read / file_write / file_edit use agents.defaults.files config for access control.

{
  "agents": {
    "defaults": {
      "files": {
        "enabled": true,
        "fullAccess": false,
        "roots": ["~/.tangram"]
      }
    }
  }
}

Set files.fullAccess=true to disable path restrictions and allow file access to any local path.
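
The root restriction can be thought of as a prefix check after path normalization (an illustrative sketch, not the actual implementation; ~ expansion is shown for completeness):

```typescript
import * as os from "os";
import * as path from "path";

// Expand a leading ~ and normalize to an absolute path.
function expand(p: string): string {
  const expanded = p.startsWith("~") ? path.join(os.homedir(), p.slice(1)) : p;
  return path.resolve(expanded);
}

// A path is allowed when fullAccess is set, or when it falls under one
// of the configured roots (files.roots / shell.roots).
function isPathAllowed(target: string, roots: string[], fullAccess: boolean): boolean {
  if (fullAccess) return true;
  const resolved = expand(target);
  return roots.some((root) => {
    const base = expand(root);
    return resolved === base || resolved.startsWith(base + path.sep);
  });
}
```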

Shell Tool (Optional)

Enable shell execution only when needed:

{
  "agents": {
    "defaults": {
      "shell": {
        "enabled": true,
        "fullAccess": false,
        "roots": ["~/.tangram"],
        "defaultCwd": "~/.tangram/workspace",
        "timeoutMs": 120000,
        "maxOutputChars": 12000
      }
    }
  }
}

When enabled, the model can call a bash tool with argv-form commands (e.g. ['bash','-lc','ls -la']), constrained to the allowed roots.

The bash tool supports an optional background: true to run asynchronously and return the PID immediately. In background mode, stdout/stderr are not captured by the tool.

Example tool args:

{
  "command": ["bash", "-lc", "sleep 30"],
  "cwd": "~/.tangram/workspace",
  "timeoutMs": 120000,
  "background": true
}

Set shell.fullAccess=true to disable cwd root restrictions and allow any local path.

Heartbeat (Optional)

Heartbeat periodically reads HEARTBEAT.md and triggers a model run with that content.

{
  "agents": {
    "defaults": {
      "heartbeat": {
        "enabled": true,
        "intervalSeconds": 300,
        "filePath": "~/.tangram/workspace/HEARTBEAT.md",
        "threadId": "heartbeat"
      }
    }
  }
}

Cron Scheduler

The cron scheduler runs due tasks and sends their payloads to the model at the scheduled time.

{
  "agents": {
    "defaults": {
      "cron": {
        "enabled": true,
        "tickSeconds": 15,
        "storePath": "~/.tangram/workspace/cron-tasks.json",
        "defaultThreadId": "cron"
      }
    }
  }
}

Model-facing cron tools:

  • cron_schedule set the run time, repeat mode, and callbackPrompt (sent to the model when due, not directly to the user)
  • cron_schedule_local set local-timezone schedules (e.g. daily 09:00 Asia/Shanghai) and a callbackPrompt
  • cron_list inspect pending tasks
  • cron_cancel remove a task by id
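
An illustrative cron_schedule_local call might carry arguments like the following (field names other than callbackPrompt are assumptions; check the tool schema actually exposed to the model):

```json
{
  "time": "09:00",
  "timezone": "Asia/Shanghai",
  "repeat": "daily",
  "callbackPrompt": "Summarize yesterday's daily memory and post the highlights to this chat."
}
```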

Compatibility note:

  • old message field is still accepted for backward compatibility, but callbackPrompt is recommended

Config

This project supports multiple provider instances. Example:

{
  "providers": {
    "openai": {
      "type": "openai",
      "apiKey": "sk-...",
      "baseUrl": "https://api.openai.com/v1",
      "defaultModel": "gpt-4.1-mini"
    },
    "anthropic": {
      "type": "anthropic",
      "apiKey": "sk-ant-...",
      "baseUrl": "https://api.anthropic.com",
      "defaultModel": "claude-3-5-sonnet-latest"
    },
    "openai_chat": {
      "type": "openai-chat-completions",
      "apiKey": "sk-...",
      "baseUrl": "https://api.openai.com/v1",
      "defaultModel": "gpt-4.1-mini"
    },
    "local": {
      "type": "openai",
      "apiKey": "dummy",
      "baseUrl": "http://localhost:8000/v1",
      "defaultModel": "meta-llama/Llama-3.1-8B-Instruct"
    }
  },
  "agents": {
    "defaults": {
      "provider": "openai",
      "recursionLimit": 25,
      "temperature": 0.7,
      "systemPrompt": "You are a helpful assistant."
    }
  },
  "channels": {
    "telegram": {
      "enabled": true,
      "token": "123456:ABCDEF...",
      "allowFrom": []
    }
  }
}

agents.defaults.recursionLimit controls LangGraph recursion depth (default 25).

Config lookup order:

  • --config <path>
  • TANGRAM_CONFIG
  • ~/.tangram/config.json
  • ./config.json (local fallback)
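
The lookup order above amounts to taking the first candidate that exists (illustrative sketch; the exists check is injected so that only the ordering is asserted):

```typescript
// Resolve the config path: --config flag, then TANGRAM_CONFIG, then the
// home config, then the local fallback. The first existing candidate wins.
function resolveConfigPath(
  cliPath: string | undefined,
  envPath: string | undefined,
  exists: (p: string) => boolean
): string | undefined {
  const candidates = [
    cliPath,
    envPath,
    "~/.tangram/config.json",
    "./config.json",
  ].filter((p): p is string => !!p);
  return candidates.find(exists);
}
```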