
@riktar/slang

v0.7.5


SLANG — Super Language for Agent Negotiation & Governance



The entire language in 30 seconds.

flow "research" {
  agent Researcher {
    tools: [web_search]
    stake gather(topic: "quantum computing") -> @Analyst
  }
  agent Analyst {
    await data <- @Researcher
    stake analyze(data) -> @out
    commit
  }
  converge when: all_committed
}

That's it. Three primitives. Your PM can read it. Your analyst can edit it. Your LLM can run it. No Python, no TypeScript — just intent.

Everything else follows from these:

| Primitive | What it does |
|-----------|--------------|
| stake | Produce content and send it to another agent (or execute locally) |
| await | Block until another agent sends you data |
| commit | Accept the result and stop |

Plus control flow: when/else conditionals, let/set variables, repeat until loops.
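As a sketch of how these constructs might combine, here is a hypothetical flow using the primitives and control-flow keywords named above. The exact grammar (block bodies for repeat/when, the until placement, the comparison operators) is illustrative, not authoritative; see the bundled examples/ files for the real syntax.

```slang
flow "retry-search" {
  agent Researcher {
    tools: [web_search]
    let tries = 0
    repeat {
      stake gather(topic: "quantum computing") -> @out
      set tries = tries + 1
    } until tries >= 3
    when tries >= 3 {
      stake report(status: "done") -> @out
    } else {
      stake report(status: "partial") -> @out
    }
    commit
  }
  converge when: all_committed
}
```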


Two ways to use SLANG

Here's the thing: the same .slang file runs two different ways. Paste it into ChatGPT? Works. CLI? Works. API? Works. You pick.

🧠 Zero-Setup Mode

No install, no API key, no runtime.

  1. Copy the system prompt
  2. Paste into ChatGPT, Claude, Gemini (pick any LLM)
  3. Paste your .slang flow
  4. Done

The LLM becomes your runtime. Perfect for non-developers, quick prototyping, or when you just want it to work — no install, no code, no friction.

⚡ CLI / API / MCP Mode

Full runtime. Real tools. 300+ models via OpenRouter. Parallel execution.

npm install -g @riktar/slang
slang init my-project && cd my-project
slang run hello.slang

Checkpoint and resume. Deadlock detection. Structured output. Everything you need for real workflows.

| Feature | Zero-Setup | CLI / API / MCP |
|---------|:---:|:---:|
| Parse & execute flows | ✅ | ✅ |
| All primitives (stake, await, commit, escalate) | ✅ | ✅ |
| Conditionals (when / if / else) | ✅ | ✅ |
| Variables (let / set) | ✅ | ✅ |
| Loops (repeat until) | ✅ | ✅ |
| deliver: post-convergence hooks | ❌ | ✅ real handlers |
| model: multi-provider routing | ❌ single LLM | ✅ 300+ models |
| tools: functional execution | ❌ simulated | ✅ real handlers |
| Parallel agents | ❌ sequential | ✅ Promise.all |
| retry: with exponential backoff | ❌ | ✅ |
| output: structured contracts | ✅ best-effort | ✅ enforced |
| Checkpoint & resume | ❌ | ✅ |
| Static analysis & deadlock detection | ❌ | ✅ |
| IDE support (LSP, syntax highlighting) | ❌ | ✅ |
| Web playground | ❌ | ✅ |

Start with zero-setup to prototype. Move to CLI or API when you're ready to ship. Same file both ways.
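To make the CLI-only features concrete, here is a hypothetical flow using the model:, tools:, and retry: attributes from the feature list. Only tools: appears in the earlier examples; the model: and retry: value shapes, and the # comment syntax, are assumptions for illustration.

```slang
flow "resilient-research" {
  agent Researcher {
    model: "anthropic/claude-3.5-sonnet"   # multi-provider routing (CLI/API only)
    tools: [web_search]                    # real handlers in CLI mode, simulated in zero-setup
    retry: 3                               # exponential backoff on failure (CLI/API only)
    stake gather(topic: "quantum computing") -> @out
    commit
  }
  converge when: all_committed
}
```

In zero-setup mode, a flow like this would still parse, but the model:, tools:, and retry: attributes would be simulated or ignored by the LLM acting as the runtime.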


Quick Start

1. Install

npm install -g @riktar/slang

2. Scaffold a project

slang init my-project
cd my-project

This creates hello.slang, research.slang, tools.js, and .env.example. Everything you need.

3. Configure your API key

cp .env.example .env    # edit with your API key (SLANG loads it automatically)

4. Run your first flow

slang run hello.slang                        # echo adapter (no API key needed)
slang run hello.slang --adapter openrouter   # uses OPENROUTER_API_KEY from .env

5. Open the playground

slang playground
# Opens http://localhost:5174 - write, visualize, run flows in the browser

Why SLANG?

It's not a framework. It's a shared language for your team.

SLANG stands for Super Language for Agent Negotiation & Governance.

LangChain, CrewAI, AutoGen — they're SDKs. Python/TypeScript libraries. Only developers can use them. Everyone else has to wait, ask, or guess what the workflow actually does.

SLANG is a language anyone can read and write. Your PM defines the workflow. Your analyst tweaks the logic. Your developer hooks it up to real tools. Everyone works on the same .slang file.

| SQL | SLANG |
|-----|-------|
| Doesn't replace C/Java | Doesn't replace Python/TypeScript |
| Non-devs write queries | Non-devs write workflows |
| Readable by the whole team | Same — anyone reads and edits it |
| LLMs generate it | LLMs generate it |
| Not complete. That's the point | Not general-purpose. That's the point |

No code needed.

Describe what your agents should do. SLANG reads like plain English:

flow "hybrid-analysis" {
  agent Researcher {
    tools: [web_search]
    stake gather(topic: "quantum computing") -> @Analyst
    commit
  }
  agent Analyst {
    await data <- @Researcher
    stake analyze(data) -> @out
    commit
  }
  converge when: all_committed
}

Paste it into ChatGPT and it runs. Use the CLI for production with 300+ models via OpenRouter. Same file, zero vendor lock-in.

The .slang file is the documentation.

Read this flow out loud:

"The Researcher stakes gather on the topic 'quantum computing' and sends it to the Analyst. The Analyst awaits the data, analyzes it, and sends the output to the user. The flow stops when both the Researcher and the Analyst have committed their work."

No diagrams, no comments, no docs needed. Show the .slang file in a meeting and everyone understands what the AI workflow does.

Who is SLANG for?

| Audience | Why |
|----------|-----|
| PMs & business people | Write AI workflows without code. Describe what agents should do, paste into ChatGPT, and it runs. Your automation, your way. |
| Analysts & ops | Edit and validate workflows yourself. No waiting for engineering. Review the logic, tweak parameters, run it. |
| Developers | 10 lines, 60 seconds, it runs. Skip the boilerplate, hook up real tools when you need them. |
| Teams | One .slang file everyone can read. The PM writes it, the dev ships it, the analyst audits it. Same source of truth. |

SDK comparison

| | SDK (LangChain, CrewAI) | SLANG |
|---|---|---|
| Who can use it | Developers only | Anyone on the team |
| Time to first workflow | Hours | 60 seconds |
| Who reads / reviews it | Developers only | PMs, analysts, developers, LLMs |
| LLMs can generate it | No (boilerplate is messy) | Yes (text-to-SLANG like text-to-SQL) |
| Runtime needed | Yes, always | Optional — paste into ChatGPT and it works |
| Docs | Separate files | The .slang file is the documentation |

SLANG isn't trying to replace SDKs, just as SQL didn't replace Java. It's a different category: workflows that everyone on the team can own.


IDE Support

See docs/IDE.md for VS Code, Neovim, Vim, Sublime, JetBrains, and other LSP-compatible editors.


Playground

See docs/PLAYGROUND.md for web editor features and usage.


CLI

See docs/CLI.md for all commands, options, and environment variables.


API

See docs/API.md for programmatic usage, adapters, tools, and checkpointing.


MCP Server

See docs/MCP.md for Model Context Protocol integration with Claude Desktop.


How it works

Your .slang file goes through these stages:

Source → Lexer → Parser → AST → Resolver → Graph → Runtime → Result

| Stage | What happens |
|-------|--------------|
| Lexer | Breaks source into tokens (with line/column info) |
| Parser | Recursive-descent, builds typed AST, recovers from errors |
| Resolver | Builds dependency graph, checks for deadlocks |
| Runtime | Schedules agents, mailbox, parallel dispatch, tools |
| Adapters | Connect to LLMs (OpenAI, Anthropic, OpenRouter, etc) |
| LSP | Language Server: diagnostics, completion, go-to-def, hover |
| Playground | Web editor (React + Vite) with visualization, tests |

Examples

Examples are in the examples/ folder. Each demonstrates different patterns and features:

slang run examples/hello.slang                                     # Minimal flow
slang run examples/research.slang --adapter openrouter --tools examples/tools.js
slang check examples/broadcast.slang                               # Dependency analysis

See each .slang file for inline documentation.

Contributing

See CONTRIBUTING.md for guidelines.