
@rong/agentscript

v0.1.7

A DSL for explicit, scoped, auditable LLM context

AgentScript

Agent context as code. use declares what the model can see, with optional labels for prompt sections. generate defines the only LLM call site and optional output shape. Zero runtime dependencies. TypeScript-powered.

use scratch.summary max 2k as observations
generate({
    input: "Answer from observations"
}) -> {
    ok boolean
    text string
}

License: MIT · Zero dependencies · Node >= 22.5

Chinese version: README-CN

Install

npm install -g @rong/agentscript

Then run the CLI:

agentscript --help

Or run without installing:

npx @rong/agentscript run recipes/code-review.as --input '{"path":"src"}'

Quick start

# Real model call by default
agentscript run recipes/summarize-file.as --input '{"path":"README.md"}'

# Mock override for deterministic local checks
agentscript run recipes/summarize-file.as --input '{"path":"README.md"}' --mock

# Dry-run inspection without model calls
agentscript run recipes/summarize-file.as --input '{"path":"README.md"}' --dry-run

# Audit trace
agentscript run recipes/summarize-file.as --input '{"path":"README.md"}' --trace

The recipes/summarize-file.as recipe reads a local file, includes it in the LLM context, and returns a structured summary:

-- recipes/summarize-file.as
import llm Qwen from "ollama://localhost:11434/qwen3.6"
import tool File from "file://workspace"

main agent FileSummarizer {
    model Qwen
    role "Technical Writer"
    description "Read one local file and produce a useful structured summary."

    main func(input { path string }) {
        content = File.read({
            path: input.path
        })
        use input.path as "source path"
        use content max 8k as "file content"

        generate({
            input: "Summarize the file for a busy teammate",
            max_output: 1000
        }) -> {
            title string
            summary string
            key_points list[string]
            action_items list[string]
        }
    }
}

Expected output (with mock LLM):

{
  "value": {
    "title": "",
    "summary": "",
    "key_points": [],
    "action_items": []
  },
  "trace": [ ... ]
}

Trace makes the prompt inputs auditable:

use "source path"       value="README.md"
use "file content"      budget=8k clipped=true
generate                input="Summarize the file for a busy teammate"
schema                  title, summary, key_points, action_items
validation              ok

Use --mock when you want deterministic local output. Without --mock or --dry-run, AgentScript calls the configured real model.

The optional block after generate is an output schema, not ordinary object construction.
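
To make the distinction concrete, here is a minimal sketch (the variable and field names are illustrative, not from any shipped recipe), assuming, as stated above, that the output shape is optional:

```
-- Without a schema: generate returns the model's output directly
note = generate({ input: "One-sentence status note from the observations" })

-- With a schema: the block after -> declares the output contract the call
-- must satisfy; it does not construct an object
generate({ input: "Status report from the observations" }) -> {
    ok boolean
    text string
}
```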

Examples, tutorials, and recipes

  • examples/ contains minimal examples. Each file demonstrates one language feature or agent pattern.
  • tutorials/ contains longer walkthrough programs for learning multi-step agent patterns end to end.
  • recipes/ contains practical workflows you can copy and adapt, such as repo review, code review, changelog drafting, file summarization, document translation, API extraction, and research briefs.

A suggested learning path:

  • examples/structured-generate.as to learn the syntax
  • examples/arithmetic.as for operators
  • examples/plan-execute.as for parallel for
  • tutorials/ for pattern walkthroughs
  • recipes/repo-review.as when you want a realistic, auditable repository workflow

recipes/repo-review.as shows the core difference: tool results are not automatically prompt context. The recipe explicitly selects only the file tree, TODO/FIXME findings, package metadata, and CI configuration before asking for structured release-readiness output:

use "file tree"          budget=8k
use "todo findings"      budget=4k
use "package metadata"   budget=4k
use "ci configuration"   budget=4k
generate                 blockers, risks, quick_wins, next_steps

What problem it solves

LLMs are stateless by nature. Each call is a fresh start. To give an agent continuity of thought, every input must be carefully assembled — what researchers and practitioners call context engineering.

After building agents with Python and TypeScript, the author kept running into the same problem: prompt context management. What data actually reaches the LLM? Where does one agent's context end and another's begin? How do you audit what the model saw?

What makes AgentScript different?

AgentScript is not:

  • a prompt template
  • a YAML config format
  • a general-purpose agent framework

It is a small language for one thing:

making LLM prompt context explicit, scoped, typed, traceable, and compilable.

It gives you two things that general-purpose languages don't: a first-class use keyword that declares which data enters the LLM prompt and what role it plays (via an as label), and a first-class generate expression that defines an LLM call with an optional output contract. Everything else in the language (variables, functions, agents, imports, loops) exists to support this core workflow. Scopes enforce context boundaries naturally: what's used in one function stays there, and child scopes inherit but never leak upward. Functions can also return their final top-level expression directly, which keeps typical LLM workflows concise.
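
A minimal sketch of the scope rule, with hypothetical function and field names: each function declares its own use context, and nothing used inside one function is visible to its caller or siblings.

```
func triage(ticket) {
    -- "ticket body" enters the prompt only for the generate call inside triage
    use ticket.body max 2k as "ticket body"
    generate({ input: "Classify the ticket" }) -> { label string }
}

func respond(ticket, label) {
    -- triage's context is not visible here; respond declares its own
    use ticket.body max 2k as "ticket body"
    use label as "triage label"
    generate({ input: "Draft a reply" }) -> { text string }
}
```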

How it works

graph LR
    A[".as source"] --> B["Parser"]
    B --> C["AST"]
    C --> D["Semantic Analyzer"]
    D --> E["Runtime"]
    E --> F["LLM Provider<br/>(OpenAI / Anthropic / Ollama)"]
    E --> G["Tools<br/>(Find / Grep / File / HTTP / ...)"]
    E --> H["Memory<br/>(JSONL / SQLite)"]
    E --> I["Trace Output"]

Status

AgentScript is experimental.

Currently implemented:

  • parser
  • semantic checker
  • mock runtime
  • OpenAI / Anthropic / Ollama LLM adapters
  • file and environment tools
  • JSONL and SQLite memory backends
  • trace output

Planned:

  • stable IR
  • richer diagnostics
  • VS Code syntax support
  • package publishing hardening

Agent patterns as composable primitives

AgentScript doesn't hardcode agent patterns as keywords. You compose them from the same primitives:

| Pattern | Tutorial | What it demonstrates |
|---------|----------|----------------------|
| ReAct | tutorials/react.as | Reason → Act → Observe loop with explicit context |
| Plan-and-Execute | tutorials/plan-execute.as | Generate plan, execute steps, verify, re-plan on failure |
| Reflection / Self-Improvement | tutorials/self-improve.as | Query past lessons → generate → reflect → persist new lessons |
| Multi-Agent | tutorials/plan-execute.as | Independent agents with isolated context boundaries |

Every pattern is explicit — which data enters the prompt, which tools each agent can use, and which output shape each LLM call must satisfy when one is declared.

Language at a glance

import llm Qwen from "ollama://localhost:11434/qwen3.6"
import tool Search from "mcp://tools/search"
import memory Lessons from "file://./.agentscript/lessons.jsonl"

main agent ResearchAgent {
    model Qwen
    role "Senior Researcher"
    description "Answer questions with search and structured reasoning."

    main func(input {
        question string
    }) {
        use input.question as "user question"

        scratch = []
        use scratch.summary max 2k as observations

        done = false
        loop until done max 6 {
            thought = reason(input.question, scratch)
            obs = Search.search(thought.focus)
            scratch.add(obs)
            done = enough(input.question, scratch)
        }

        answer(input.question, scratch)
    }

    func answer(question, scratch) {
        use question as "user question"
        use scratch.summary max 2k as observations
        generate({
            input: "Answer using only the observations"
        }) -> {
            ok boolean
            text string
            error string
        }
    }
}

Key ideas

  1. use is explicit context — nothing enters the LLM prompt unless used; as label names the context section
  2. generate is the only LLM call site — with a required input instruction and optional output shape
  3. Final expression return keeps flows concise — a function returns its final top-level expression
  4. Scope is context boundary — functions, agents, and blocks isolate prompt visibility
  5. Tools, memory, and files are imported resources — with auditable access
  6. Trace is built in — every generate and use is recorded for debugging

Why not just Python or TypeScript?

| | Python / TypeScript | AgentScript |
|---|---------------------|-------------|
| Context management | Implicit (string concatenation, array append) | Explicit (use declaration, optional as label) |
| LLM call site | Anywhere in the code | One generate expression |
| Context isolation | Manual discipline | Scope-inherited, auto-isolated |
| Trace / audit | External tooling needed | Built-in, per-call |

Python and TypeScript are excellent general-purpose tools, but they have no concept of "prompt context" as a language primitive. Every agent project reinvents the same patterns. AgentScript bakes them in.
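
For contrast, the "implicit" side of that comparison typically looks like the following in plain Python (a hypothetical sketch, not code from any particular framework): prompt context is assembled by ordinary string concatenation, so nothing in the language records what reached the model or under which label.

```python
# Implicit context assembly in plain Python: the language has no notion of
# "prompt context", so selection, budgeting, and labeling are all manual.
def build_prompt(question: str, observations: list[str]) -> str:
    # Nothing enforces which data enters the prompt or caps its size;
    # auditing what the model actually saw requires external logging.
    context = "\n".join(observations)
    return f"Observations:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("Is the release ready?", ["CI is green", "2 TODOs remain"])
print(prompt)
```

Each of those manual steps corresponds to a checked construct in AgentScript: use selects and labels the data, max budgets it, and the trace records it.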

CLI

agentscript run recipes/code-review.as --input '{"path":"src"}'
agentscript run recipes/code-review.as --input '{"path":"src"}' --mock
agentscript run recipes/code-review.as --input '{"path":"src"}' --dry-run
agentscript run recipes/code-review.as --input '{"path":"src"}' --trace
agentscript recipes/code-review.as --check
agentscript examples/arithmetic.as --parse
agentscript run recipes/code-review.as --quiet

| Option | Description |
|--------|-------------|
| --input '<json>' | JSON input for the entry function |
| --input-file <path> | Read input from a JSON file |
| --agent <name> | Select a specific entry agent |
| --function <name> | Select a specific entry function |
| --check | Parse + semantic analysis (no execution) |
| --parse | Parse and output AST as JSON |
| --mock | Use deterministic mock providers instead of real model calls |
| --dry-run | Build prompts and trace without model calls |
| --concurrency <n> | Set the runtime concurrency limit for parallel for |
| --trace <file> | Write execution trace to file |
| --trace | Print human-readable trace |
| --trace pretty | Backward-compatible alias for --trace |
| --verbose | Print detailed trace |
| --quiet | Output only the final value |

Documentation

| Language | Links |
|----------|-------|
| English | Language Reference · Context Engineering · use ... as ... · generate · parallel for · Final Expression Return · Design History |
| Chinese | README-CN · Language Reference · Context Engineering · use ... as ... · generate · parallel for · Final Expression Return |

Design principles

  • Context is explicit: ordinary variables, tool results, memory records, and trace events never enter prompts unless selected with use.
  • Scope controls variable lifetime, context inheritance, and prompt exposure.
  • LLM, tool, file, agent, and memory imports are runtime capabilities with explicit boundaries.
  • Pattern names such as planner, executor, verifier, reflect, improve, and evolve are ordinary identifiers.
  • Trace is for debugging and audit; it is not prompt context.

Contributing

See CONTRIBUTING.md.

Development

npm run typecheck
npm test
npm run build

Zero runtime dependencies. Built with TypeScript.

License

MIT