
@jancellor/ask

v1.3.0

A minimal coding agent CLI.

Ask

A minimal AI agent.

Ask screenshot

Introduction

Ask is a minimal but fully functional AI agent for coding and general tasks. It has exactly one tool: shell execution.

  • Code editing happens through shell execution and tools like cat, sd, rg, and fd.
  • Supports AGENTS.md and agent skills in .agents directories.
  • Batch mode is first class, e.g. cat code.js | ask "explain this".
  • Saves sessions to ~/.ask/sessions and supports resuming and forking.
  • The agent knows where sessions are stored and what the format is.
  • Background tasks use plain & plus system-prompt guidance (.pid files, output redirection).
  • Subagent patterns are therefore just self-invocation, using background processes and reflective session inspection where appropriate.
  • Interactive mode (TUI) is basic but pretty-prints markdown and code.
  • The config system supports providers, models, and variants, and passes merged JSON options directly to the AI SDK.
  • Runs with full access; it needs an external sandbox.
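The background-task convention above can be sketched in plain shell. This is an illustration of the pattern, not code shipped with Ask; the file paths are made up for the example:

```shell
# Launch a long-running command in the background, recording its PID
# and redirecting its output so a later turn can inspect both.
mkdir -p /tmp/ask-demo
sleep 2 > /tmp/ask-demo/task.out 2>&1 &
echo $! > /tmp/ask-demo/task.pid

# Later (or in a follow-up agent turn), check whether it is still running:
kill -0 "$(cat /tmp/ask-demo/task.pid)" && echo "still running"
```

Because the agent's only tool is the shell, this is all a "background task" needs to be: the system prompt just has to describe the convention.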

Setup

Install from npm:

npm install -g @jancellor/ask

Or build from source:

git clone https://github.com/jancellor/ask.git
cd ask
npm install
npm run build
npm link

Configure a provider:

ask --config --provider anthropic --model claude-opus-4-6

That saves the selection as the current config and creates ~/.config/ask/config.json, e.g.:

{
  "currentProvider": "anthropic",
  "providers": {
    "anthropic": {
      "currentModel": "claude-opus-4-6"
    }
  }
}

Store secrets separately in ~/.config/ask/config.secrets.json:

{
  "anthropic": {
    "apiKey": "your-api-key"
  }
}

For OpenAI-compatible endpoints, configure the provider explicitly:

{
  "currentProvider": "openrouter",
  "providers": {
    "openrouter": {
      "sdkProvider": "openai-compatible",
      "providerOptions": {
        "name": "openrouter",
        "baseURL": "https://openrouter.ai/api/v1"
      },
      "currentModel": "anthropic/claude-sonnet-4.6"
    }
  }
}

Ask uses the AI SDK, and this config is designed to map directly onto that runtime model. You select a configured provider, then a model within that provider, and optionally a variant within that model. A configured provider can also override sdkProvider, so one named config entry can target a different SDK provider family, such as openai-compatible.

providerOptions live on the provider and are passed to the SDK provider factory, while auth stays separate in config.secrets.json. generateOptions can be set globally and at the provider, model, and variant levels; at runtime they are merged in that order and passed through as the options object for generateText.
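The merge order can be illustrated with jq, whose `*` operator merges objects recursively with the right-hand side winning on conflicts. The option values here are made up for the example; only the global → provider → variant ordering is the point:

```shell
# generateOptions at each level; later (more specific) levels win.
global='{"temperature":0.7,"maxOutputTokens":1024}'
provider='{"temperature":0.5}'
variant='{"maxOutputTokens":4096}'

# jq -s slurps the three objects; * merges left-to-right.
echo "$global $provider $variant" | jq -s '.[0] * .[1] * .[2]'
# The provider's temperature and the variant's maxOutputTokens survive.
```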

For example, you can add a Claude reasoning-effort variant with per-variant Anthropic options:

{
  "currentProvider": "anthropic",
  "providers": {
    "anthropic": {
      "currentModel": "claude-opus-4-6",
      "models": {
        "claude-opus-4-6": {
          "currentVariant": "balanced",
          "variants": {
            "balanced": {
              "generateOptions": {
                "providerOptions": {
                  "anthropic": {
                    "effort": "medium"
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}

Then use ask -v balanced to select that variant for a run.

Use ask -c to print the resolved config. If you pass -p, -m, or -v with -c, those values are saved as the new current selection. Without -c, they apply only to the current run.

Run:

ask -c                       # Show current resolved config
ask -c -p openai -m gpt-5    # Update saved provider/model
ask -c -v                    # Clear the saved variant
ask                          # Interactive mode
ask "refactor"               # Batch mode (single positional arg)
cat file.ts | ask "explain"  # Pipe context, ask a question
ask --resume                 # Resume most recent session (interactive)
ask --resume <uuid>          # Resume a specific session
ask --resume -- "refactor"   # Resume most recent session in batch mode
ask --fork                   # Fork most recent session into a new session (interactive)
ask --fork -- "try this"     # Fork most recent session in batch mode
ask --resume <uuid> --fork   # Fork a specific session into a new session
ask --help                   # More options

Library use:

import { ask, Agent } from '@jancellor/ask';

const text = await ask('Refactor this function');

const agent = await Agent.create({});
await agent.ask('message');
console.log(agent.messages);

Architecture

User input
  → generateText() via Vercel AI SDK
    → Model returns text + tool calls
      → execute({ command }) — bash -c with timeout
      → stdout/stderr/exit code returned to model
    → Loop until no more tool calls
  → Append messages to session JSONL
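Since sessions are plain JSONL and the agent knows the format, they can be inspected with standard tools. A sketch, with an illustrative (not authoritative) message shape:

```shell
# A session file holds one JSON object per line; append with printf.
session=/tmp/demo-session.jsonl
printf '%s\n' '{"role":"user","content":"refactor"}' >> "$session"
printf '%s\n' '{"role":"assistant","content":"done"}' >> "$session"

# Pull out just the user messages with jq.
jq -r 'select(.role == "user") | .content' "$session"
```

This is also what makes the self-invoking subagent pattern work: a forked or background run can read its parent's transcript with nothing more than the shell tool.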