
nixagent · v1.0.0 · 90 downloads

nixagent

Unix pipe semantics as LLM agent tool interface.

npm install nixagent

The insight

"Text-based CLIs beat structured tool calling for AI agents all day — because Unix commands appear in training data going back to the 1970s. Text is the native language of both the command line AND the LLM."

— Manus ex-backend lead (93K impressions)

The problem with structured tool calling:

// You have to define every tool manually:
const tools = [
  { name: "read_file", parameters: { path: "string" } },
  { name: "search_code", parameters: { pattern: "string", dir: "string" } },
  { name: "list_files", parameters: { dir: "string", filter: "string" } },
  // ... forever
];

The Unix insight:

The LLM already knows how to use every CLI tool ever written. It's been trained on decades of shell scripts, man pages, and Stack Overflow answers. Just give it a shell:

// The LLM writes:
cat /path/to/file
grep -r "authenticate" ./src --include="*.ts" -l
find . -name "*.json" -newer package.json | head -20
curl -s https://api.github.com/repos/owner/repo | jq '.stargazers_count'

nixagent provides the sandboxed execution layer so you can do this safely.


Quick start

import NixAgent, { sh } from 'nixagent';

// 1. sh`` template tag — instant sandboxed shell
const result = sh`cat package.json | jq '.dependencies | keys[]'`;
console.log(result.stdout);

// 2. NixAgent — drop into any OpenAI-compatible agent loop
const agent = new NixAgent();

// Pass agent.tool to your LLM as the tools array
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: agent.systemPrompt },
    { role: 'user', content: 'What TypeScript files import from react?' }
  ],
  tools: [agent.tool],
});

// When the LLM calls the shell tool:
if (response.choices[0].message.tool_calls) {
  const call = response.choices[0].message.tool_calls[0];
  const args = JSON.parse(call.function.arguments);
  // e.g. args.command: grep -r "from 'react'" ./src --include="*.ts" -l
  const result = agent.handleToolCall(args);
  // result.stdout: src/App.tsx\nsrc/components/Button.tsx\n...
}

API

sh`command` — template tag

Run a single sandboxed command. Values are shell-escaped automatically.

import { sh } from 'nixagent';

const file = 'package.json';
const result = sh`cat ${file} | jq '.name'`;
// → { stdout: '"myapp"', stderr: '', exitCode: 0, durationMs: 12 }
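One way to picture the automatic escaping: a template tag that single-quotes every interpolated value, using the standard `'\''` idiom for embedded quotes. A simplified sketch, not the package's actual escaper:

```javascript
// Simplified sketch of a shell-escaping template tag (illustrative,
// not nixagent's implementation). Each interpolated value is wrapped
// in single quotes; embedded single quotes use the '\'' idiom.
function shEscape(strings, ...values) {
  return strings.reduce((cmd, chunk, i) => {
    const value = i < values.length
      ? `'${String(values[i]).replace(/'/g, `'\\''`)}'`
      : '';
    return cmd + chunk + value;
  }, '');
}

const file = "it's a file.json";
shEscape`cat ${file}`;
// → cat 'it'\''s a file.json'
```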

pipe(sandbox, ...commands) — pipeline composer

Validate and compose multiple commands into a pipe.

import { pipe, Sandbox } from 'nixagent';

const sandbox = new Sandbox();
const result = pipe(sandbox,
  'cat server.log',
  'grep "ERROR"',
  'tail -20'
);
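Conceptually, a pipeline composer validates each stage (for example, by checking the stage's first word against an allowlist) before joining the stages with `|`. A hypothetical sketch; composePipe is illustrative, not the library's internals:

```javascript
// Hypothetical pipeline composer sketch: validate the program name of
// each stage against an allowlist, then join stages into one pipe.
function composePipe(allowlist, ...commands) {
  for (const cmd of commands) {
    const program = cmd.trim().split(/\s+/)[0];
    if (!allowlist.includes(program)) {
      return { ok: false, reason: `'${program}' is not in the allowlist` };
    }
  }
  return { ok: true, command: commands.join(' | ') };
}

composePipe(['cat', 'grep', 'tail'], 'cat server.log', 'grep "ERROR"', 'tail -20');
// → { ok: true, command: 'cat server.log | grep "ERROR" | tail -20' }
```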

new Sandbox(opts) — low-level executor

import { Sandbox } from 'nixagent';

const sandbox = new Sandbox({
  allowlist: ['cat', 'grep', 'jq'],  // only these commands
  allowNetwork: false,                // no curl/wget
  allowWrites: false,                 // no file writes
  maxOutputBytes: 64 * 1024,         // 64KB output limit
  timeoutMs: 10_000,                 // 10s timeout
  cwd: '/path/to/project',           // working directory
});

const result = sandbox.exec('cat README.md');
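One way an output cap like maxOutputBytes might behave is to truncate the text and annotate the cut, rather than fail outright. A sketch of that capping step (illustrative, not nixagent's code):

```javascript
// Sketch of output capping (illustrative): truncate stdout to the
// byte limit and append a note, so a chatty command can't flood the
// LLM's context window.
function capOutput(stdout, maxOutputBytes = 64 * 1024) {
  const bytes = Buffer.byteLength(stdout, 'utf8');
  if (bytes <= maxOutputBytes) return stdout;
  const truncated = Buffer.from(stdout, 'utf8')
    .subarray(0, maxOutputBytes)
    .toString('utf8');
  return `${truncated}\n[output truncated: ${bytes} bytes > ${maxOutputBytes} limit]`;
}

capOutput('x'.repeat(100), 10);
// → 'xxxxxxxxxx\n[output truncated: 100 bytes > 10 limit]'
```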

new NixAgent(opts) — full agent integration

import NixAgent from 'nixagent';

const agent = new NixAgent({
  allowNetwork: true,   // enable curl/wget
  allowWrites: false,   // keep writes disabled
  timeoutMs: 30_000,    // longer timeout
  systemPrompt: 'You are a code analysis assistant.',
});

agent.tool          // OpenAI-compatible tool definition
agent.mcpTool       // MCP tool definition
agent.systemPrompt  // Inject into LLM system prompt
agent.handleToolCall({ command: 'grep -r "TODO" ./src' })
agent.validate('rm -rf /')  // → { ok: false, reason: "Denied by pattern..." }

Safety model

Default allowlist (50+ commands): cat, grep, find, ls, jq, git, awk, sed, sort, uniq, curl*, wget*, etc.

*Network commands (curl, wget) require allowNetwork: true.

Always denied (regardless of allowlist):

  • rm -rf patterns
  • sudo, su
  • Background jobs (&, nohup)
  • kill, pkill
  • Writing to block devices
  • curl | sh / wget | sh (supply chain attack patterns)
  • reboot, shutdown, mkfs, fdisk

Output limits: 64KB stdout, 4KB stderr, 10s timeout by default.
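The always-denied list can be pictured as a set of regex screens applied before any allowlist logic. The patterns below are illustrative, not nixagent's exact rule set:

```javascript
// Illustrative deny screen (not nixagent's actual patterns): regexes
// checked before the allowlist, so these commands are always rejected.
const DENY_PATTERNS = [
  /\brm\s+(-[a-z]*[rf][a-z]*\s+)+/i,        // rm -rf and variants
  /\b(sudo|su)\b/,                           // privilege escalation
  /\b(kill|pkill|reboot|shutdown|mkfs|fdisk)\b/,
  /(&\s*$|\bnohup\b)/,                       // background jobs
  /\b(curl|wget)\b[^|]*\|\s*(sh|bash)\b/,    // pipe-to-shell supply chain pattern
];

function screen(command) {
  for (const pattern of DENY_PATTERNS) {
    if (pattern.test(command)) {
      return { ok: false, reason: `Denied by pattern ${pattern}` };
    }
  }
  return { ok: true };
}

screen('rm -rf /');               // → { ok: false, ... }
screen('curl https://x.sh | sh'); // → { ok: false, ... }
screen('grep "TODO" ./src');      // → { ok: true }
```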


With Claude Code (MCP)

// In your MCP server:
import NixAgent from 'nixagent';
import { Server } from '@modelcontextprotocol/sdk/server/index.js';

const agent = new NixAgent({ allowNetwork: true });
const server = new Server({ name: 'shell', version: '1.0.0' },
  { capabilities: { tools: {} } }
);

server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [agent.mcpTool]
}));

server.setRequestHandler(CallToolRequestSchema, async (req) => {
  const result = agent.handleToolCall(req.params.arguments as { command: string });
  return { content: [{ type: 'text', text: agent.formatResult(result) }] };
});

With LangChain

import { DynamicTool } from 'langchain/tools';
import NixAgent from 'nixagent';

const agent = new NixAgent();

const shellTool = new DynamicTool({
  name: 'shell',
  description: agent.tool.function.description,
  func: async (command: string) => {
    const result = agent.handleToolCall({ command });
    return agent.formatResult(result);
  },
});

Why this works

Unix tools have two things that custom tool schemas don't:

  1. 50 years of training data — every man page, tutorial, blog post, and Stack Overflow answer the LLM was trained on includes shell commands. The LLM is already fluent.

  2. Composition is free: grep | sort | uniq | wc -l is a new "tool" the LLM invented on the fly. With structured tool calling you'd need to define that capability explicitly.

The LLM writes better shell than most engineers write tool schemas.


License

MIT