
agentic-code v0.6.5

Task-oriented context engineering framework for LLM coding agents - AGENTS.md standard compliant

Agentic Code

Your AI (LLM), guided by built-in workflows. Describe what you want, and it follows a professional development process.

MIT License · AGENTS.md compliant

Demo: Building a Slack bot with Agentic Code

AI builds a Slack bot with tests & docs — in 30s

What You Get

You: "Build a Slack bot with Gemini API"
AI:  ✓ Reads AGENTS.md
     ✓ Analyzes requirements
     ✓ Plans architecture
     ✓ Writes tests first
     ✓ Implements with best practices
     ✓ Verifies everything works

Works out of the box—no configuration or learning curve required.

Using Claude Code with TypeScript?
Check out AI Coding Project Boilerplate - a specialized alternative optimized for that specific stack.

Quick Start (30 seconds)

npx agentic-code my-project && cd my-project
# Ready to go

That's it. Works with any AI tool - Codex, Cursor, Aider, or anything AGENTS.md-compatible.

Why This Exists

Every AI coding tool has the same problems:

  • Forgets your project structure after 10 messages
  • Deletes tests when adding features
  • Ignores architectural decisions
  • Skips quality checks

We built the solution into the framework. AGENTS.md guides your AI through professional workflows automatically.

What Makes It Different

🎯 Zero Configuration

Pre-built workflows that work without setup.

🌐 Universal Compatibility

Works with any programming language and any AI tool that reads AGENTS.md.

✅ Test-First by Default

Generates test skeletons before writing implementation code.

📈 Smart Scaling

  • Simple task → Direct execution
  • Complex feature → Full workflow with approvals

How It Actually Works

  1. AGENTS.md tells your AI the process - Like a README but for AI agents
  2. Progressive rule loading - Only loads what's needed, when needed
  3. Quality gates - Automatic checkpoints ensure consistent output
  4. You stay in control - Approval points for major decisions

Everything lives in a small directory tree:

.agents/
├── tasks/                   # What to build
│   ├── task-analysis.md     # Entry point - AI starts here
│   └── ...                  # Design, test, implement, QA tasks
├── workflows/               # How to build it
└── skills/                  # Quality standards (Codex-compatible)
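
To make the entry point concrete, here is a minimal sketch of what AGENTS.md might contain. The headings and wording are illustrative assumptions, not the exact file this package ships:

# AGENTS.md (illustrative sketch)
## Process
1. Start every task from .agents/tasks/task-analysis.md
2. Classify the task's size before writing any code
3. Load only the workflows and skills the task definition names
4. Pause at approval points before major changes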

Real Examples

Simple Task

You: "Add API endpoint for user search"
# AI: Reads existing code → Plans changes → Tests → Implements → Done

Complex Feature

You: "Build user authentication system"
# AI: Requirements → Design doc → Your approval → Test skeletons →
#     Implementation → Quality checks → Done

Installation Options

For New Projects

npx agentic-code my-project

For Existing Projects

# Copy the framework files
cp path/to/agentic-code/AGENTS.md .
cp -r path/to/agentic-code/.agents .

Skills

.agents/skills/ contains reusable skill files in the Codex Skills format. Each skill has a SKILL.md with instructions that AI agents can discover and apply.
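
A rough sketch of one skill file follows; the skill name, headings, and wording are invented for illustration and are not the package's actual content:

# .agents/skills/testing-standards/SKILL.md (hypothetical)
## When to apply
Any task that adds or changes tests.
## Instructions
Use arrange-act-assert structure, cover one behavior per test, and mock only at module boundaries.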

For the Codex CLI, install the skills with:

# User scope (all projects)
npx agentic-code skills --codex
# Installs to ~/.codex/skills/agentic-code/

# Project scope (current project only)
npx agentic-code skills --codex --project
# Installs to ./.codex/skills/agentic-code/

# Custom path
npx agentic-code skills --path ./custom/skills
# Installs to ./custom/skills/agentic-code/

Common Questions

Q: Can I use this with other AI coding tools besides Codex?
Yes! This framework works with any AGENTS.md-compatible tool like Cursor, Aider, and other LLM-assisted development environments.

Q: What programming languages are supported?
The framework is language-agnostic and works with any programming language through general development principles. TypeScript-specific rules are available in skills/*/references/typescript.md.

Q: Do I need to learn a new syntax?
No. Describe what you want in plain language; the framework handles the rest.

Q: What if my AI doesn't support AGENTS.md?
Check whether your tool is AGENTS.md-compatible; if it is, point it to the AGENTS.md file at the start of a session.
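
Even a plain-language prompt is usually enough to get a compatible tool started; for example (an illustrative prompt, not a required syntax):

You: "Read AGENTS.md and follow the workflow it describes for this task"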

Q: Can I customize the workflows?
Yes, everything in .agents/ is customizable. The defaults are production-ready, but you can adapt them to your team's process.

Q: What about my existing codebase?
It works with existing projects. Your AI analyzes the code and follows your established patterns.

The Technical Stuff

The framework has three pillars:

  1. Tasks - Define WHAT to build
  2. Workflows - Define HOW to build it
  3. Skills - Define quality STANDARDS

Progressive Skill Loading

Skills load based on task analysis:

  • Small (1-2 files) → Direct execution with minimal skills
  • Medium/Large (3+ files) → Structured workflow with design docs
  • Each task definition specifies its required skills (see the sketch below)
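
As a sketch of how a task definition might declare its skills (the file name, field names, and values are assumptions for illustration, not the package's actual schema):

# .agents/tasks/implement.md (hypothetical excerpt)
required-skills: testing-standards, code-style
escalation: full workflow with design doc at 3+ files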

Quality Gates

Automatic checkpoints ensure:

  • Tests pass before proceeding
  • Code meets standards
  • Documentation stays updated
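
In practice a gate can be a short list of commands that must all succeed before the next step. For a Node.js project it might look like this (illustrative only; the actual checks are whatever your .agents/ task definitions specify):

npm test        # tests must pass before proceeding
npm run lint    # code must meet standards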

Special Features

  • Metacognition - AI self-assessment and error recovery
  • Plan Injection - Ensures all required steps are in the work plan
  • Test Generation - Test skeletons from acceptance criteria
  • 1-Commit Principle - Each task = one atomic commit
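
Under the 1-Commit Principle, history stays readable because each completed task maps to exactly one commit. A hypothetical log (commit subjects invented for illustration):

git log --oneline
a1b2c3d Add user search endpoint
9f8e7d6 Add login rate limiting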

Reviewing Generated Outputs

Important: Always review AI-generated outputs in a separate session.

LLMs cannot reliably review their own outputs within the same context. When the AI generates code or documents, it carries the same assumptions and blind spots into any "self-review." This leads to missed issues that a fresh perspective would catch.

Why Separate Sessions Matter

| Same Session | New Session |
|--------------|-------------|
| Shares context and assumptions | Fresh perspective, no prior bias |
| May overlook own mistakes | Catches issues objectively |
| "Confirmation bias" in review | Applies standards independently |

How to Use Review Tasks

After completing implementation or documentation, start a new session and request a review:

# For code review
You: "Review the implementation in src/auth/ against docs/design/auth-design.md"
# AI loads code-review task → Validates against Design Doc → Reports findings

# For document review
You: "Review docs/design/payment-design.md as a Design Doc"
# AI loads technical-document-review task → Checks structure and content → Reports gaps

# For test review
You: "Review the integration tests in tests/integration/auth.test.ts"
# AI loads integration-test-review task → Validates test quality → Reports issues

Available Review Tasks

| Task | Target | What It Checks |
|------|--------|----------------|
| code-review | Implementation files | Design Doc compliance, code quality, architecture |
| technical-document-review | Design Docs, ADRs, PRDs | Structure, content quality, failure scenarios |
| integration-test-review | Integration/E2E tests | Skeleton compliance, AAA structure, mock boundaries |

Pro tip: Make reviews part of your workflow. After any significant generation, switch sessions and review before merging.

For Cursor Users: Isolated Context Reviews via MCP

Cursor users can run reviews in isolated contexts without switching sessions using sub-agents-mcp. When review runs as a sub-agent, it executes in a completely separate context—achieving the same "fresh perspective" benefit as switching sessions, but without leaving your workflow.

Quick Setup:

Add to your MCP config (~/.cursor/mcp.json or .cursor/mcp.json):

{
  "mcpServers": {
    "sub-agents": {
      "command": "npx",
      "args": ["-y", "sub-agents-mcp"],
      "env": {
        "AGENTS_DIR": "/absolute/path/to/your/project/.agents/tasks",
        "AGENT_TYPE": "cursor"
      }
    }
  }
}

After restarting Cursor, task definitions become available as sub-agents:

You: "Use the code-review agent to review src/auth/ against docs/design/auth-design.md"

Start Building

npx agentic-code my-awesome-project
cd my-awesome-project
# Tell your AI what to build

Consistent, professional AI-assisted development.


Contributing

Found a bug? Want to add language-specific rules? PRs welcome!

License

MIT - Use it however you want.


Built on the AGENTS.md standard — an open community specification for AI coding agents.

Ready to code properly with AI? npx agentic-code my-project