
@iamharshil/cortex

v6.0.2


Cortex

The ultimate local AI coding agent - context-aware, memory-powered, MCP-enabled



What is Cortex?

Cortex is an autonomous AI coding agent that runs entirely locally on your machine. It combines the best features of Claude Code, OpenCode, PI, and GitHub Copilot while prioritizing token optimization and privacy.

┌──────────────────────────────────────────────────┐
│  $ cortex run "build a todo app"                │
│                                                  │
│  🔍 Analyzing project...                        │
│  📝 Creating plan...                            │
│  ✨ cortex/src/index.ts                          │
│  ✨ cortex/src/agent/engine.ts                   │
│                                                  │
│  Your code stays local. Always.                 │
└──────────────────────────────────────────────────┘

Key Features

  • 🔒 Privacy-first — All inference runs locally, no data leaves your machine
  • 🧠 Smart Context — Token-optimized context management with auto-compaction
  • 💾 Persistent Memory — Project (CORTEX.md) and user memory across sessions
  • 🔌 MCP Support — Connect to Model Context Protocol servers
  • 🎯 Plan Mode — Supervised autonomy with plan previews
  • 🔧 7 Primitives — Minimal tools: Read, Write, Edit, Bash, Glob, Grep, TodoWrite

Getting Started

Prerequisites

| Requirement | Description |
| ------------------- | ------------------ |
| Node.js ≥ 20 | JavaScript runtime |
| LM Studio or Ollama | Local model server |

Install

npm install -g @iamharshil/cortex

Initialize

cortex init

This creates:

  • .cortex/config.json — Provider configuration
  • .cortex/mcp.json — MCP server configuration
  • CORTEX.md — Project memory file
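As a sketch, the same scaffold can be reproduced by hand (the file contents here are illustrative placeholders drawn from the Configuration section, not necessarily Cortex's exact defaults):

```shell
# Recreate the layout that cortex init produces, in a throwaway directory.
set -e
dir=$(mktemp -d) && cd "$dir"

mkdir -p .cortex
cat > .cortex/config.json <<'EOF'
{
  "provider": "ollama",
  "model": "llama3.2",
  "url": "http://localhost:11434"
}
EOF
printf '{\n  "servers": {}\n}\n' > .cortex/mcp.json
printf '# Cortex Project Memory\n' > CORTEX.md

ls -A .cortex CORTEX.md
```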

Usage

Run a Task

# Simple task
cortex run "create a hello world function"

# With specific provider
cortex run "refactor auth" --provider ollama --model llama3.2

# Plan mode - shows plan before executing
cortex run "migrate to typescript" --plan-mode

Interactive Chat

cortex chat

Check Status

cortex status
cortex models --provider ollama

Set Up a Provider

cortex setup --provider ollama --model llama3.2

Configuration

Cortex stores configuration in:

| Platform | Path |
| -------- | ------------------- |
| macOS | ~/.cortex/ |
| Linux | ~/.cortex/ |
| Windows | %APPDATA%\cortex\ |

config.json

{
  "provider": "ollama",
  "model": "llama3.2",
  "url": "http://localhost:11434"
}
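Since the file is plain JSON, switching providers is just a matter of rewriting it. A minimal sketch, bypassing `cortex setup` (the exact `"lmstudio"` provider string is an assumption; the port comes from the provider table below):

```shell
# Point Cortex at LM Studio instead of Ollama by rewriting config.json.
mkdir -p .cortex
cat > .cortex/config.json <<'EOF'
{
  "provider": "lmstudio",
  "model": "llama3.2",
  "url": "http://localhost:1234/v1"
}
EOF
grep '"provider"' .cortex/config.json
```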

MCP Configuration

{
  "servers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "./"]
    }
  }
}

Memory System

Project Memory (CORTEX.md)

Create a CORTEX.md file in your project root:

# Cortex Project Memory

## Project Overview

- React-based todo app
- TypeScript, Vite

## Coding Conventions

- Functional components
- CSS modules for styling

## Testing

- Vitest for unit tests

User Memory (~/.cortex/memory.md)

Global preferences and context loaded for all projects.


Token Optimization

Cortex is designed for minimal token usage:

| Component | Target |
| ------------------ | ----------------- |
| System Prompt | ~3K tokens |
| Tool Definitions | ~5K tokens (lazy) |
| Project Memory | ~5K tokens |
| Available for Work | 180K+ tokens |

Auto-Compaction

When context reaches 95%, Cortex automatically:

  1. Summarizes conversation history
  2. Preserves key information (file paths, conclusions)
  3. Clears old tool outputs
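The trigger itself is simple arithmetic; a hypothetical sketch of the 95% check (the window size and variable names are illustrative, not Cortex internals):

```shell
# Illustrative threshold check: compact once usage hits 95% of the window.
CONTEXT_WINDOW=200000   # assumed window size, not a Cortex constant
USED=191000             # example token count
PCT=$(( USED * 100 / CONTEXT_WINDOW ))
if [ "$PCT" -ge 95 ]; then
  echo "auto-compact: ${PCT}% used"
fi
```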

Manual Control

# Check token usage
/context

# Manual compaction
/compact preserve file paths and current task

Providers

Local (Default)

| Provider | Default Port | URL |
| --------- | ------------ | -------------------------- |
| Ollama | 11434 | http://localhost:11434 |
| LM Studio | 1234 | http://localhost:1234/v1 |
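A quick way to confirm a local server is reachable before pointing Cortex at it (uses the default ports above; `curl` is the only dependency, and the endpoint paths are the servers' standard model-listing routes, not Cortex commands):

```shell
# Ping each local provider; prints up/down per URL.
check() {
  if curl -sf --max-time 2 "$1" >/dev/null; then
    echo "up:   $1"
  else
    echo "down: $1"
  fi
}
check http://localhost:11434/api/tags   # Ollama model list
check http://localhost:1234/v1/models   # LM Studio (OpenAI-compatible)
```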

Cloud (Future)

  • OpenRouter (Coming soon)
  • Anthropic (Coming soon)
  • OpenAI (Coming soon)
  • Gemini (Coming soon)

Development

# Clone and setup
git clone https://github.com/iamharshil/cortex.git
cd cortex
npm install

# Development
npm run dev

# Build
npm run build

# Test
npm test
npm run lint
npm run typecheck

Architecture

Cortex follows the "Less Scaffolding, More Model" philosophy (inspired by PI):

  1. Minimal Primitives — Only 7 core tools, trust the model to orchestrate
  2. Token Efficiency — Lazy loading, smart truncation, auto-compaction
  3. Local First — Privacy, no cloud dependencies
  4. MCP Extensible — Connect to any external service

License

MIT © Harshil