lint-cli-ai v1.0.0

lint-cli

AI-powered command-line assistant for local development (published on npm as lint-cli-ai). It connects to a local OpenWebUI / Ollama-compatible API and can read, write, and list files in your current project, acting like a coding partner that lives inside your terminal.

Features

  • Interactive REPL-style CLI (prompt: lint-cli (model) >)
  • Talks to a local model via HTTP
  • Can read, write, and list files in your project (with safe path checks)
  • Can search text, replace in files, and run commands (with confirmation)
  • Persists conversation state and context between runs
  • Provides a lightweight developer workflow similar to Codex CLI / Gemini CLI
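
The "safe path checks" above are only described at a high level. A minimal sketch of the idea (illustrative, not the actual implementation): resolve the requested path against the project root and refuse anything that escapes it.

```javascript
import path from "node:path";

// Hypothetical helper: resolve `requested` relative to `root` and
// throw if the result lands outside the project root.
function resolveSafe(root, requested) {
  const base = path.resolve(root);
  const resolved = path.resolve(base, requested);
  if (resolved !== base && !resolved.startsWith(base + path.sep)) {
    throw new Error("Path escapes project root: " + requested);
  }
  return resolved;
}

console.log(resolveSafe(".", "src/index.js")); // resolves inside the project
```

A check like this blocks traversal attempts such as ../../etc/passwd before any read or write happens.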

Requirements

  • Node.js 18+ (ES modules, node: imports)
  • A running HTTP API compatible with OpenWebUI / Ollama.
    • Default: http://localhost:11434/ollama/api/chat
    • Override with OLLAMA_API or OPENWEBUI_API

Optionally, you can provide an API key via OLLAMA_API_KEY. When present, it is sent as:

Authorization: Bearer <OLLAMA_API_KEY>
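
A minimal sketch of how that header might be attached (illustrative, assuming the behavior described above; not the actual client code):

```javascript
// Build request headers; add Authorization only when OLLAMA_API_KEY is set.
const headers = { "Content-Type": "application/json" };
if (process.env.OLLAMA_API_KEY) {
  headers.Authorization = "Bearer " + process.env.OLLAMA_API_KEY;
}
console.log(headers);
```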

Installation

Clone this repository and install dependencies:

npm install

You can run the CLI locally without installing it globally:

npm start
# or
node bin/lint-cli.js

If you want to install it globally from this repository (so you can run lint-cli from anywhere), from the project root:

npm install -g .

Or install it globally from npm (package name lint-cli-ai):

npm install -g lint-cli-ai

This exposes two commands:

  • lint-cli
  • lc

Both point to bin/lint-cli.js.
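
This presumably comes from npm's standard bin field in package.json, something like (illustrative):

```json
{
  "bin": {
    "lint-cli": "bin/lint-cli.js",
    "lc": "bin/lint-cli.js"
  }
}
```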

Configuration

Create a .env file in the project where you run the CLI (not necessarily in this repo) to override defaults:

OLLAMA_API=http://localhost:11434
OLLAMA_MODEL=qwen3:8b
OLLAMA_API_KEY=your-api-key-if-needed

If OLLAMA_API is not set, the CLI falls back to http://localhost:11434/ollama/api/chat. You can also set OPENWEBUI_API if you prefer that name.
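
The precedence between the two variable names is not spelled out here; a plausible sketch of the fallback chain, assuming OLLAMA_API wins over OPENWEBUI_API:

```javascript
// Resolve the chat endpoint: OLLAMA_API, then OPENWEBUI_API, then the default.
const api =
  process.env.OLLAMA_API ??
  process.env.OPENWEBUI_API ??
  "http://localhost:11434/ollama/api/chat";
console.log(api);
```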

Usage

From a project directory where you want the assistant to work:

lint-cli
# or
lc

You will see a prompt like:

lint-cli (qwen3:8b) >

Then you can type natural-language instructions, for example:

  • "create a basic index.html landing page"
  • "add a unit test for function X in file Y"
  • "refactor this module to use async/await"

The assistant can:

  • Read files in the current project
  • Write / overwrite files (after asking for confirmation in the CLI)
  • List files and directories
  • Search text across files
  • Replace text in files
  • Run commands (after asking for confirmation in the CLI)

Commands (slash or colon prefix):

  • /help to show available commands
  • /ls [path] to list files
  • /pwd to show the current directory
  • /model [name] to show or set model
  • /api [url] to show or set API base or full chat URL
  • /system [text|reset] to show or set system prompt override
  • /memory [on|off|clear|path] to manage memory
  • /set k=v ... to set multiple settings (model, api, memory)
  • /search <pattern> [path] to search text in files
  • /run <command> to run a shell command (with confirmation)
  • /clear to clear the screen
  • /exit to quit the CLI

State and memory

The CLI stores conversation history and context in a project-specific directory:

  • A hidden folder named .lint-cli/ is created in the directory where you run the command.
  • Inside it, a memory.json file keeps the message history so the assistant can preserve context between runs.
  • A config.json file stores local settings (model, API URL, memory toggle, system prompt override).

You can safely delete .lint-cli/ if you want to reset the assistant's memory for that project.
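
Resetting is just deleting that folder, e.g. with rm -rf .lint-cli, or programmatically:

```javascript
import fs from "node:fs";

// Remove the per-project state folder; force: true makes this a no-op
// when the folder does not exist.
fs.rmSync(".lint-cli", { recursive: true, force: true });
console.log("memory reset:", !fs.existsSync(".lint-cli"));
```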

Project structure

  • bin/lint-cli.js - main entry point for the CLI (REPL loop, tool execution, spinner, memory handling)
  • src/core/openwebui.js - HTTP client to the OpenWebUI / Ollama-compatible API
  • src/core/system.js - system prompt that defines the assistant's behavior inside the CLI
  • src/core/tools.js - implementations of read_file, write_file, list_files, and current_dir
  • src/core/tool.schema.js - tool schema definitions passed to the model
  • .lint-cli/ (runtime, per-project) - persisted conversation state (not committed)

License

ISC

Roadmap

See ROADMAP.md for pending items and possible future features.

Pending items

  • Batch mode via stdin to automate commands
  • Approval flags (e.g. --yes) for non-interactive runs
  • Streaming of model responses
  • Memory summarization and smart history trimming
  • Read-only mode to block file writes and command execution
  • Basic tests for internal commands and tools
  • Retries with backoff and better HTTP error messages
  • Per-tool limits (timeout, max output) and context protections
  • Diff previews before applying large changes
  • Model profiles (fast/balanced/quality) and temperature/top-p presets
  • Ignoring paths via configuration (a .lintcliignore-style file)