LazyShell


A smart CLI tool that generates and executes shell commands using AI

LazyShell is a command-line interface that helps you quickly generate and execute shell commands using AI. It supports multiple AI providers and provides an interactive configuration system for easy setup.

Features ✨

  • 🔍 Generates shell commands from natural language descriptions
  • ⚡ Supports multiple AI providers (Groq, Google Gemini, OpenRouter, Anthropic, OpenAI, Ollama, Mistral, and LMStudio)
  • 🔧 Interactive configuration system - no manual environment setup needed
  • 🔒 Safe execution with confirmation prompt
  • 🚀 Fast and lightweight
  • 🔄 Automatic fallback to environment variables
  • 💾 Persistent configuration storage
  • 📋 Automatic clipboard integration - generated commands are copied to clipboard
  • 🧪 Built-in evaluation system for testing AI performance
  • 🏆 Model benchmarking capabilities
  • 🤖 LLM Judge evaluation system
  • ⚙️ CI/CD integration with automated quality checks
  • 🖥️ System-aware command generation - detects OS, distro, and package manager
  • 🔄 Command refinement - iteratively improve commands with AI feedback

Installation 📦

Using npm

npm install -g lazyshell

Using yarn

yarn global add lazyshell

Using pnpm

pnpm add -g lazyshell

Using bun (recommended)

bun add -g lazyshell

Using Install Script (experimental)

curl -fsSL https://raw.githubusercontent.com/bernoussama/lazyshell/main/install | bash

Quick Start 🚀

  1. First Run: LazyShell will automatically prompt you to select an AI provider and enter your API key:

    lazyshell "find all files larger than 100MB"
    # or use the short alias
    lsh "find all files larger than 100MB"
  2. Interactive Setup: Choose from supported providers:

    • Groq - Fast LLaMA models with great performance
    • Google Gemini - Google's latest AI models
    • OpenRouter - Access to multiple models including free options
    • Anthropic Claude - Powerful reasoning capabilities
    • OpenAI - GPT models including GPT-4
    • Ollama - Local models (no API key required)
    • Mistral - Mistral AI models for code generation
    • LMStudio - Local models via LMStudio (experimental, no API key required)
  3. Automatic Configuration: Your preferences are saved to ~/.lazyshell/config.json and used for future runs.

  4. Clipboard Integration: Generated commands are automatically copied to your clipboard for easy pasting.

Configuration 🔧

Interactive Setup (Recommended)

On first run, LazyShell will guide you through:

  1. Selecting your preferred AI provider
  2. Entering your API key (if required)
  3. Automatically saving the configuration

Configuration Management

# Open configuration UI
lazyshell config

Manual Environment Variables (Optional)

You can still use environment variables as before:

export GROQ_API_KEY='your-api-key-here'
# OR
export GOOGLE_GENERATIVE_AI_API_KEY='your-api-key-here'
# OR
export OPENROUTER_API_KEY='your-api-key-here'
# OR
export ANTHROPIC_API_KEY='your-api-key-here'  
# OR
export OPENAI_API_KEY='your-api-key-here'

Note: Ollama and LMStudio don't require API keys as they run models locally.

Configuration File Location

  • Linux/macOS: ~/.lazyshell/config.json
  • Windows: %USERPROFILE%\.lazyshell\config.json
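The exact schema of config.json isn't documented in this README; as a rough sketch, the stored preferences can be thought of as a small record like the following (the field names are hypothetical, not the actual schema):

// Hypothetical shape of ~/.lazyshell/config.json -- illustrative only.
interface LazyShellConfig {
  provider: 'groq' | 'google' | 'openrouter' | 'anthropic' | 'openai' | 'ollama' | 'mistral' | 'lmstudio';
  apiKey?: string; // omitted for local providers such as Ollama and LMStudio
  model?: string;  // optional model override
}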

Supported AI Providers 🤖

| Provider | Models | API Key Required | Notes |
|----------|--------|------------------|-------|
| Groq | LLaMA 3.3 70B | Yes | Fast inference, excellent performance |
| Google Gemini | Gemini 2.0 Flash Lite | Yes | Latest Google AI models |
| OpenRouter | Multiple models | Yes | Includes free tier options |
| Anthropic | Claude 3.5 Haiku | Yes | Advanced reasoning capabilities |
| OpenAI | GPT-4o Mini | Yes | Industry standard models |
| Ollama | Local models | No | Run models locally |
| Mistral | Devstral Small | No | Code-optimized models |
| LMStudio | Local models | No | Experimental - local models via LMStudio |

Usage Examples 🚀

Basic Usage

lazyshell "your natural language command description"
# or use the short alias
lsh "your natural language command description"

Silent Mode

lazyshell -s "find all JavaScript files"  # No explanation, just the command
lsh --silent "show disk usage"            # Same with long flag

Examples

# Find files
lazyshell "find all JavaScript files modified in the last 7 days"

# System monitoring  
lazyshell "show disk usage sorted by size"

# Process management
lazyshell "find all running node processes"

# Docker operations
lazyshell "list all docker containers with their memory usage"

# File operations
lazyshell "compress all .log files in this directory"

# Package management (system-aware)
lazyshell "install docker"  # Uses apt/yum/pacman/etc based on your distro

Interactive Features

  • Execute: Run the generated command immediately
  • Refine: Modify your prompt to get a better command
  • Cancel: Exit without running anything
  • Clipboard: Commands are automatically copied for manual execution
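The clipboard integration itself isn't shown in this README; in Node CLIs this is commonly handled by a package such as clipboardy (whether LazyShell actually uses it is an assumption), roughly like:

// Illustrative only -- LazyShell may use a different clipboard mechanism.
import clipboard from 'clipboardy';

const generatedCommand = 'find . -name "*.js" -mtime -7';
await clipboard.write(generatedCommand); // copy so the user can paste it manually
console.log('Command copied to clipboard');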

System Intelligence 🧠

LazyShell automatically detects your system environment:

  • Operating System: Linux, macOS, Windows
  • Linux Distribution: Ubuntu, Fedora, Arch, etc.
  • Package Manager: apt, yum, dnf, pacman, zypper, etc.
  • Shell: bash, zsh, fish, etc.
  • Current Directory: Provides context for relative paths

This enables LazyShell to generate system-appropriate commands and suggest the right package manager for installations.
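The actual detection lives in src/helpers/package-manager.ts; the sketch below only illustrates the general idea of mapping a detected distro to a package manager (the helper and mapping are simplified assumptions, not the real implementation):

// Simplified illustration -- not LazyShell's real detection logic.
import { platform } from 'node:os';
import { readFileSync } from 'node:fs';

function detectPackageManager(): string {
  if (platform() === 'darwin') return 'brew';
  if (platform() !== 'linux') return 'unknown';
  // /etc/os-release exposes the distro ID on most Linux systems
  const osRelease = readFileSync('/etc/os-release', 'utf8');
  const id = /^ID=(.*)$/m.exec(osRelease)?.[1]?.replace(/"/g, '') ?? '';
  const managers: Record<string, string> = {
    ubuntu: 'apt', debian: 'apt', fedora: 'dnf', arch: 'pacman', opensuse: 'zypper',
  };
  return managers[id] ?? 'unknown';
}

// The detected OS, distro, and package manager can then be folded into the prompt,
// e.g. "Generate a command for Arch Linux using pacman."
console.log(detectPackageManager());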

Evaluation System 🧪

LazyShell includes a flexible evaluation system for testing and benchmarking AI performance:

import { runEval, Levenshtein, LLMJudge, createLLMJudge } from './lib/eval';

await runEval("My Eval", {
  // Test data function
  data: async () => {
    return [{ input: "Hello", expected: "Hello World!" }];
  },
  // Task to perform  
  task: async (input) => {
    return input + " World!";
  },
  // Scoring methods
  scorers: [Levenshtein, LLMJudge],
});

Built-in Scorers

  • ExactMatch: Perfect string matching
  • Levenshtein: Edit distance similarity
  • Contains: Substring matching
  • LLMJudge: AI-powered quality evaluation
  • createLLMJudge: Custom AI judges with specific criteria

LLM Judge Features

  • AI-Powered Evaluation: Uses LLMs to evaluate command quality without expected outputs
  • Multiple Criteria: Quality, correctness, security, efficiency assessments
  • Rate Limiting: Built-in retry logic and exponential backoff
  • Configurable Models: Use different AI models for judging
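The exact createLLMJudge signature isn't shown in this README (see docs/EVALUATION.md); assuming it accepts a criteria description and returns a scorer usable with runEval, a custom judge might look roughly like this:

// Assumed usage -- the real createLLMJudge API may differ; see docs/EVALUATION.md.
import { runEval, createLLMJudge } from './lib/eval';

// Hypothetical judge focused on a single criterion (command safety)
const securityJudge = createLLMJudge({
  criteria: 'Is the generated shell command safe to run (no destructive side effects)?',
});

await runEval('Security-focused eval', {
  data: async () => [{ input: 'delete temp files older than 30 days', expected: '' }],
  task: async () => 'find /tmp -type f -mtime +30 -delete',
  scorers: [securityJudge],
});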

Features

  • Generic TypeScript interfaces for any evaluation task
  • Multiple scoring methods per evaluation
  • Async support for LLM-based tasks
  • Detailed scoring reports with averages
  • Error handling for failed test cases

See docs/EVALUATION.md for complete documentation.

Model Benchmarking 🏆

LazyShell includes comprehensive benchmarking capabilities to compare AI model performance:

Running Benchmarks

# Build and run benchmarks
pnpm build
node dist/bench_models.mjs

Benchmark Features

  • Multi-Model Testing: Compare Groq, Gemini, Ollama, Mistral, and OpenRouter models
  • Performance Metrics: Response time, success rate, and output quality
  • Standardized Prompts: Consistent test cases across all models
  • JSON Reports: Detailed results saved to benchmark-results/ directory
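The real loop lives in src/bench_models.ts; a minimal sketch of measuring response time and writing a JSON report could look like the following (the model list and generate callback are placeholders, not the actual implementation):

// Simplified benchmark sketch -- not the actual bench_models.ts implementation.
import { mkdirSync, writeFileSync } from 'node:fs';

interface BenchResult { model: string; ms: number; ok: boolean; output: string; }

async function benchModel(
  model: string,
  prompt: string,
  generate: (model: string, prompt: string) => Promise<string>, // placeholder for the AI call
): Promise<BenchResult> {
  const start = performance.now();
  try {
    const output = await generate(model, prompt);
    return { model, ms: performance.now() - start, ok: true, output };
  } catch {
    return { model, ms: performance.now() - start, ok: false, output: '' };
  }
}

const results = await Promise.all(
  ['llama-3.3-70b-versatile', 'gemini-2.0-flash-lite'].map((m) =>
    benchModel(m, 'list all docker containers', async () => 'docker ps'),
  ),
);
mkdirSync('benchmark-results', { recursive: true });
writeFileSync(`benchmark-results/${Date.now()}.json`, JSON.stringify(results, null, 2));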

Available Models

  • llama-3.3-70b-versatile (Groq)
  • gemini-2.0-flash-lite (Google)
  • devstral-small-2505 (Mistral)
  • ollama3.2 (Ollama)
  • or-devstral (OpenRouter)

CI Evaluations 🚦

LazyShell includes automated quality assessments that run in CI to ensure consistent performance:

Overview

  • Automated Testing: Runs on every PR and push to main/develop
  • Threshold-Based: Configurable quality thresholds that must be met
  • LLM Judges: Uses AI to evaluate command quality, correctness, security, and efficiency
  • GitHub Actions: Integrated with CI/CD pipeline

Quick Setup

  1. Add GROQ_API_KEY to your GitHub repository secrets
  2. Evaluations run automatically with a 70% threshold by default
  3. CI fails if quality scores drop below the threshold
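Conceptually, the gate averages the judge scores and fails the job when the average falls below the threshold; a minimal sketch of that check (the EVAL_THRESHOLD variable is an assumption, not a documented setting):

// Conceptual sketch -- the actual ci-eval logic may differ.
const THRESHOLD = Number(process.env.EVAL_THRESHOLD ?? 0.7); // hypothetical override

function gate(scores: number[]): void {
  const average = scores.reduce((sum, s) => sum + s, 0) / scores.length;
  console.log(`Average eval score: ${(average * 100).toFixed(1)}%`);
  if (average < THRESHOLD) {
    console.error('Quality below threshold, failing CI');
    process.exit(1); // non-zero exit fails the GitHub Actions job
  }
}

gate([0.9, 0.75, 0.8]); // example scores in the 0..1 range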

Local Testing

# Run CI evaluations locally
pnpm eval:ci

Custom Evaluation Scripts

# Run basic evaluations
pnpm build && node dist/lib/basic.eval.mjs

# Run LLM judge evaluation
pnpm build && node dist/lib/llm-judge.eval.mjs

# Test AI library
pnpm build && node dist/test-ai-lib.mjs

# Run example evaluations
pnpm build && node dist/lib/example.eval.mjs

See docs/CI_EVALUATIONS.md for complete setup and configuration guide.

Development 🛠️

Prerequisites

  • Node.js 18+
  • pnpm (recommended)

Setup

  1. Clone the repository:

    git clone https://github.com/bernoussama/lazyshell.git
    cd lazyshell
  2. Install dependencies:

    pnpm install
  3. Build the project:

    pnpm build
  4. Link the package for local development:

    pnpm link --global

Available Scripts

pnpm x                    # Quick run with jiti (development)
pnpm build               # Compile TypeScript with pkgroll
pnpm typecheck           # Type checking only
pnpm lint                # Check code formatting and linting
pnpm lint:fix            # Fix formatting and linting issues
pnpm eval:ci             # Run CI evaluations locally
pnpm release:patch       # Build, version bump, publish, and push
pnpm prerelease          # Build, prerelease version, publish, and push

Project Structure

src/
├── index.ts              # Main CLI entry point
├── utils.ts              # Utility functions (command execution, history)
├── bench_models.ts       # Model benchmarking script
├── test-ai-lib.ts        # AI library testing script
├── commands/
│   └── config.ts         # Configuration UI command
├── helpers/
│   ├── index.ts          # Helper exports
│   └── package-manager.ts # System package manager detection
└── lib/
    ├── ai.ts             # AI provider integrations and command generation
    ├── config.ts         # Configuration management
    ├── eval.ts           # Evaluation framework
    ├── basic.eval.ts     # Basic evaluation examples
    ├── ci-eval.ts        # CI evaluation script
    ├── example.eval.ts   # Example evaluation scenarios
    └── llm-judge.eval.ts # LLM judge evaluation examples

Development Features

  • TypeScript: Full type safety and modern JavaScript features
  • pkgroll: Modern bundling with tree-shaking
  • jiti: Fast development with TypeScript execution
  • Watch Mode: Auto-compilation during development
  • Modular Architecture: Clean separation of concerns
  • ESM: Modern ES modules throughout

Troubleshooting 🔧

Configuration Issues

  • Invalid configuration: Delete ~/.lazyshell/config.json to reset or use lazyshell config
  • API key errors: Run lazyshell config to re-enter your API key
  • Provider not working: Try switching to a different provider in the configuration

Environment Variables

LazyShell will automatically fall back to environment variables if the config file is invalid or incomplete.
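In outline, the resolution order is config file first, then environment variables; a rough sketch of that fallback (field and helper names are illustrative, not the real src/lib/config.ts code):

// Illustrative fallback order only -- not the actual implementation.
import { readFileSync } from 'node:fs';
import { homedir } from 'node:os';
import { join } from 'node:path';

function resolveApiKey(): string | undefined {
  try {
    const raw = readFileSync(join(homedir(), '.lazyshell', 'config.json'), 'utf8');
    const config = JSON.parse(raw);
    if (config.apiKey) return config.apiKey; // hypothetical field name
  } catch {
    // missing or invalid config file: fall through to environment variables
  }
  return process.env.GROQ_API_KEY
    ?? process.env.GOOGLE_GENERATIVE_AI_API_KEY
    ?? process.env.OPENROUTER_API_KEY
    ?? process.env.ANTHROPIC_API_KEY
    ?? process.env.OPENAI_API_KEY;
}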

Common Issues

  • Clipboard not working: Ensure your system supports clipboard operations
  • Model timeout: Some models (especially Ollama) may take longer to respond
  • Rate limiting: Built-in retry logic handles temporary rate limits
  • Command not found: Make sure the package is properly installed globally

Debug Mode

For troubleshooting, you can check:

  • Configuration file: ~/.lazyshell/config.json
  • System detection: The AI considers your OS, distro, and package manager
  • Command history: Generated commands are added to your shell history

Contributing 🤝

Contributions are welcome! Please feel free to submit a Pull Request.

Development Guidelines

  • Follow TypeScript best practices
  • Add tests for new features
  • Update documentation as needed
  • Run evaluations before submitting PRs
  • Use the KISS principle (Keep It Simple, Stupid)
  • Follow GitHub flow (create feature branches)

License 📄

This project is licensed under the GPL-3.0 License - see the LICENSE file for details.

Acknowledgments