
CodeSage-AI

A code quality LLM analyzer and migration helper

Version: 0.2.3

Introduction (Experimental)

Analyze the code quality of your React, Vue, and Angular projects using the power of Large Language Models (LLMs) for insightful suggestions, alongside planned integration for traditional linters.

This tool is currently experimental. LLM-based code analysis is an evolving field. Use the suggestions as a helpful guide and an augmentation to your existing code review processes, not as an infallible authority.

Features

  • LLM-Powered Code Analysis: Leverages a local LLM (via Ollama) to provide nuanced suggestions on:

    • Potential bugs and anti-patterns.
    • Code smells and areas for refactoring.
    • Readability and maintainability improvements.
    • Framework-specific best practices for React, Vue, and Angular.
  • Framework Support: Identifies and analyzes files for:

    • React (.jsx, .tsx)
    • Angular (.ts component/service/module files, .html component templates)
  • LLM-Powered Code Migration: Leverages a local LLM (via Ollama) to migrate code from one framework to another (e.g., React to Vue or Vue to React).

  • Framework Support For Migration:

    • React
    • Vue
  • File Traversal: Automatically discovers relevant source files in your project.

  • Write Report: Saves each analysis report alongside the analyzed file, using the same name with a .md extension.

  • Customizable LLM Model: Specify which Ollama model to use.

  • Caching: Speeds up analysis by skipping files that have not changed.

  • Documentation: Generates documentation describing component usage.

(Latest Fix)

  • Prompt: Further refined the prompt to minimize hallucinations.

(Planned Features)

  • Traditional Linter Integration: Combine LLM insights with reports from ESLint, Stylelint, and framework-specific linters.
  • Vue Support: Analysis of .vue single-file components.
  • HTML/Markdown Report Generation: Output reports in more shareable formats.
  • Configuration File: Allow detailed customization of rules, paths, and models.

Prerequisites

  1. Node.js: Version 18.0.0 or higher.

  2. Ollama: You must have Ollama installed and running on your system.

  3. Ollama LLM Model: You need to have a suitable language model pulled into Ollama. Recommended models for code analysis:

    • codegemma:7b (default)
    • codellama:13b-instruct (or other Code Llama variants like 7b, 34b)
    • llama3:8b (or other Llama 3 variants)
    • mistral or mixtral variants

    You can pull a model using the Ollama CLI, for example:

    ollama serve
    ollama pull codegemma:7b
    ollama run codegemma:7b

    Ensure the Ollama application is running before using this analyzer.

Installation

You can install the package locally as a development dependency in your project:

npm install --save-dev codesage-ai
# or
yarn add --dev codesage-ai

Alternatively, for global use (less common for project-specific tools):

npm install -g codesage-ai

Usage

Once installed, you can run the analyzer from the root of your project directory.

Using npx (if installed locally):

npx analyze-code [path_to_project]

For migration:

npx migrate-code [path_to_project] [source_framework] [target_framework] [optional_goals]

For example:

npx migrate-code . react vue 

For documentation:

npx generate-doc [path_to_project]

If [path_to_project] is omitted, it defaults to the current directory (.).

Examples:

Analyze the current project:

npx analyze-code .

Analyze a specific sub-directory:

npx analyze-code ./src/app

For documentation:

npx generate-doc ./src/app

Specifying the LLM Model:

You can specify which Ollama model to use via the OLLAMA_MODEL environment variable. If not set, it defaults to codellama:13b-instruct.

OLLAMA_MODEL=llama3:8b npx analyze-code .
# or for a larger model (will be slower, requires more resources)
OLLAMA_MODEL=codellama:34b-instruct npx analyze-code .
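The override can be pictured as a simple environment lookup with a fallback. This is an illustrative sketch only — `resolveModel` is a hypothetical name, not the package's API; the default shown follows the note above.

```javascript
// Hypothetical sketch: resolve the Ollama model from the environment,
// falling back to the documented default when OLLAMA_MODEL is unset.
const DEFAULT_MODEL = "codellama:13b-instruct";

function resolveModel(env) {
  return env.OLLAMA_MODEL || DEFAULT_MODEL;
}

// Usage: pass process.env in real code.
console.log(resolveModel(process.env));
```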

Adding to npm scripts (Recommended for local project usage):

Edit your project's package.json:

{
  "scripts": {
    "analyze:quality": "analyze-code .",
    "analyze:quality:deep": "OLLAMA_MODEL=codegemma:7b analyze-code ./src"
  }
}

Then run:

npm run analyze:quality
# or
npm run analyze:quality:deep

If installed globally:

analyze-code [path_to_project]
OLLAMA_MODEL=llama3:8b analyze-code .

How it Works

File Discovery: The tool scans your project directory for relevant source files based on common extensions for React, Vue, and Angular.

LLM Prompting: For each identified file, its content is sent to the configured Ollama LLM with a carefully crafted prompt asking for code quality analysis, potential issues, and suggestions.
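Illustrative only — the actual prompt is internal to the package — but a per-file analysis prompt could be assembled along these lines:

```javascript
// Hypothetical sketch of building an analysis prompt for one file.
// The framework name and file path give the model context; the file
// content is wrapped in clear delimiters.
function buildAnalysisPrompt(framework, filePath, source) {
  return [
    `You are a senior ${framework} code reviewer.`,
    `Analyze the following file (${filePath}) for potential bugs,`,
    "anti-patterns, code smells, and readability issues.",
    "Suggest concrete improvements.",
    "",
    "----- file content -----",
    source,
    "----- end of file -----",
  ].join("\n");
}
```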

Report Generation: The LLM's responses are collected and presented in a consolidated report in your console.

Contributing

This project is in its early stages, and contributions are welcome! If you have ideas, bug reports, or want to contribute code, please feel free to open an issue or a pull request on the GitHub repository.

Areas for contribution:

  • Improving prompts for different frameworks and analysis types.
  • Integrating standard linting tools (ESLint, etc.).
  • Adding more sophisticated file filtering and framework detection.
  • Developing better report formats (HTML, Markdown).
  • Adding configuration options.
  • Writing tests.

Disclaimer

The suggestions provided by the LLM are based on its training data and the prompt provided. They may not always be perfect, optimal, or cover all possible issues.

Always use your own judgment and conduct thorough code reviews. This tool is an assistant, not a replacement for human expertise.

Ensure your local Ollama setup and the models you use are from trusted sources.

Analyzing very large files or entire large projects can be resource-intensive and time-consuming due to LLM processing.

License

MIT License