
gecode-beta

v0.1.2


An interactive CLI AI coding assistant powered by Gemini.


gecode - AI CLI Coding Assistant


Project Overview

gecode is an interactive command-line interface (CLI) AI coding assistant designed to help developers with various coding tasks. Powered by large language models such as Google Gemini and OpenAI's models, gecode can understand complex requests, execute shell commands, manage files, and even interact with Git to streamline your development workflow. Because it builds a holistic understanding of your project context, its assistance is more targeted and efficient.

Features

  • Interactive CLI: A user-friendly and responsive terminal interface built with Ink and React.
  • Multi-LLM Provider Support: Seamlessly switch between Google Gemini, OpenAI (including compatible APIs like Groq, Ollama), and custom LLM endpoints.
  • Context Awareness: Automatically builds a comprehensive project context including environment details, Git history, file structure, and relevant code snippets to provide accurate assistance.
  • Autonomous Task Execution:
    • Thought Process: Models articulate their thinking process before executing actions.
    • Agentic Mode: For complex tasks, gecode can decompose them into smaller, sequential steps and execute them iteratively.
    • Direct Mode: For simpler, context-resolvable tasks, it provides direct solutions.
  • Powerful Tool Integration:
    • Bash Execution: Let the AI run shell commands on your behalf.
    • File System Operations: Read, create, and modify files.
    • Git Integration: View history, manage tracked files, and commit changes.
    • Persistent Memory: Utilize GEMINI.md as a scratchpad for long-term project notes and summaries.
  • Configurable: Easily set API keys, preferred models (thinkModel for complex reasoning, simpleModel for quick responses), and LLM providers via a simple command-line interface.
  • Request Logging: Logs all LLM interactions for traceability and debugging.
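
The split between Direct and Agentic mode described above can be sketched as a dispatcher that decides whether a request needs decomposition. This is a hypothetical illustration, not gecode's actual `modelDispatcher.ts` logic (which may consult the LLM itself); the function name and heuristic are assumptions:

```typescript
// Hypothetical sketch of a direct-vs-agentic dispatcher.
// gecode's real dispatcher may use an LLM call rather than a heuristic.

type ExecutionMode = "direct" | "agentic";

interface TaskPlan {
  mode: ExecutionMode;
  steps: string[]; // one step in direct mode, several in agentic mode
}

// Crude heuristic: requests with multiple parts (joined by "then" or ";")
// are decomposed into sequential steps and run iteratively.
function planTask(request: string): TaskPlan {
  const parts = request
    .split(/\bthen\b|;/i)
    .map((p) => p.trim())
    .filter(Boolean);
  if (parts.length > 1) {
    return { mode: "agentic", steps: parts };
  }
  return { mode: "direct", steps: [request.trim()] };
}
```

A real dispatcher would also weigh project context (files touched, tool calls required) before committing to a mode.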

Installation

To install gecode, ensure you have Node.js (>=18.0.0) and npm installed.

  1. Clone the repository:
    git clone https://github.com/your-repo/gecode.git
    cd gecode
  2. Install dependencies:
    npm install
  3. Build the project:
    npm run build
  4. Run gecode:
    npm start
    Alternatively, you can link it globally for direct access:
    npm link
    gecode

Configuration

Before using gecode, you need to configure your LLM provider and API keys.

  1. Set your preferred provider: Choose between gemini, openai, or custom.

    gecode config set provider gemini
    # OR
    gecode config set provider openai
    # OR
    gecode config set provider custom
  2. Set API keys and endpoints:

    For Google Gemini:

    gecode config set gemini.apiKey YOUR_GEMINI_API_KEY

    For OpenAI (or compatible APIs like Groq, Ollama):

    gecode config set openai.apiKey YOUR_OPENAI_API_KEY
    # Optional: set a custom base URL for compatible APIs (e.g., Groq, local Ollama)
    gecode config set openai.baseUrl https://api.groq.com/openai/v1

    For Custom OpenAI-compatible Providers:

    gecode config set customLlm.apiKey YOUR_CUSTOM_API_KEY
    gecode config set customLlm.endpointUrl YOUR_CUSTOM_ENDPOINT_URL
  3. Customize models (optional): You can specify different models for "thinking" (complex tasks) and "simple" (quick responses).

    gecode config set thinkModel gemini-1.5-pro-latest
    gecode config set simpleModel gemini-1.5-flash-latest

    To view your current configuration:

    gecode config get
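
Pieced together from the `config set` commands above, the stored configuration plausibly has the following shape. This is a sketch inferred from the key names shown; the actual schema used by gecode's `config.ts` is not documented here and may differ:

```typescript
// Hypothetical TypeScript shape for gecode's configuration, inferred from
// the `gecode config set` key names above. The real schema may differ.

interface GecodeConfig {
  provider: "gemini" | "openai" | "custom";
  thinkModel: string;  // model used for complex reasoning
  simpleModel: string; // model used for quick responses
  gemini?: { apiKey: string };
  openai?: { apiKey: string; baseUrl?: string };
  customLlm?: { apiKey: string; endpointUrl: string };
}

// What `gecode config get` might report after the steps above.
const example: GecodeConfig = {
  provider: "gemini",
  thinkModel: "gemini-1.5-pro-latest",
  simpleModel: "gemini-1.5-flash-latest",
  gemini: { apiKey: "YOUR_GEMINI_API_KEY" },
};
```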

Usage

Once configured, simply run gecode to start the interactive CLI:

gecode

Inside the CLI:

  • Type your natural language requests, e.g., "Summarize the purpose of src/core/orchestrator.ts."
  • Use /conf to enter the configuration view.
  • Use /clear to clear the chat history.
  • Use /exit to quit the application.
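
The input handling above amounts to routing: lines beginning with `/` are commands, anything else is sent to the AI. A minimal sketch of that routing (the function and type names are hypothetical, not gecode's actual CLI code):

```typescript
// Hypothetical sketch of CLI input routing: slash commands vs. AI prompts.
// Names here are assumptions; gecode's ui.tsx may structure this differently.

type InputKind =
  | { kind: "command"; name: "conf" | "clear" | "exit" }
  | { kind: "prompt"; text: string };

function routeInput(line: string): InputKind {
  const trimmed = line.trim();
  if (trimmed === "/conf" || trimmed === "/clear" || trimmed === "/exit") {
    return {
      kind: "command",
      name: trimmed.slice(1) as "conf" | "clear" | "exit",
    };
  }
  return { kind: "prompt", text: trimmed };
}
```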

Project Structure

gecode/
├── .vscode/                 # VSCode-specific settings
├── src/
│   ├── cli/                 # Command-line interface components (Ink/React)
│   │   ├── components/      # Reusable UI components
│   │   ├── index.ts         # CLI entry point
│   │   └── ui.tsx           # Main interactive UI logic
│   ├── config.ts            # Configuration management (Conf library)
│   ├── core/                # Core AI logic (orchestration, context building, prompts)
│   │   ├── contextBuilder.ts  # Builds comprehensive project context for the LLM
│   │   ├── modelDispatcher.ts # Decides between direct/agentic execution
│   │   ├── orchestrator.ts    # Main AI agent orchestrator
│   │   ├── prompts.ts         # AI system prompts
│   │   └── scheduler.ts       # Manages multi-file edits/tasks
│   ├── services/            # LLM provider integrations (Gemini, OpenAI, Custom)
│   │   ├── customLlmProvider.ts # Integration for custom OpenAI-compatible LLMs
│   │   ├── geminiProvider.ts    # Integration for the Google Gemini API
│   │   ├── index.ts             # LLM provider factory
│   │   ├── llmProvider.ts       # LLM provider interface
│   │   └── openaiProvider.ts    # Integration for the OpenAI API
│   ├── tools/               # AI-callable tools (Bash, File, Git, List)
│   │   ├── bashTool.ts      # Executes bash commands
│   │   ├── fileTools.ts     # Reads and writes files
│   │   ├── gitTool.ts       # Performs git operations
│   │   ├── listTool.ts      # Lists directory contents
│   │   └── tool.ts          # Base class for all tools
│   └── utils/               # Utility functions (environment, git, logging)
├── package.json             # Project metadata and dependencies
├── tsconfig.json            # TypeScript configuration
├── README.md                # This file
├── LICENSE                  # Project license
└── GEMINI.md                # Persistent scratchpad/memory for gecode
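
The `services/llmProvider.ts` interface in the tree above is the seam that lets the Gemini, OpenAI, and custom providers be swapped. Its shape is not shown in this README, so the following is a sketch under assumptions; the method and type names are illustrative, not gecode's actual API:

```typescript
// Hypothetical sketch of the provider interface that geminiProvider.ts,
// openaiProvider.ts, and customLlmProvider.ts could implement.
// All names here are assumptions, not gecode's actual API.

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface LlmProvider {
  readonly name: string;
  complete(messages: ChatMessage[], model: string): Promise<string>;
}

// A stub provider (e.g. for tests) that echoes the last user message.
class EchoProvider implements LlmProvider {
  readonly name = "echo";
  async complete(messages: ChatMessage[], _model: string): Promise<string> {
    const last = messages.filter((m) => m.role === "user").pop();
    return last ? last.content : "";
  }
}
```

A factory (as `services/index.ts` suggests) would then return the concrete provider matching the configured `provider` key.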

License

This project is licensed under the MIT License. See the LICENSE file for details.

Contributing

Contributions are welcome! Please open an issue or submit a pull request.

