hasina-gemini-cli

v1.0.2

Production-ready terminal AI chat application powered by the official Gemini API SDK.

Gemini CLI

Gemini CLI is a production-ready terminal chat application for Node.js that uses the official @google/genai SDK. It provides a polished CLI, streaming-friendly architecture, local JSON session storage, bounded conversation context, and a clean service-based structure that can grow into a multi-provider AI client later.

Features

  • Interactive terminal chat loop
  • Official Gemini API integration through @google/genai
  • Runtime model switching with /use-model
  • Numbered model chooser with /models
  • Session-level system prompt support
  • Bounded context memory for provider requests
  • Local JSON session persistence in data/sessions.json
  • Slash command system for session and runtime controls
  • Streaming-ready chat architecture with Gemini streaming enabled
  • Friendly, actionable terminal errors
  • Cross-platform support for Windows, macOS, and Linux

Folder Structure

Gemini-Cli/
  src/
    config/
      env.js
      gemini.js
    services/
      chat.service.js
      history.service.js
      session.service.js
      command.service.js
    utils/
      printer.js
      file.js
      validators.js
    app.js
    index.js
  data/
    sessions.json
  .env.example
  package.json
  README.md

Installation

Prerequisites

  • Node.js 20 or later
  • A Gemini API key from Google AI Studio

Note: the original product brief said Node.js 18+, but the current official @google/genai package requires Node.js 20+ as of March 10, 2026.

Quick install from GitHub

Install globally:

npm install -g github:Hasina69/Gemini-Cli

Run:

gemini-cli

Install from npm after publishing

Once the package is published on npm, install globally with:

npm install -g hasina-gemini-cli

Or run it directly without a global install:

npx hasina-gemini-cli

Local development install

Clone and install:

git clone https://github.com/Hasina69/Gemini-Cli.git
cd Gemini-Cli
npm install

Then create your environment file.

Environment file setup

For local repo usage, create .env in the project root.

Windows PowerShell:

Copy-Item .env.example .env

macOS/Linux:

cp .env.example .env

For global install usage, create .env in one of these locations:

  • Current working directory: .env
  • Windows shared config: %APPDATA%\GeminiCli\.env
  • macOS shared config: ~/Library/Application Support/GeminiCli/.env
  • Linux shared config: ~/.gemini-cli/.env

Update that .env with your Gemini API key and preferred defaults.

How To Create GEMINI_API_KEY In Google AI Studio

  1. Open Google AI Studio.
  2. Sign in with your Google account.
  3. Create a new API key.
  4. Copy the generated key.
  5. Paste it into your local .env file as GEMINI_API_KEY.

Environment Configuration

Use these variables in .env:

GEMINI_API_KEY=your_gemini_api_key_here
DEFAULT_MODEL=gemini-2.5-flash
MAX_HISTORY_MESSAGES=20
SYSTEM_PROMPT=You are a helpful terminal AI assistant focused on clear, accurate, and practical answers.

Variable Notes

  • GEMINI_API_KEY: required for authenticating to Gemini
  • DEFAULT_MODEL: the model loaded at startup
  • MAX_HISTORY_MESSAGES: max number of recent messages sent back to Gemini as context
  • SYSTEM_PROMPT: default system instruction for each new session
  • GEMINI_CLI_HOME: optional custom config directory for shared .env and session storage
  • GEMINI_CLI_SESSIONS_FILE: optional custom path for sessions.json
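
The bounded context behavior controlled by MAX_HISTORY_MESSAGES can be sketched in a few lines. This is a minimal illustration of the idea, not the app's actual implementation; the function name `trimContext` is hypothetical.

```javascript
// Keep only the most recent maxHistoryMessages messages when building
// the provider request, so context stays bounded as a chat grows.
function trimContext(messages, maxHistoryMessages = 20) {
  if (messages.length <= maxHistoryMessages) return messages;
  return messages.slice(-maxHistoryMessages);
}
```

Full history can still live in session storage; only the provider request is trimmed.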

Running The App

If you used the global install:

gemini-cli

If you are running from the local repo:

Development mode:

npm run dev

Production mode:

npm start

Command Reference

| Command | Description |
| --- | --- |
| /help | Show all commands |
| /exit | Exit the app cleanly |
| /clear | Clear in-memory history for the current session |
| /history | Show recent conversation messages |
| /models | Open a numbered Gemini model chooser with backend version info |
| /save | Persist the current session to local session storage |
| /new | Start a fresh conversation session |
| /model | Show the currently active model with backend version details |
| /use-model <model_name> | Switch the active Gemini model at runtime |
| /system | Show the active system prompt |
| /set-system <text> | Override the current system prompt for this session |
| /sessions | List saved local sessions |
| /load <session_id> | Load a saved session from local storage |

Example Terminal Session

GEMINI CLI banner...

Info > Active model: gemini-2.5-flash
Info > Session ID: session_001
Info > Type a message to chat, use /models to choose a model, or /help to list commands.

You > /models
Command > Choose a Gemini Model
01. gemini-2.5-flash [active]
    Gemini 2.5 Flash
    version=2.5-flash | input=1M | output=65.5K
02. gemini-2.5-pro
    Gemini 2.5 Pro
    version=2.5-pro | input=1M | output=65.5K
Model > 2
Success > Active model changed to "gemini-2.5-pro" (version 2.5-pro).

You > Explain event loops in Node.js.
Gemini > The Node.js event loop coordinates timers, I/O callbacks, microtasks,
         and application work without blocking the main thread...

You > /save
Success > Session "session_001" saved to local session storage.

You > /exit
Info > Closing Gemini Terminal.

Architecture Notes

  • src/config/gemini.js contains the Gemini-specific provider wrapper
  • src/services/chat.service.js is provider-oriented and keeps request building separate from the terminal UI
  • src/services/history.service.js stores provider-neutral messages using { role, content }
  • src/services/session.service.js isolates local JSON persistence
  • src/services/command.service.js parses commands without depending on terminal rendering
  • src/utils/printer.js owns terminal presentation and loading behavior

This makes it straightforward to add future providers such as OpenAI or Claude without rewriting the app loop.
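
As a sketch of the separation described above, a command parser in the spirit of command.service.js might classify input without touching terminal rendering. The function name `parseInput` and the return shape are illustrative assumptions, not the service's real API.

```javascript
// Classify a line of user input as either a slash command or a chat
// message. Rendering and execution happen elsewhere.
function parseInput(line) {
  const trimmed = line.trim();
  if (!trimmed.startsWith('/')) {
    return { type: 'message', text: trimmed };
  }
  const [name, ...args] = trimmed.slice(1).split(/\s+/);
  return { type: 'command', name: name.toLowerCase(), args };
}
```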

Notes About Changing Models

  • The startup model comes from DEFAULT_MODEL in .env
  • Use /model to inspect the active model
  • Use /models to open a numbered chooser and switch without typing the full model name
  • Use /use-model gemini-2.5-flash or another valid Gemini model name to switch at runtime
  • /use-model validates the model with the Gemini API before applying the change
  • If a saved session is loaded with /load, the saved model becomes the active model

Notes About Local Session Storage

  • Local repo mode stores sessions in data/sessions.json
  • Global install mode stores sessions in your shared config directory:
    • Windows: %APPDATA%\GeminiCli\sessions.json
    • macOS: ~/Library/Application Support/GeminiCli/sessions.json
    • Linux: ~/.gemini-cli/sessions.json
  • The app does not auto-save on exit; use /save when you want persistence
  • Each saved session stores:
    • session ID
    • creation and update timestamps
    • active model
    • active system prompt
    • provider-neutral messages
  • If data/sessions.json becomes malformed JSON, the app will stop with a clear storage error instead of silently overwriting data
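
Based on the fields listed above, a saved sessions file might look roughly like the sketch below. The key names (`id`, `createdAt`, `model`, and so on) are illustrative guesses; the actual file may use different keys.

```json
{
  "sessions": [
    {
      "id": "session_001",
      "createdAt": "2026-03-10T12:00:00.000Z",
      "updatedAt": "2026-03-10T12:05:00.000Z",
      "model": "gemini-2.5-flash",
      "systemPrompt": "You are a helpful terminal AI assistant.",
      "messages": [
        { "role": "user", "content": "Explain event loops in Node.js." },
        { "role": "assistant", "content": "The Node.js event loop coordinates timers..." }
      ]
    }
  ]
}
```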

One-Line Commands

Global install from GitHub:

npm install -g github:Hasina69/Gemini-Cli

Global install from npm after publishing:

npm install -g hasina-gemini-cli

Run with npx after publishing:

npx hasina-gemini-cli

Run after global install:

gemini-cli

Local clone and run:

git clone https://github.com/Hasina69/Gemini-Cli.git
cd Gemini-Cli
npm install
npm start

Troubleshooting

Missing GEMINI_API_KEY

If startup fails with a configuration error, confirm .env exists and includes a non-empty GEMINI_API_KEY.
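
A startup check along these lines would produce that configuration error. This is a hedged sketch of the behavior described here; `validateApiKey` is an illustrative helper name, not the app's actual function.

```javascript
// Fail fast with an actionable message when GEMINI_API_KEY is missing
// or blank, instead of letting the first API call fail obscurely.
function validateApiKey(env = process.env) {
  const key = (env.GEMINI_API_KEY || '').trim();
  if (!key) {
    throw new Error(
      'GEMINI_API_KEY is missing. Create a .env file and set GEMINI_API_KEY=<your key>.'
    );
  }
  return key;
}
```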

Invalid Or Unsupported Model

If Gemini rejects a model:

  • double-check the exact model name you passed to /use-model
  • try a known working model such as gemini-2.5-flash
  • confirm your API key has access to that model

Rate Limits Or Quota Errors

If you see rate limit or quota messages:

  • wait and retry later for rate limiting
  • check quota and billing in Google AI Studio for quota exhaustion

Node Version Errors

If npm install or startup fails on Node 18 or Node 19:

  • upgrade to Node 20+
  • reinstall dependencies after upgrading

Broken Session Storage

If the session storage file is manually edited and becomes invalid JSON:

  • fix the JSON manually

  • or replace it with:

    {
      "sessions": []
    }

Network Failures

If Gemini requests fail due to connectivity:

  • check your internet connection
  • retry after transient failures
  • verify corporate firewall or proxy rules are not blocking outbound HTTPS requests