hasina-gemini-cli v1.0.2

Production-ready terminal AI chat application powered by the official Gemini API SDK.
# Gemini CLI

Gemini CLI is a production-ready terminal chat application for Node.js that uses the official `@google/genai` SDK. It provides a polished CLI, a streaming-friendly architecture, local JSON session storage, bounded conversation context, and a clean service-based structure that can grow into a multi-provider AI client later.
## Features

- Interactive terminal chat loop
- Official Gemini API integration through `@google/genai`
- Runtime model switching with `/use-model`
- Numbered model chooser with `/models`
- Session-level system prompt support
- Bounded context memory for provider requests
- Local JSON session persistence in `data/sessions.json`
- Slash command system for session and runtime controls
- Streaming-ready chat architecture with Gemini streaming enabled
- Friendly, actionable terminal errors
- Cross-platform support for Windows, macOS, and Linux
## Folder Structure

```text
Gemini-Cli/
  src/
    config/
      env.js
      gemini.js
    services/
      chat.service.js
      history.service.js
      session.service.js
      command.service.js
    utils/
      printer.js
      file.js
      validators.js
    app.js
    index.js
  data/
    sessions.json
  .env.example
  package.json
  README.md
```

## Installation
### Prerequisites

- Node.js 20 or later
- A Gemini API key from Google AI Studio

Note: the original product brief said Node.js 18+, but the current official `@google/genai` package requires Node.js 20+ as of March 10, 2026.
### Quick install from GitHub

Install globally:

```sh
npm install -g github:Hasina69/Gemini-Cli
```

Run:

```sh
gemini-cli
```

### Install from npm after publishing
Once the package is published on npm, install globally with:

```sh
npm install -g hasina-gemini-cli
```

Or run it directly without a global install:

```sh
npx hasina-gemini-cli
```

### Local development install
Clone and install:

```sh
git clone https://github.com/Hasina69/Gemini-Cli.git
cd Gemini-Cli
npm install
```

Then create your environment file.
## Environment file setup

For local repo usage, create `.env` in the project root.

Windows PowerShell:

```powershell
Copy-Item .env.example .env
```

macOS/Linux:

```sh
cp .env.example .env
```

For global install usage, create `.env` in one of these locations:

- Current working directory: `.env`
- Windows shared config: `%APPDATA%\GeminiCli\.env`
- macOS shared config: `~/Library/Application Support/GeminiCli/.env`
- Linux shared config: `~/.gemini-cli/.env`

Update that `.env` with your Gemini API key and preferred defaults.
## How To Create GEMINI_API_KEY In Google AI Studio

1. Open Google AI Studio.
2. Sign in with your Google account.
3. Create a new API key.
4. Copy the generated key.
5. Paste it into your local `.env` file as `GEMINI_API_KEY`.
## Environment Configuration

Use these variables in `.env`:

```ini
GEMINI_API_KEY=your_gemini_api_key_here
DEFAULT_MODEL=gemini-2.5-flash
MAX_HISTORY_MESSAGES=20
SYSTEM_PROMPT=You are a helpful terminal AI assistant focused on clear, accurate, and practical answers.
```

### Variable Notes

- `GEMINI_API_KEY`: required for authenticating to Gemini
- `DEFAULT_MODEL`: the model loaded at startup
- `MAX_HISTORY_MESSAGES`: maximum number of recent messages sent back to Gemini as context
- `SYSTEM_PROMPT`: default system instruction for each new session
- `GEMINI_CLI_HOME`: optional custom config directory for the shared `.env` and session storage
- `GEMINI_CLI_SESSIONS_FILE`: optional custom path for `sessions.json`
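The bounded-context behavior controlled by `MAX_HISTORY_MESSAGES` can be sketched roughly as follows. This is an illustrative snippet, not the app's actual `history.service.js` code; the function name and default are assumptions mirroring the `.env` example above:

```javascript
// Illustrative sketch only: trim the provider request to the most recent
// messages while the full history stays in memory locally.
const MAX_HISTORY_MESSAGES = 20; // mirrors the .env default above

// history holds provider-neutral { role, content } messages
function buildProviderContext(history, limit = MAX_HISTORY_MESSAGES) {
  return history.slice(-limit); // keep only the newest `limit` messages
}

const history = Array.from({ length: 25 }, (_, i) => ({
  role: i % 2 === 0 ? "user" : "assistant",
  content: `message ${i}`,
}));

const context = buildProviderContext(history);
console.log(context.length); // 25 messages trimmed to 20
console.log(context[0].content); // oldest surviving message: "message 5"
```

Older messages stay in the local session; only the trimmed window travels to the provider on each request.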
## Running The App

If you used the global install:

```sh
gemini-cli
```

If you are running from the local repo:

Development mode:

```sh
npm run dev
```

Production mode:

```sh
npm start
```

## Command Reference
| Command | Description |
| --- | --- |
| /help | Show all commands |
| /exit | Exit the app cleanly |
| /clear | Clear in-memory history for the current session |
| /history | Show recent conversation messages |
| /models | Open a numbered Gemini model chooser with backend version info |
| /save | Persist the current session to local session storage |
| /new | Start a fresh conversation session |
| /model | Show the currently active model with backend version details |
| /use-model <model_name> | Switch the active Gemini model at runtime |
| /system | Show the active system prompt |
| /set-system <text> | Override the current system prompt for this session |
| /sessions | List saved local sessions |
| /load <session_id> | Load a saved session from local storage |
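A minimal sketch of how a slash-command layer like the one above can separate parsing from rendering. The function name is illustrative; the project's actual `command.service.js` may be structured differently:

```javascript
// Hypothetical sketch: parse a raw input line into a command descriptor,
// or return null so the line is treated as a plain chat message.
function parseCommand(line) {
  if (!line.startsWith("/")) return null; // plain chat message
  const [name, ...args] = line.slice(1).trim().split(/\s+/);
  return { name, args };
}

console.log(JSON.stringify(parseCommand("/use-model gemini-2.5-flash")));
// {"name":"use-model","args":["gemini-2.5-flash"]}
console.log(parseCommand("hello there")); // null
```

Keeping parsing free of terminal rendering is what lets the command system be tested and extended independently of the UI.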
## Example Terminal Session

```text
GEMINI CLI banner...
Info > Active model: gemini-2.5-flash
Info > Session ID: session_001
Info > Type a message to chat, use /models to choose a model, or /help to list commands.

You > /models
Command > Choose a Gemini Model
  01. gemini-2.5-flash [active]
      Gemini 2.5 Flash
      version=2.5-flash | input=1M | output=65.5K
  02. gemini-2.5-pro
      Gemini 2.5 Pro
      version=2.5-pro | input=1M | output=65.5K
Model > 2
Success > Active model changed to "gemini-2.5-pro" (version 2.5-pro).

You > Explain event loops in Node.js.
Gemini > The Node.js event loop coordinates timers, I/O callbacks, microtasks,
and application work without blocking the main thread...

You > /save
Success > Session "session_001" saved to local session storage.

You > /exit
Info > Closing Gemini Terminal.
```

## Architecture Notes
- `src/config/gemini.js` contains the Gemini-specific provider wrapper
- `src/services/chat.service.js` is provider-oriented and keeps request building separate from the terminal UI
- `src/services/history.service.js` stores provider-neutral messages using `{ role, content }`
- `src/services/session.service.js` isolates local JSON persistence
- `src/services/command.service.js` parses commands without depending on terminal rendering
- `src/utils/printer.js` owns terminal presentation and loading behavior

This makes it straightforward to add future providers such as OpenAI or Claude without rewriting the app loop.
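As a rough sketch of the provider boundary this structure enables (the function name is illustrative, not one of the project's actual exports), provider-neutral `{ role, content }` messages are translated into a provider-specific payload only at the edge:

```javascript
// Hypothetical adapter sketch: convert provider-neutral history messages
// into the { role, parts } shape Gemini-style request bodies use. Adding
// another provider would mean adding another small adapter like this one.
function toGeminiContents(messages) {
  return messages.map((m) => ({
    // Gemini uses "model" where a neutral history may say "assistant"
    role: m.role === "assistant" ? "model" : m.role,
    parts: [{ text: m.content }],
  }));
}

const neutral = [
  { role: "user", content: "Hello" },
  { role: "assistant", content: "Hi! How can I help?" },
];

const contents = toGeminiContents(neutral);
console.log(contents[1].role); // "model"
```

Because the stored history never carries Gemini-specific fields, swapping or adding providers touches only the adapter, not the chat loop or session storage.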
## Notes About Changing Models

- The startup model comes from `DEFAULT_MODEL` in `.env`
- Use `/model` to inspect the active model
- Use `/models` to open a numbered chooser and switch without typing the full model name
- Use `/use-model gemini-2.5-flash` or another valid Gemini model name to switch at runtime
- `/use-model` validates the model with the Gemini API before applying the change
- If a saved session is loaded with `/load`, the saved model becomes the active model
## Notes About Local Session Storage

- Local repo mode stores sessions in `data/sessions.json`
- Global install mode stores sessions in your shared config directory:
  - Windows: `%APPDATA%\GeminiCli\sessions.json`
  - macOS: `~/Library/Application Support/GeminiCli/sessions.json`
  - Linux: `~/.gemini-cli/sessions.json`
- The app does not auto-save on exit; use `/save` when you want persistence
- Each saved session stores:
  - session ID
  - creation and update timestamps
  - active model
  - active system prompt
  - provider-neutral messages
- If `data/sessions.json` becomes malformed JSON, the app will stop with a clear storage error instead of silently overwriting data
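A `sessions.json` after one `/save` might look like the following. The exact field names here are assumptions based on the list above, not the app's actual schema:

```json
{
  "sessions": [
    {
      "id": "session_001",
      "createdAt": "2025-01-01T10:00:00.000Z",
      "updatedAt": "2025-01-01T10:05:00.000Z",
      "model": "gemini-2.5-pro",
      "systemPrompt": "You are a helpful terminal AI assistant focused on clear, accurate, and practical answers.",
      "messages": [
        { "role": "user", "content": "Explain event loops in Node.js." },
        { "role": "assistant", "content": "The Node.js event loop coordinates timers, I/O callbacks, microtasks..." }
      ]
    }
  ]
}
```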
## One-Line Commands

Global install from GitHub:

```sh
npm install -g github:Hasina69/Gemini-Cli
```

Global install from npm after publishing:

```sh
npm install -g hasina-gemini-cli
```

Run with npx after publishing:

```sh
npx hasina-gemini-cli
```

Run after global install:

```sh
gemini-cli
```

Local clone and run:

```sh
git clone https://github.com/Hasina69/Gemini-Cli.git
cd Gemini-Cli
npm install
npm start
```

## Troubleshooting
### Missing GEMINI_API_KEY

If startup fails with a configuration error, confirm `.env` exists and includes a non-empty `GEMINI_API_KEY`.
### Invalid Or Unsupported Model

If Gemini rejects a model:

- verify the name with `/use-model`
- try a known working model such as `gemini-2.5-flash`
- confirm your API key has access to that model
### Rate Limits Or Quota Errors

If you see rate limit or quota messages:
- wait and retry later for rate limiting
- check quota and billing in Google AI Studio for quota exhaustion
### Node Version Errors

If `npm install` or startup fails on Node 18 or Node 19:
- upgrade to Node 20+
- reinstall dependencies after upgrading
### Broken Session Storage

If the session storage file is manually edited and becomes invalid JSON, fix the JSON manually or replace its contents with:

```json
{ "sessions": [] }
```
### Network Failures

If Gemini requests fail due to connectivity:
- check your internet connection
- retry after transient failures
- verify corporate firewall or proxy rules are not blocking outbound HTTPS requests
