neochat
v3.7.1
A CLI chatbot that lets you choose between ChatGPT, Claude, and Gemini as the underlying model — styled after Claude Code.
Install
npm install -g neochat
Setup
Set the API key(s) for the provider(s) you want to use:
export OPENAI_API_KEY="sk-..." # for GPT models
export ANTHROPIC_API_KEY="sk-ant-..." # for Claude models
export GEMINI_API_KEY="..." # for Gemini models
neochat also auto-loads these keys from your shell rc files if they aren't already in the environment — handy when you launch it from a shell that didn't source them (non-login shells, IDE terminals, etc.):
zsh → ~/.zshrc, ~/.zshenv, ~/.zprofile, ~/.profile
bash (default) → ~/.bashrc, ~/.bash_profile, ~/.profile
Both export FOO=bar and plain FOO=bar forms are recognized, with
single/double quotes and trailing # comments handled. Values containing
shell expansions ($VAR, $(...), backticks) are skipped — export the
resolved value or set it in your current shell instead.
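As a rough sketch of the recognition rules above — quote stripping, trailing comments, and skipping values that need shell expansion — the logic could look like this (parseEnvAssignment is a hypothetical helper for illustration, not neochat's actual implementation):

```javascript
// Hypothetical sketch of the rc-file assignment rules described above.
function parseEnvAssignment(line) {
  // Match `export FOO=...` or plain `FOO=...`.
  const m = line.match(/^\s*(?:export\s+)?([A-Za-z_][A-Za-z0-9_]*)=(.*)$/);
  if (!m) return null;
  let [, name, value] = m;

  // Strip matching single or double quotes around the value.
  const quoted = value.match(/^(['"])(.*)\1\s*(?:#.*)?$/);
  if (quoted) {
    value = quoted[2];
  } else {
    // Unquoted: drop a trailing ` # comment` and surrounding whitespace.
    value = value.replace(/\s+#.*$/, '').trim();
  }

  // Skip values that need shell expansion ($VAR, $(...), backticks).
  if (/[$`]/.test(value)) return null;

  return { name, value };
}
```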
Usage
# Start with the default model (gpt-4o)
neochat
# Start with a specific model
neochat claude-sonnet-4-20250514
# Or set a default via env var
export NEOCHAT_MODEL=claude-sonnet-4-20250514
neochat
Slash Commands
| Command | Description |
| --------------- | -------------------------------- |
| /model | List available models |
| /model <name> | Switch to a different model |
| /clear | Clear conversation history |
| /help | Show available commands |
| /exit | Quit neochat |
Supported Models
When you pick a provider in the interactive menu, neochat calls that
provider's models endpoint with your API key and shows you the live list
of models your account can actually use (OpenAI /v1/models, Anthropic
/v1/models, Gemini /v1beta/models). After you pick a model, neochat
sends a 1-token ping to confirm the key works and you have access to
that specific model — so you find out immediately, not on your first chat.
If the provider's list endpoint is unreachable, neochat falls back to a built-in list (kept as a safety net, not the source of truth):
OpenAI (fallback): gpt-4o, gpt-4o-mini, gpt-4-turbo, gpt-3.5-turbo
Anthropic (fallback): claude-sonnet-4-20250514, claude-opus-4-20250514, claude-haiku-4-20250514
Google (fallback): gemini-2.5-pro, gemini-2.5-flash, gemini-2.0-flash, gemini-2.0-flash-lite
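For a sense of what the post-selection ping involves, here is a sketch of a minimal 1-token request against the Anthropic Messages API (buildPing is an illustrative name, not neochat's code; the other providers would use their own endpoints):

```javascript
// Hypothetical sketch of the "1-token ping" used to confirm key + model
// access right after selection.
function buildPing(model) {
  return {
    url: 'https://api.anthropic.com/v1/messages',
    headers: {
      'x-api-key': process.env.ANTHROPIC_API_KEY ?? '',
      'anthropic-version': '2023-06-01',
      'content-type': 'application/json',
    },
    body: {
      model,                 // the model the user just picked
      max_tokens: 1,         // cheapest possible round trip
      messages: [{ role: 'user', content: 'ping' }],
    },
  };
}

// const { url, headers, body } = buildPing('claude-sonnet-4-20250514');
// await fetch(url, { method: 'POST', headers, body: JSON.stringify(body) });
```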
Web fetching
neochat auto-connects the upstream MCP Fetch server, which exposes a fetch tool that retrieves a URL and returns it as extracted markdown (or raw HTML). There is no self-hosted search — the model asks for specific URLs.
Requirements (one of):
# Preferred: uvx handles install automatically
# https://github.com/astral-sh/uv
uvx mcp-server-fetch --help # verify it runs
# Or install into your Python env:
pip install mcp-server-fetch
Neochat detects uvx first and falls back to python -m mcp_server_fetch.
Filesystem access
neochat also auto-connects the upstream MCP Filesystem server, which gives the model sandboxed tools for reading, writing, editing, and navigating local files (read_text_file, write_file, edit_file, list_directory, search_files, …).
Access is restricted to the roots you authorize. By default the sandbox is the current working directory. To expand it, set NEOCHAT_FS_ROOTS to a list of directories separated by your OS path separator (: on Linux/macOS, ; on Windows) before launching:
# Linux / macOS
export NEOCHAT_FS_ROOTS="$HOME/projects:$HOME/notes"
# Windows (PowerShell)
$env:NEOCHAT_FS_ROOTS = "C:\projects;C:\notes"
neochat
Runs via npx -y @modelcontextprotocol/server-filesystem <roots...> — no install needed beyond npx (ships with Node).
Persistent memory
neochat auto-connects the upstream MCP Memory server — a knowledge-graph store the model uses to remember facts across sessions (user role, preferences, ongoing projects, feedback it shouldn't repeat).
Tools exposed: create_entities, create_relations, add_observations, delete_entities, delete_relations, delete_observations, read_graph, search_nodes, open_nodes.
The graph is persisted to $MEMORY_FILE_PATH (default ~/.neochat/memory.json). Point it elsewhere to share memory across machines (e.g. a synced directory) or to keep project-scoped graphs:
export MEMORY_FILE_PATH="$HOME/Dropbox/neochat-memory.json"
# or per-project
MEMORY_FILE_PATH=./.neochat-memory.json neochat
Delete the file to wipe memory completely.
Sequential thinking
neochat auto-connects the upstream MCP Sequential Thinking server, which exposes a single sequentialthinking tool. The model uses it to externalize structured, revisable chains of reasoning for hard problems (non-trivial refactors, opaque debugging, design work). No configuration — runs via npx -y @modelcontextprotocol/server-sequential-thinking.
License
MIT
