@alhisan/gac v1.3.0
# GAC CLI (gac)
Terminal client for OpenAI-compatible APIs (including GPT4All) and Ollama. Supports streaming responses, interactive chat, and configurable markdown rendering using terminal-kit.
## Installation
Requirements: Node.js 18+ and a running OpenAI-compatible server (like GPT4All) or Ollama.
```
npm install -g @alhisan/gac
```

Or, if you don't want to install globally:

```
npm install
node bin/gac.js --help
```

Test if it works:

```
gac --help
```

## Usage
Single prompt:

```
gac -a "Hello gpt4all, how are you doing today?"
gac "How do I push to GitHub?"
gac suggest "How do I connect to ssh server on a custom port 5322?"
gac explain "How do I use rsync?"
gac suggest -d "Give me step-by-step instructions to set up an SSH server on port 5322"
gac runbook "Set up a new Node.js project with eslint"
```

List models and set a default:
```
gac models
```

This opens an interactive selector. Use arrow keys + Enter to choose a model, or Ctrl+C/Esc to cancel.
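The model list comes from the server's `/v1/models` endpoint. As a rough sketch of the standard OpenAI-style payload such a server returns (the sample model ID below is just an example), a client can collect the IDs like this:

```javascript
// Sketch: extract model IDs from an OpenAI-compatible /v1/models response.
// The payload shape ({ object: "list", data: [{ id, ... }] }) is the standard
// OpenAI format that GPT4All-style servers mimic; this is not gac's actual code.
function modelIds(body) {
  return (body.data || []).map((m) => m.id);
}

// Example payload a local server might return:
const sample = {
  object: 'list',
  data: [{ id: 'Llama 3 8B Instruct', object: 'model' }],
};

console.log(modelIds(sample)); // [ 'Llama 3 8B Instruct' ]
```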
Interactive mode:

```
gac chat
```

Exit chat with `exit`, `quit`, or Ctrl+C.
Flags:

- `--no-render`: disables markdown styling for that run.
- `--debug-render`: prints the raw model output after the rendered response.
- `-d, --detailed-suggest`: enables more detailed, step-by-step suggestions in `suggest` mode (can also be set via the config key `detailedSuggest`).
- `--detailed-context`: includes current directory context in `suggest`/`explain` prompts (can also be set via the config key `detailedContext`).
## Configuration
The config file is created on first run:

- Primary: `~/.gac/config.json`
- Fallback: `.gac/config.json` (when home is not writable)
View and edit:

```
gac config
gac config tui
gac config get baseUrl
gac config set baseUrl http://localhost:4891
gac config set model "Llama 3 8B Instruct"
gac config set markdownStyles.codeStyles '["#8be9fd"]'
gac config set detailedSuggest true
gac config set detailedContext true
```

### Core settings

- `provider` (string): `openai` (default) or `ollama`
- `baseUrl` (string): GPT4All server base, e.g. `http://localhost:4891`
- `ollamaBaseUrl` (string): Ollama base, e.g. `http://localhost:11434`
- `apiKey` (string): API key for OpenAI-compatible services (empty for local servers)
- `model` (string): model ID from `/v1/models`
- `temperature` (number)
- `maxTokens` (number)
- `stream` (boolean)
- `requestTimeoutMs` (number): request timeout in milliseconds (0 to disable). Useful for larger models or slower servers.
- `defaultAction` (string): default mode for direct prompts (`suggest`, `ask`, or `explain`).
- `detailedSuggest` (boolean): when `true`, `suggest` mode returns more detailed, step-by-step suggestions.
- `detailedContext` (boolean): when `true`, `suggest`/`explain` prompts include the current directory and `ls` output.
- `renderMarkdown` (boolean)
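To illustrate the `requestTimeoutMs` semantics (0 disables the timeout), here is a hypothetical sketch of how such a setting could be turned into fetch options on Node 18+; this is not gac's actual code:

```javascript
// Hypothetical sketch: turn a requestTimeoutMs config value into options for
// fetch (Node 18+). A value of 0 (or unset) disables the timeout, matching
// the README's description of the setting.
function timeoutOptions(requestTimeoutMs) {
  if (!requestTimeoutMs) return {}; // 0 or unset: no timeout
  return { signal: AbortSignal.timeout(requestTimeoutMs) };
}

// Usage: fetch(`${baseUrl}/v1/models`, timeoutOptions(config.requestTimeoutMs))
```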
## Markdown styling

All markdown options live under `markdownStyles`:

- `headerStyles` (array of styles)
- `headerStylesByLevel` (object, keys `1`–`6` → array of styles)
- `headerUnderline` (boolean)
- `headerUnderlineLevels` (array of levels to underline)
- `headerUnderlineStyle` (array of styles)
- `headerUnderlineChar` (string, single character)
- `codeStyles` (array of styles)
- `codeBackground` (array of styles)
- `codeBorder` (boolean)
- `codeBorderStyle` (array of styles)
- `codeGutter` (string)
- `codeBorderChars` (object: `topLeft`, `top`, `topRight`, `bottomLeft`, `bottom`, `bottomRight`)
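As a sketch of how the per-level option might override the general one (a hypothetical resolution written for illustration; gac's renderer may resolve it differently):

```javascript
// Hypothetical sketch: pick the styles for a header of a given level,
// preferring headerStylesByLevel and falling back to headerStyles.
function stylesForHeader(markdownStyles, level) {
  const byLevel = markdownStyles.headerStylesByLevel || {};
  return byLevel[String(level)] || markdownStyles.headerStyles || [];
}

const md = {
  headerStyles: ['bold'],
  headerStylesByLevel: { 1: ['bold', 'brightWhite'] },
};
```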
Style values can be:

- Terminal-kit style names like `bold`, `underline`, `dim`, `brightWhite`
- Foreground hex colors: `"#ffcc00"`
- Background hex colors: `"bg:#202020"` or `"bg#202020"`
- Default/transparent: `"default"` (fg) or `"bg:default"`
Example:

```json
{
  "markdownStyles": {
    "headerStylesByLevel": {
      "1": ["bold", "brightWhite"],
      "2": ["bold"],
      "3": ["bold"],
      "4": ["dim"],
      "5": ["dim"],
      "6": ["dim"]
    },
    "headerUnderline": true,
    "headerUnderlineLevels": [1],
    "codeStyles": ["#8be9fd"],
    "codeBackground": ["bg:default"],
    "codeBorderStyle": ["#444444"]
  }
}
```

## Troubleshooting
If you see connection errors, verify the server is reachable:

```
curl http://[SERVER_ADDRESS]:[SERVER_PORT]/v1/models
```

For Ollama:

```
curl http://[SERVER_ADDRESS]:[SERVER_PORT]/api/tags
```

## License
GNU General Public License v3.0. See LICENSE.
## Disclaimer
This was mostly vibe-coded, and I'm treating it as a fun side project / tool that will likely continue to be improved and updated by agentic models.

A note on the `runbook` command: it is somewhat dangerous. I tried to add some guard rails by maintaining a list of blocked commands. However, please be responsible and keep in mind that the model may return commands that need editing, and the program will execute them one by one without checking for any values or changes that should be made first.
