Gwendoline

Gwendoline is a CLI-based tool for interacting with language models directly from your terminal, allowing you to send prompts and receive responses via standard input and output.

It uses Ollama with the following default models:

  • qwen3:4b for local usage
  • gpt-oss:120b-cloud for usage with a cloud model

Dependencies

Gwendoline depends on Ollama as a local runtime for language models. By default, it uses the qwen3:4b model for local processing via Ollama, and the gpt-oss:120b-cloud model for cloud-based requests. Both models are preconfigured but must be installed via Ollama.

Alternatively, a different model can be specified as a CLI parameter to override the defaults.
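
If the default models are not yet present locally, they can usually be fetched with the standard Ollama CLI. This is a sketch; model availability depends on your Ollama installation, and cloud models such as gpt-oss:120b-cloud require an Ollama account:

# pull the default local model
ollama pull qwen3:4b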

Installation

npm install -g gwendoline

Usage

Run gwendoline (or its shorter alias gwen) from the command line.

Some examples of how to run it:

gwen

echo "Why is the sky blue?" | gwen

cat prompt.md | gwen
cat prompt.md | gwen --cloud
cat prompt.md | gwen --mcp
cat prompt.md | gwen --model gpt-oss:120b-cloud
cat prompt.md | gwen --model gpt-oss:120b-cloud > output.md
cat prompt.md | gwen --stream
cat prompt.md | gwen --thinking
cat prompt.md | gwen --stream --thinking
cat input.json | gwen --chat > output.json
cat input.json | gwen --chat --mcp > output.json
gwen --debug

Chat Mode Usage

Chat mode runs Gwendoline against a set of chat messages, including their roles.

This mode cannot be combined with streaming or thinking!

Create a file with the input messages first, or pipe them in, and then run with the --chat parameter.

In chat mode, Gwendoline expects the input to already be a list of chat messages, which must include at least the message you want to ask now. The output is a list of chat messages as well, with the LLM's response appended.

For example, create a file chat-input.json with the content:

[{ "role": "user", "content": "Why is the sky blue?" }]

Run command:

cat chat-input.json | gwendoline --chat --model gpt-oss:120b-cloud > chat-output.json
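
Assuming the standard role/content message objects described above, chat-output.json will then contain the full conversation; the assistant's content below is only illustrative:

[
  { "role": "user", "content": "Why is the sky blue?" },
  { "role": "assistant", "content": "The sky appears blue because of Rayleigh scattering ..." }
]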

Command Line Parameters

Gwendoline supports the following command line parameters to customize its behavior:

Model Selection

  • --cloud
    Uses the cloud-based model (gpt-oss:120b-cloud) instead of the local model.

  • --model <model-name>
    Specifies a custom model to use. Overrides both the default local and cloud models.
    Example: gwen --model llama3:8b

MCP Integration

  • --mcp
    Enables MCP (Model Context Protocol) client integration. Loads MCP tools from mcp.json in the current working directory (see the MCP Configuration section below for the file format).

    When enabled, Gwendoline will:

    • Connect to configured MCP servers (stdio, HTTP, SSE)
    • Make MCP tools available to the language model
    • Load server instructions and resources

Output Modes

  • --stream
    Enables streaming mode where the response is output in real-time as it's generated.
    Cannot be combined with --chat mode.

  • --thinking
    Enables thinking mode where the model's reasoning process is displayed before the actual response.
    Works best with --stream for real-time thinking output.
    Cannot be combined with --chat mode.

Chat Mode

  • --chat
    Enables chat mode for processing conversation history.
    Expects input in JSON format as an array of message objects with role and content fields.
    Output will also be in the same JSON format with the assistant's response appended.
    Cannot be combined with --stream or --thinking.

Debugging

  • --debug
    Enables debug mode with verbose logging, including:
    • Tool calls requested by the LLM
    • Tool arguments and responses
    • Message flow between user, tools, and assistant
    • Raw MCP results

MCP Configuration

When using the --mcp parameter, Gwendoline looks for an mcp.json file in the current working directory. This file should define the MCP servers to connect to.

Example mcp.json:

{
  "servers": {
    "my-stdio-server": {
      "type": "stdio",
      "command": "node",
      "args": ["./server.js"]
    },
    "my-http-server": {
      "type": "http",
      "url": "http://localhost:3000/mcp"
    }
  }
}
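
SSE is also listed as a supported transport; an SSE entry presumably follows the same shape as the http type above (this is an assumption, as no SSE example is documented):

{
  "servers": {
    "my-sse-server": {
      "type": "sse",
      "url": "http://localhost:3000/sse"
    }
  }
}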

The MCP client will:

  • Connect to all configured servers
  • Register their tools and make them available to the language model
  • Handle tool calls transparently
  • Provide structured error messages when tools fail
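
Putting it together, a typical MCP run (using only the flags documented above; file names are illustrative) might look like:

# with an mcp.json like the example above in the current working directory
cat prompt.md | gwen --mcp --model gpt-oss:120b-cloud > output.md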