cli-agent-openai-adapter v0.1.0

Adapter to convert CLI-based AI agents (Claude Code, etc.) to OpenAI-compatible API endpoints

cli-agent-openai-adapter

Convert CLI-based AI agents (Claude Code, etc.) into OpenAI-compatible Chat Completions endpoints.

Overview

This adapter allows you to use local CLI tools like Claude Code as drop-in replacements for OpenAI's API in your development environment, while keeping the same code structure for production.

Use Cases:

  • Production: Use OpenAI API (pay per token)
  • Development: Use local Claude Code with Haiku model (reduce costs)
  • Same Code: Switch between environments using the same API interface (e.g., LangChain's ChatOpenAI)

Default Model: This adapter uses Claude Haiku by default for cost efficiency during development. You can configure a different model (e.g., Sonnet, Opus) via the MODEL environment variable.

Features

  • ✅ OpenAI-compatible API endpoints (/v1/chat/completions)
  • ✅ Support for conversation history
  • ✅ Stateless execution (like OpenAI API)
  • ✅ Chat-only mode (tools disabled for safety)
  • ✅ TypeScript with full type definitions
  • 🚧 Claude Code adapter (initial implementation)
  • 🔜 Codex adapter (future)
  • 🔜 Gemini CLI adapter (future)

Demo

Try the adapter with the minimal, dependency-free web client:

# Start the adapter (project root)
npm ci
npm run build && npm start

# Open the client in your browser
# File path: examples/minimal-web-client/index.html

→ Minimal Web Client README

Installation

npm install -g cli-agent-openai-adapter

Or use directly with npx:

npx cli-agent-openai-adapter

Prerequisites

  • Node.js >= 20.0.0
  • Claude Code CLI installed and accessible in PATH

To verify Claude Code is installed:

claude --version

Usage

Start the Server

cli-agent-openai-adapter

By default, the server starts at http://localhost:8000.

Configuration

Configure using environment variables:

export ADAPTER_TYPE=claude-code  # Adapter to use
export MODEL=haiku                # Claude model to use (default: haiku)
export PORT=8000                  # Server port
export HOST=localhost             # Server host
export RUNTIME_DIR=./runtime      # Runtime directory (optional)
export TIMEOUT=30000              # Timeout in milliseconds
export DEBUG=true                 # Enable debug mode

Or create a .env file (requires dotenv).
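As a rough sketch, the server could resolve its configuration from these variables with the documented defaults. The interface and function below are illustrative, not the package's actual internals:

```typescript
// Illustrative config loader: reads the environment variables documented
// above and falls back to the documented defaults. Names are hypothetical.
interface AdapterConfig {
  adapterType: string;
  model: string;
  port: number;
  host: string;
  timeoutMs: number;
  debug: boolean;
}

function loadConfig(env: Record<string, string | undefined>): AdapterConfig {
  return {
    adapterType: env.ADAPTER_TYPE ?? "claude-code",
    model: env.MODEL ?? "haiku",
    port: Number(env.PORT ?? 8000),
    host: env.HOST ?? "localhost",
    timeoutMs: Number(env.TIMEOUT ?? 30000),
    debug: env.DEBUG === "true",
  };
}
```

Passing `process.env` to such a loader keeps the defaults in one place and makes them easy to unit-test.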

Note: This adapter uses Haiku as the default model to reduce costs during development. You can change the model by setting the MODEL environment variable to sonnet or opus if needed.

Example with LangChain

import { ChatOpenAI } from "@langchain/openai";

// Development environment: via cli-agent-openai-adapter
const llmDev = new ChatOpenAI({
  configuration: {
    baseURL: "http://localhost:8000/v1"
  },
  modelName: "claude-code",
  apiKey: "dummy" // Not used but required by the SDK
});

// Production environment: OpenAI API directly
const llmProd = new ChatOpenAI({
  openAIApiKey: process.env.OPENAI_API_KEY,
  modelName: "gpt-4"
});

// Usage is identical
const response = await llmDev.invoke("Hello!");
console.log(response.content);

Example with OpenAI SDK

import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "http://localhost:8000/v1",
  apiKey: "dummy" // Not used but required by the SDK
});

const response = await client.chat.completions.create({
  model: "claude-code",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Hello!" }
  ]
});

console.log(response.choices[0].message.content);

Example with Direct HTTP Request

curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-code",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'

API Endpoints

POST /v1/chat/completions

OpenAI-compatible chat completions endpoint.

Request:

{
  "model": "claude-code",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello"}
  ],
  "temperature": 0.7,
  "max_tokens": 1000
}

Response:

{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1234567890,
  "model": "claude-code",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "Hello! How can I help you?"
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 20,
    "total_tokens": 30
  }
}

GET /v1/models

List available models.

Response:

{
  "object": "list",
  "data": [
    {
      "id": "claude-code",
      "object": "model",
      "created": 1234567890,
      "owned_by": "cli-agent-openai-adapter"
    }
  ]
}

GET /health

Health check endpoint.

Response:

{
  "status": "ok",
  "adapter": "claude-code"
}

How It Works

Architecture

  1. Stateless Execution: Each request executes claude code --system-prompt "..." -p "..." independently
  2. Conversation History: Managed by the client (like OpenAI API), sent in the messages array
  3. Chat Mode: Tools are disabled via .claude/settings.json for chat-only behavior
  4. Output Cleaning: ANSI codes and progress indicators are removed from CLI output
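Step 4 (output cleaning) can be sketched as a small string filter. The patterns below are illustrative, not the adapter's actual implementation:

```typescript
// Illustrative output cleaner: strips ANSI escape sequences and
// carriage-return progress lines from raw CLI output.
const ANSI_PATTERN = /\x1b\[[0-9;]*[A-Za-z]/g;

function cleanCliOutput(raw: string): string {
  return raw
    .replace(ANSI_PATTERN, "") // drop color/cursor escape codes
    .split("\n")
    .map((line) => {
      // a carriage return overwrites the line; keep only the final segment
      const parts = line.split("\r");
      return parts[parts.length - 1];
    })
    .filter((line) => !/^[⠁-⣿]/.test(line)) // drop braille spinner lines
    .join("\n")
    .trim();
}
```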

Conversation History Handling

The adapter formats conversation history as JSON and includes it in the prompt:

System Prompt: [Your system message] + Context instruction

User Prompt:
Conversation history:
[
  {"role": "user", "content": "My favorite color is blue"},
  {"role": "assistant", "content": "That's nice!"}
]

Current user message: What is my favorite color?

This allows Claude to understand the full context while maintaining stateless execution.
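A prompt of the shape shown above could be assembled from an OpenAI-style messages array roughly like this (illustrative sketch, not the adapter's exact code; it assumes at least one non-system message):

```typescript
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Illustrative prompt builder: splits out the system message, serializes
// prior turns as JSON, and appends the latest user message.
function buildPrompt(messages: ChatMessage[]): { system: string; user: string } {
  const system = messages
    .filter((m) => m.role === "system")
    .map((m) => m.content)
    .join("\n");
  const turns = messages.filter((m) => m.role !== "system");
  const current = turns[turns.length - 1];
  const history = turns.slice(0, -1);

  const user =
    history.length > 0
      ? `Conversation history:\n${JSON.stringify(history, null, 2)}\n\nCurrent user message: ${current.content}`
      : current.content;

  return { system, user };
}
```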

Error Handling

The adapter handles various error scenarios:

  • Timeout (30s default): Returns HTTP 504 with timeout error
  • CLI tool not found: Fails at startup with clear error message
  • Invalid request: Returns HTTP 400 with validation error
  • Execution error: Returns HTTP 500 with error details
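The error-to-status mapping above might be implemented along these lines (the error classes here are hypothetical; the package's real ones may differ):

```typescript
// Hypothetical error classes standing in for the scenarios listed above.
class TimeoutError extends Error {}
class ValidationError extends Error {}

// Maps a thrown error to the HTTP status the adapter would return.
function statusForError(err: unknown): number {
  if (err instanceof TimeoutError) return 504; // CLI execution timed out
  if (err instanceof ValidationError) return 400; // malformed request body
  return 500; // any other execution error
}
```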

Troubleshooting

Claude Code not found

Error: claude-code is not available

Solution: Make sure Claude CLI is installed and accessible:

# Check if claude is in PATH
which claude

# Try running claude directly
claude --version

Timeout errors

Error: Claude Code execution timed out

Solution: Increase timeout:

export TIMEOUT=60000  # 60 seconds

Output contains noise

If responses contain ANSI codes or progress indicators, please report it as an issue, including example output.

Development

Setup

git clone https://github.com/pppp606/cli-agent-openai-adapter.git
cd cli-agent-openai-adapter
npm install

Run in Development Mode

npm run dev

Build

npm run build

Run Tests

# Run all tests
npm test

# Run tests in watch mode
npm run test:watch

# Run tests with coverage
npm run test:coverage

The project uses Jest for testing with full TypeScript support. All tests live in the src/__tests__/ directory.

Project Structure

cli-agent-openai-adapter/
├── src/
│   ├── adapters/
│   │   ├── base.ts           # Abstract base class
│   │   ├── claude_code.ts    # Claude Code implementation
│   │   └── factory.ts        # Adapter factory
│   ├── bin/
│   │   └── cli.ts            # CLI entry point
│   ├── server.ts             # Express server
│   ├── config.ts             # Configuration loader
│   ├── types.ts              # TypeScript types
│   └── index.ts              # Main exports
├── runtime/
│   └── claude-code/          # Claude Code runtime
│       └── .claude/
│           └── settings.json # Tool disable configuration
├── package.json
├── tsconfig.json
└── README.md

Future Enhancements

  • [ ] Support for streaming responses
  • [ ] Support for Codex CLI adapter
  • [ ] Support for Gemini CLI adapter
  • [ ] Configuration file support (.adaprc)
  • [ ] Better token estimation
  • [ ] Conversation history truncation/summarization
  • [ ] Logging and metrics
  • [ ] Docker support

License and Terms

This tool is provided under the MIT License.

Important: When using Claude Code through this adapter, you must comply with Anthropic's Terms of Service. Please use this tool in accordance with all applicable terms and conditions.

Contributing

Contributions are welcome! Please feel free to submit issues or pull requests.


Note: This is an early implementation. Claude Code CLI options and behavior may change, so the adapter may require adjustments. Please test it in your environment and report any issues.