@alumnium/langchain-codex
v0.1.0
LangChain chat model for OpenAI via OAuth (ChatGPT Plus/Pro)
langchain-codex
LangChain chat model that uses OpenAI models via your ChatGPT Plus/Pro subscription (OAuth) instead of an API key.
Built on top of openai-oauth and @langchain/openai.
Installation
npm install langchain-codex @langchain/openai
You must be logged in to the Codex CLI — the library reads your OAuth tokens from ~/.codex/auth.json.
Usage
Basic
import { ChatCodex } from "langchain-codex";
const llm = new ChatCodex(); // defaults to gpt-5.4
const response = await llm.invoke("Hello!");
console.log(response.content);
await llm.close(); // shut down the local proxy server
With a Specific Model
const llm = new ChatCodex("gpt-5.4-mini");
// or
const llm = new ChatCodex({ model: "gpt-5.4-mini" });
Structured Output
import { z } from "zod";
const llm = new ChatCodex("gpt-5.4-mini");
const structured = llm.withStructuredOutput(
z.object({
steps: z.array(z.string()),
answer: z.string(),
})
);
const result = await structured.invoke("How to make coffee?");
console.log(result.steps);
await llm.close();
How It Works
ChatCodex extends ChatOpenAI and transparently starts a local OAuth proxy server (via openai-oauth) on first use. The proxy handles token management and exposes a standard OpenAI-compatible API on localhost. All LangChain features — tool calling, streaming, structured output — work out of the box.
The proxy server starts lazily on the first request and binds to a random available port. Call close() when done to shut it down.
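Because ChatCodex extends ChatOpenAI, standard LangChain patterns apply unchanged. As a minimal sketch of streaming (the model name is assumed to be available on your subscription; running it requires a live Codex CLI login):

```typescript
import { ChatCodex } from "langchain-codex";

const llm = new ChatCodex("gpt-5.4-mini");

// The first request lazily starts the local OAuth proxy, then
// tokens stream back as AIMessageChunk objects.
const stream = await llm.stream("Write a haiku about coffee.");
for await (const chunk of stream) {
  process.stdout.write(String(chunk.content));
}

await llm.close(); // shut down the proxy once finished
```

The same applies to tool calling via bindTools and to structured output, as shown in the Usage section above.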
Supported Models
Any model available through your ChatGPT subscription. At the time of writing, the list included:
gpt-5.4
gpt-5.4-mini
gpt-5.3-codex
gpt-5.2
gpt-5.2-codex
gpt-5.1-codex-max
gpt-5.1-codex-mini
Image Input
Codex models require images to be provided as https:// URLs — base64-encoded images are not supported. By default, passing a base64 image will throw an error.
To handle this automatically, enable the litterbox upload feature. When enabled, base64 images are uploaded to litterbox.catbox.moe (a temporary file host) and replaced with the returned URL before being sent to the model. An in-memory cache ensures the same image is not uploaded twice during the lifetime of the instance.
// Enable via constructor option
const llm = new ChatCodex({ litterboxUpload: true });
// Or via environment variable
process.env.LANGCHAIN_CODEX_LITTERBOX_UPLOAD = "true";
const llm = new ChatCodex();
Both image_url blocks with data: URIs and LangChain image blocks with inline base64 are handled. Images with https:// URLs are passed through unchanged.
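For example, a message carrying an inline base64 image can be sent as a standard LangChain multimodal message; with litterboxUpload enabled, the data: URI is rewritten to an https:// URL before the request reaches the model (the base64 payload below is a hypothetical placeholder, not a real image):

```typescript
import { ChatCodex } from "langchain-codex";
import { HumanMessage } from "@langchain/core/messages";

// Hypothetical placeholder standing in for real base64 image data.
const base64Png = "iVBORw0KGgo...";

const llm = new ChatCodex({ litterboxUpload: true });

const response = await llm.invoke([
  new HumanMessage({
    content: [
      { type: "text", text: "What is in this image?" },
      {
        type: "image_url",
        image_url: { url: `data:image/png;base64,${base64Png}` },
      },
    ],
  }),
]);
console.log(response.content);

await llm.close();
```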
You can control how long uploaded images are kept with the litterboxTtl option (defaults to "1h"):
const llm = new ChatCodex({ litterboxUpload: true, litterboxTtl: "24h" });
// Supported values: "1h", "12h", "24h", "72h"
API
new ChatCodex(fields?)
Accepts all ChatOpenAI options except apiKey and configuration (managed internally), plus:
oauthServerOptions — Options passed to the openai-oauth server (e.g., { authFilePath: "/custom/path/auth.json" }).
litterboxUpload — Enable automatic upload of base64 images to litterbox.catbox.moe. Also settable via LANGCHAIN_CODEX_LITTERBOX_UPLOAD=true. Defaults to false.
litterboxTtl — TTL for uploaded images: "1h" (default), "12h", "24h", or "72h".
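Combining these options looks like the following sketch (the custom auth file path is illustrative, not a required location):

```typescript
import { ChatCodex } from "langchain-codex";

// All ChatCodex-specific options together with a standard
// ChatOpenAI field (model).
const llm = new ChatCodex({
  model: "gpt-5.4-mini",
  oauthServerOptions: { authFilePath: "/custom/path/auth.json" },
  litterboxUpload: true,
  litterboxTtl: "24h",
});
```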
close(): Promise<void>
Shuts down the local OAuth proxy server. Call this when you're done using the instance.
License
MIT
