# n8n-nodes-codex-cli-lm

Community node package for n8n that exposes the OpenAI Codex CLI as a LangChain Chat Model. It runs `codex exec --experimental-json` using the bundled `@openai/codex-sdk` binary or a custom `codex` binary.
## Nodes included

- Codex Chat Model (SDK): LangChain chat model for the AI Agent node
- Codex Auth (Device Login): device login helper that writes `CODEX_HOME/auth.json` on the n8n host
## Features

- Codex CLI integration: spawns `codex exec --experimental-json` and parses JSONL events
- Auth: device login via CLI or UI-based device flow
- Model selection: static list in the node (Codex + GPT-5.x entries) or default model
- Security controls: sandbox mode, approval policy, optional web search + network access
- Advanced config: `--config key=value` overrides and `--enable`/`--disable` feature flags
- Multimodal: attach local images or `data:` images from LangChain content
- Structured output: optional JSON Schema via `--output-schema`
- Streaming: optional response streaming from JSONL `item.updated`/`item.completed` events
- Context controls: cap message count or prompt size in stateless mode
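As an illustration of the JSONL handling above, a consumer of `codex exec --experimental-json` output might look like the following sketch. The event shapes (`type`, `item.text`) are assumptions for illustration, not a documented contract:

```typescript
// Sketch: parse JSONL output from a spawned codex process.
// Field names (`type`, `item.text`) are illustrative assumptions.
function parseJsonlEvents(stdout: string): Array<Record<string, unknown>> {
  return stdout
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.length > 0)
    .flatMap((line) => {
      try {
        return [JSON.parse(line) as Record<string, unknown>];
      } catch {
        return []; // skip non-JSON lines (e.g. stray log output)
      }
    });
}

// Join the text of completed items into the final response.
function finalText(events: Array<Record<string, unknown>>): string {
  return events
    .filter((e) => e.type === "item.completed")
    .map((e) => (e.item as { text?: string } | undefined)?.text ?? "")
    .join("");
}
```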
## Requirements

- Self-hosted n8n only (uses `child_process` and filesystem access)
- n8n >= 1.0.0
- Node.js >= 18
- Codex CLI available via:
  - Bundled binary from `@openai/codex-sdk`, or
  - Custom Path to a `codex` binary on the n8n host
## Installation

### Option 1: Install from npm (once published)

```bash
npm install @chrishdx/n8n-nodes-codex-cli-lm
```

### Option 2: Install locally for development

```bash
git clone <your-repo-url>
cd n8n-nodes-codex-cli-lm
npm install
npm run build
npm link
```

Then in your n8n installation directory:

```bash
npm link @chrishdx/n8n-nodes-codex-cli-lm
n8n start
```

## Credentials setup
Create Codex (SDK) API credentials in n8n:

- Codex Binary: `Bundled (via @openai/codex-sdk)` or `Custom Path`
- Codex Path: path to `codex` if using Custom Path
- Base URL (`OPENAI_BASE_URL`): optional proxy or enterprise gateway
- Codex Home (`CODEX_HOME`): optional; where Codex stores sessions/config (default: `~/.codex`)
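A rough sketch of how these credential fields could map onto the environment of the spawned process; the `CodexCredentials` shape here is hypothetical, and the node's actual mapping may differ:

```typescript
// Hypothetical credential shape mirroring the fields above.
interface CodexCredentials {
  baseUrl?: string;   // forwarded as OPENAI_BASE_URL
  codexHome?: string; // forwarded as CODEX_HOME (CLI defaults to ~/.codex)
}

// Build the child-process environment from credentials, leaving
// unrelated variables from the base environment untouched.
function buildEnv(
  creds: CodexCredentials,
  base: Record<string, string | undefined> = process.env,
): Record<string, string | undefined> {
  const env = { ...base };
  if (creds.baseUrl) env.OPENAI_BASE_URL = creds.baseUrl;
  if (creds.codexHome) env.CODEX_HOME = creds.codexHome;
  return env;
}
```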
The credential test pings the OpenAI auth discovery endpoint to verify connectivity.
## Login options (device flow + web)

You must complete Codex login on the same host/container where n8n runs so that `CODEX_HOME/auth.json` is available.
### Option A: CLI device login (recommended)

Run on the n8n host:

```bash
codex login --device-auth
```

Then open the provided URL in your browser and enter the device code. Make sure `CODEX_HOME` in n8n points to the same directory used by the CLI.

If you use the Bundled binary, `codex` might not be on your PATH. You can either:

- Use Custom Path and install `codex` globally, or
- Run the bundled binary directly from `node_modules/@openai/codex-sdk/vendor/.../codex/codex`
### Option B: Web login from the n8n UI

Use the Codex Auth (Device Login) node:

1. Start Device Login: run the node and copy `verificationUrl` and `userCode`
2. Open the URL in your browser, sign in, and enter the code
3. Complete Device Login: provide `deviceAuthId` + `userCode` from step 1
4. (Optional) Login Status to check the current session
5. (Optional) Logout to remove `auth.json`
This writes tokens to `CODEX_HOME/auth.json` on the n8n host.
## Using the Codex Chat Model

1. Add an AI Agent node
2. Select Codex Chat Model (SDK) as the chat model
3. Choose your Codex (SDK) API credentials
4. Configure Model and Options as needed
### Options quick reference

- Working Directory and Additional Directories for filesystem access
- Sandbox Mode and Approval Policy for command execution safety
- Network Access Enabled and Web Search Enabled for external access
- Use Open Source Provider and Local Provider for `--oss` flows
- Output Schema (JSON) for structured results
- Stream Response for token streaming
- Max Messages and Max Prompt Characters for context control
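To illustrate the context controls at the end of the list, message and character capping could work along these lines. This is a sketch of the general technique, not the node's actual implementation:

```typescript
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Keep at most `maxMessages` of the most recent messages, flatten them
// into a prompt, then keep at most the last `maxChars` characters so the
// most recent text survives truncation.
function capContext(
  messages: ChatMessage[],
  maxMessages: number,
  maxChars: number,
): string {
  const recent = messages.slice(-maxMessages);
  const prompt = recent.map((m) => `${m.role}: ${m.content}`).join("\n");
  return prompt.length > maxChars ? prompt.slice(prompt.length - maxChars) : prompt;
}
```

Truncating from the front of the flattened prompt is a deliberate choice here: in a chat context the newest turns usually matter most.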
## Example workflow: Telegram chatbot

A simple flow that turns Telegram messages into Codex responses:

1. Telegram Trigger (incoming message)
2. AI Agent using Codex Chat Model (SDK)
3. Telegram node (Send Message)

Typical mappings:

- AI Agent input: use Telegram `message.text` (for example, `{{$json.message.text}}`)
- Telegram response: inspect the AI Agent output in n8n and map its response field
## Notes / limitations
- This package runs a local binary and uses filesystem access. It is intended for self-hosted n8n.
- Tool calling is supported via a prompt-based JSON adapter. It is best-effort and depends on the model output.
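The prompt-based JSON adapter for tool calling can be pictured as follows: the model is instructed to emit a JSON object when it wants to invoke a tool, and the adapter makes a best-effort attempt to extract it from the output text. This is a sketch of the general idea; the node's actual prompt and object shape may differ:

```typescript
interface ToolCall {
  tool: string;
  args: Record<string, unknown>;
}

// Best-effort: extract a {"tool": ..., "args": ...} object from model
// output. Returns null when the output is plain text (no tool call).
function parseToolCall(output: string): ToolCall | null {
  const match = output.match(/\{[\s\S]*\}/); // outermost JSON-looking span
  if (!match) return null;
  try {
    const parsed = JSON.parse(match[0]);
    if (typeof parsed.tool === "string" && typeof parsed.args === "object" && parsed.args !== null) {
      return { tool: parsed.tool, args: parsed.args };
    }
  } catch {
    // malformed JSON: treat as plain text
  }
  return null;
}
```

Because the adapter depends on the model actually emitting valid JSON, a null result simply means "no tool call", which matches the best-effort caveat above.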
## Development

### Project structure

```
n8n-nodes-codex-cli-lm/
├── credentials/
│   └── CodexCliApi.credentials.ts
├── nodes/
│   ├── CodexAuth/
│   └── LmChatCodexCli/
├── package.json
└── tsconfig.json
```

### Scripts

- `npm run dev` - start n8n with hot reload for development
- `npm run build` - build the TypeScript code
- `npm run lint` - check code for errors
- `npm run lint:fix` - auto-fix linting issues
## Troubleshooting

### Node does not appear in n8n

- Verify the install: `npm list @chrishdx/n8n-nodes-codex-cli-lm`
- Restart n8n
- Check the n8n logs for load errors

### CLI execution fails

- Verify the Codex binary source (Bundled vs Custom Path)
- Check the binary manually: `codex --version`
- Ensure `CODEX_HOME` contains `auth.json` from device login
### Login issues

- Confirm the login happened on the same host/container as n8n
- Use the Codex Auth (Device Login) node to re-run device login
## License
MIT
## Support
- GitHub Issues: https://github.com/chrishdx/n8n-nodes-codex-cli-lm/issues
- n8n Community Forum: https://community.n8n.io
## Acknowledgments

- Built on the n8n community nodes starter
- Powered by LangChain
- Uses the official `@openai/codex-sdk`
