neurospark-coder
Use Claude Code with OpenAI, Google, xAI, and other providers.
- Extremely simple setup - just a basic command wrapper
- Uses the AI SDK for simple support of new providers
- Works with Claude Code GitHub Actions
- Optimized for OpenAI's gpt-5 series
Get Started
# Use your favorite package manager (bun, pnpm, and npm are supported)
$ pnpm install -g neurospark-coder
# neurospark-coder is a wrapper for the Claude CLI
# `neurospark/`, `google/`, `xai/`, and `anthropic/` are supported
$ neurospark-coder --model neurospark/GLM5-FP8

Switch models in the Claude UI with /model neurospark/GLM5-FP8 (or /model neurospark/GLM5.1-FP8 for the 5.1 checkpoint).
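Putting the pieces together, a first run might look like the following sketch; the API key variable is explained under Configure API Key below, and the model ID matches the default NeuroSpark deployment.

# Install the wrapper globally (bun, pnpm, or npm)
npm install -g neurospark-coder

# Provide your NeuroSpark API key through the OpenAI-compatible adapter
export OPENAI_API_KEY="your-neurospark-api-key"

# Launch Claude Code through the local translation proxy
neurospark-coder --model neurospark/GLM5-FP8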
Bundled Skills
neurospark-coder can also ship sanitized reusable skills alongside the CLI.
Users can inspect the packaged skills:
neurospark-coder --list-skills

Install them into the default local skills directory:

neurospark-coder --install-skills

By default this installs into an NScoder-skills folder in your current working directory. To install somewhere else:

neurospark-coder --install-skills --skills-dir /path/to/skills

Existing skill folders are left untouched unless the user explicitly asks to replace them:

neurospark-coder --install-skills --force

The bundled FHW skills in this repo are intentionally sanitized. Users still need to fill in their own internal hostnames, credentials, and foundry/library paths before using those workflows in a real lab environment.
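As a quick sketch, a typical first-time skills flow combines the commands above; the target directory used here is only an example.

# See which skills ship with the installed CLI version
neurospark-coder --list-skills

# Install them somewhere other than ./NScoder-skills
neurospark-coder --install-skills --skills-dir ~/claude-skills

# Overwrite previously installed copies only when you explicitly want to
neurospark-coder --install-skills --skills-dir ~/claude-skills --force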
Skills Usage Guide
neurospark-coder can distribute a set of reusable skills alongside the CLI. These skills are mainly intended for hardware-design workflows, for example:
- ssh-server
- compile-design
- design-lint
- run-sim
- genus-synthesis
First, list the skills currently packaged with the CLI:

neurospark-coder --list-skills

Install the skills into the default NScoder-skills folder under the current directory:

neurospark-coder --install-skills

To install into a specific directory instead, run:

neurospark-coder --install-skills --skills-dir /path/to/skills

If a skill with the same name already exists in the target directory, it is skipped by default rather than overwritten. To force an overwrite:

neurospark-coder --install-skills --force

How to use these skills
After installation, you will see the corresponding skill folders in the target directory, for example:

NScoder-skills/
├── ssh-server/
├── compile-design/
├── design-lint/
├── run-sim/
└── genus-synthesis/

Each skill directory contains at least a SKILL.md file. Read it first to understand which tasks the skill is meant for, what input parameters it needs, and what its default workflow looks like.
For example:

sed -n '1,120p' NScoder-skills/genus-synthesis/SKILL.md

The typical way to use these skills is to describe your task directly in a Codex or Claude Code conversation and name the skill you want to use. The model will then follow the instructions in SKILL.md.
For example:

- "Please use the genus-synthesis skill to run Genus synthesis for this design on the remote server."
- "Please use the design-lint skill: first find the lint flow actually used in this repo, then check the RTL I just modified."
- "Please use the run-sim skill: first search for the top testbench this project actually uses; if there are multiple candidates, list them and let me choose."

Recommended workflow in practice:
- Run --list-skills first to confirm which skills are available.
- Use --install-skills to install the skills into a local directory.
- Open the target skill's SKILL.md and confirm what input it needs.
- In the conversation, state explicitly which skill you want to use, and provide the necessary details such as the repo path, remote path, top module, testbench, and tool paths.
- If the skill involves a remote server, first fill in your own environment configuration: host address, username, authentication method, library paths, and so on.

These generalized hardware skills do not blindly assume design names, top modules, testbenches, or script paths by default; they first search the current repo for the real entry points, and if there are multiple candidates they should list them and ask you to confirm.
The skills bundled in this repo have been sanitized and contain no real intranet addresses, account credentials, EDA licenses, foundry library paths, or other sensitive information. To use them in a real environment, first fill in this configuration to match your own lab or server setup.
One-Command Install
For a private GitHub repo with a public customer install flow, publish the
neurospark-coder package to the public npm registry and host
scripts/install-neurospark-coder.sh
at a stable public URL such as https://install.neuro-spark.ai/install.sh.
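A minimal install script along these lines might look like the sketch below. This is not the actual scripts/install-neurospark-coder.sh; it only assumes the behavior described in this section (run Anthropic's Claude Code installer, then install neurospark-coder from npm or from PACKAGE_SOURCE), and the Claude Code install command shown is an assumption to be replaced with whatever Anthropic currently recommends.

#!/usr/bin/env bash
set -euo pipefail

# 1. Install Claude Code if it is not already on PATH.
#    (Assumed command; follow Anthropic's official install instructions.)
if ! command -v claude >/dev/null 2>&1; then
  npm install -g @anthropic-ai/claude-code
fi

# 2. Install neurospark-coder from the public npm registry by default,
#    or from an alternate source (e.g. a tarball URL) via PACKAGE_SOURCE.
npm install -g "${PACKAGE_SOURCE:-neurospark-coder}"

echo "neurospark-coder installed."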
Then customers can install both Claude Code and neurospark-coder with:
curl -fsSL https://install.neuro-spark.ai/install.sh | bash

By default, the installer runs Anthropic's official Claude Code installer and then:

npm install -g neurospark-coder

If you ever need to install from a tarball or alternate package source, set PACKAGE_SOURCE:

PACKAGE_SOURCE=https://your-download-url/neurospark-coder-1.2.2.tgz \
curl -fsSL https://install.neuro-spark.ai/install.sh | bash

Publish To npm
This repo includes publish-npm.yml for publishing the package from a private GitHub repository to the public npm registry.
One-time setup
- Create an npm access token with publish permissions.
- Add it to this GitHub repository as the NPM_TOKEN Actions secret.
- Host scripts/install-neurospark-coder.sh at your public install URL.
Release flow
- Bump the version in package.json.
- Push a tag that matches the package version, for example:
git tag v1.2.2
git push origin v1.2.2

- GitHub Actions will run bun install, bun run typecheck, bun run build, and npm publish --access public.

After the workflow finishes, new installs and upgrades will use the latest npm package automatically:

npm install -g neurospark-coder@latest

If you prefer to publish manually from a machine that already has npm access:
bun install
bun run typecheck
bun run build
npm publish --access public

Optional GitHub Release Assets
This repo also includes release.yml
if you want a GitHub Releases-based distribution path. That is optional for the
public npm install flow and is not required for customers using the Cloudflare
or other hosted install.sh URL.
Configure API Key
For the NeuroSpark deployment, users only need to set OPENAI_API_KEY.
The proxy already defaults to https://api.neuro-spark.ai/v1, so OPENAI_API_URL is optional.
export OPENAI_API_KEY="your-neurospark-api-key"
neurospark-coder --model neurospark/GLM5-FP8

To restrict the proxy to a single NeuroSpark model, set SUPPORTED_MODELS:
export OPENAI_API_KEY="your-neurospark-api-key"
export SUPPORTED_MODELS="neurospark/GLM5-FP8"
neurospark-coder --model neurospark/GLM5-FP8

By default, the proxy exposes both neurospark/GLM5-FP8 and
neurospark/GLM5.1-FP8. Users only need to set SUPPORTED_MODELS if they
want to override that default.
GPT-5 Support
Use --reasoning-effort (alias: -e) to control OpenAI reasoning.effort. Allowed values: minimal, low, medium, high.
neurospark-coder --model neurospark/GLM5-FP8 -e high

Use --service-tier (alias: -t) to control OpenAI service tier. Allowed values: flex, priority.

neurospark-coder --model neurospark/GLM5-FP8 -t priority

Note these flags may be extended to other providers in the future.
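When you need both behaviors, the two flags should combine on a single invocation; treat this as a sketch and confirm against your installed CLI version.

neurospark-coder --model neurospark/GLM5-FP8 -e high -t priority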
FAQ
What providers are supported?
See the provider implementations for details.
- GOOGLE_API_KEY supports google/* models.
- OPENAI_API_KEY supports neurospark/* models through the OpenAI-compatible adapter.
- XAI_API_KEY supports xai/* models.
Set a custom OpenAI endpoint with OPENAI_API_URL to use OpenRouter.
ANTHROPIC_MODEL and ANTHROPIC_SMALL_MODEL are supported with the <provider>/ syntax.
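As an illustration, a Google-backed session could be configured as below; the Gemini model IDs are placeholders rather than verified defaults.

# Hypothetical example: route Claude Code to Google models via the google/ prefix
export GOOGLE_API_KEY="your-google-api-key"
export ANTHROPIC_SMALL_MODEL="google/gemini-2.5-flash"   # placeholder model ID
neurospark-coder --model google/gemini-2.5-pro           # placeholder model ID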
How do I restrict which models clients can use?
Set SUPPORTED_MODELS to a comma-separated list of full model IDs exposed by the proxy.
SUPPORTED_MODELS=neurospark/GLM5-FP8,neurospark/GLM5.1-FP8

With this set:

- GET /v1/models returns only the configured models.
- POST /v1/messages returns a 400 invalid_request_error if model is not in the allowlist.
- If SUPPORTED_MODELS is unset, the proxy defaults to neurospark/GLM5-FP8 and neurospark/GLM5.1-FP8.
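To sanity-check the allowlist against a running proxy, something like the following should work; the address and port are placeholders for wherever your neurospark-coder proxy is actually listening, and depending on your setup you may also need to pass the placeholder auth token.

# List the models the proxy exposes (placeholder address)
curl -s http://localhost:3000/v1/models

# A model outside SUPPORTED_MODELS should be rejected with 400 invalid_request_error
curl -s -X POST http://localhost:3000/v1/messages \
  -H "content-type: application/json" \
  -d '{"model":"neurospark/not-allowed","max_tokens":16,"messages":[{"role":"user","content":"hi"}]}'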
How does this work?
Claude Code has added support for customizing the Anthropic endpoint with ANTHROPIC_BASE_URL.
neurospark-coder spawns a simple HTTP server that translates between Anthropic's API format and the AI SDK format, enabling support for any AI SDK provider (e.g., Google, OpenAI).
When launching Claude Code, neurospark-coder also sets a placeholder
ANTHROPIC_AUTH_TOKEN if you have not already configured ANTHROPIC_API_KEY
or ANTHROPIC_AUTH_TOKEN, and defaults
CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1. This matches the gateway/proxy
auth pattern supported by Claude Code and avoids unnecessary account login
prompts for local proxy usage.
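In effect, launching Claude Code by hand against the proxy would look roughly like the sketch below; the address is a placeholder for wherever the spawned translation server listens.

# Roughly what neurospark-coder configures before starting Claude Code
export ANTHROPIC_BASE_URL="http://localhost:3000"    # placeholder proxy address
export ANTHROPIC_AUTH_TOKEN="placeholder-token"       # set only if no real key is configured
export CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1
claude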
