# mlx-ts
v0.0.4
AI SDK provider for local MLX (Swift) models on Apple Silicon (macOS).
Local LLM inference on macOS using a Swift MLX host process + a TypeScript client / AI SDK provider.
This README is shown on the npm package page. The repo contains additional development notes.
## Quickstart (end users)

### Requirements

- macOS on Apple Silicon (darwin/arm64)
- Node.js
### Install

```sh
npm i mlx-ts
```

During install, mlx-ts downloads a prebuilt mlx-host (Swift) binary + mlx.metallib from GitHub Releases (no Xcode required).
### Use with the AI SDK

```ts
import { createMlxProvider } from "mlx-ts";
import { generateText, streamText } from "ai";

const modelId = "mlx-community/Llama-3.2-1B-Instruct-4bit";

const mlx = createMlxProvider({
  model: modelId,
  // optional:
  // modelsDir: "/path/to/your/models-cache",
  // hostPath: process.env.MLX_HOST_BIN,
});

const model = mlx.languageModel(modelId);

// stream
const s = await streamText({
  model,
  maxTokens: 64,
  messages: [{ role: "user", content: "Say hello from a local MLX model." }],
});
for await (const chunk of s.textStream) process.stdout.write(chunk);
process.stdout.write("\n");

// one-shot
const g = await generateText({
  model,
  maxTokens: 64,
  messages: [{ role: "user", content: "Summarize MLX in one sentence." }],
});
console.log(g.text);
```

## Runtime configuration
- Force CPU vs GPU: set `MLX_HOST_DEVICE=cpu` (default is `gpu`).
- Override host binary: set `MLX_HOST_BIN=/path/to/mlx-host` or pass `{ hostPath }` to `createMlxProvider`.
- Default model cache dir: OS cache directory (macOS: `~/Library/Caches/mlx-ts/models`).
- Override where models are cached: pass `{ modelsDir }` to `createMlxProvider` or set `MLX_MODELS_DIR`.
- Override where mlx-ts downloads assets from: set `MLX_TS_HOST_BASE_URL` (a base URL containing `mlx-host` and `mlx.metallib`).
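These environment variables can also be set per invocation. A minimal sketch (the script name `app.mjs` and the cache path are placeholders, not part of mlx-ts):

```sh
# Run the host on CPU and keep models in a project-local cache.
# `app.mjs` stands in for any script that uses createMlxProvider.
MLX_HOST_DEVICE=cpu \
MLX_MODELS_DIR="$PWD/.mlx-models" \
node app.mjs
```

Explicit options passed to `createMlxProvider` (such as `{ modelsDir }`) take precedence over the corresponding environment variables.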
## OpenCode integration

OpenCode supports OpenAI-compatible providers: it allows setting `options.baseURL` (OpenCode Providers) and selecting models via `provider_id/model_id` (OpenCode Models).

mlx-ts ships a small OpenAI-compatible local server:
```sh
# Start local server (choose any MLX model id)
npx mlx-ts-opencode --model mlx-community/Llama-3.2-1B-Instruct-4bit --port 3755

# Generate an opencode.json snippet
npx mlx-ts-opencode --print-config --model mlx-community/Llama-3.2-1B-Instruct-4bit --port 3755 > opencode.json
```
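The generated snippet points OpenCode's OpenAI-compatible provider at the local server. Its shape is roughly like the sketch below — the provider id `mlx`, the `/v1` path, and the exact field layout are illustrative assumptions, not the guaranteed output of `--print-config`:

```json
{
  "provider": {
    "mlx": {
      "npm": "@ai-sdk/openai-compatible",
      "options": { "baseURL": "http://localhost:3755/v1" },
      "models": {
        "mlx-community/Llama-3.2-1B-Instruct-4bit": {}
      }
    }
  }
}
```

Prefer the output of `--print-config` over hand-writing this file, since it matches the port and model id you started the server with.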