next-maia (v1.1.0)
Run Maia2 chess models entirely in the browser — human-like move predictions conditioned on Elo, powered by onnxruntime-web and chess.js.
Overview
next-maia is a TypeScript library that brings the Maia2 neural network to the browser. Maia2 is trained to play chess the way humans at specific Elo levels actually play — not optimally, but authentically. This library handles the full inference pipeline client-side with no server required.
Why an opening book? Maia2 was not trained on the first 5 plies, making its opening play unreliable (e.g. shuffling knights back and forth). next-maia includes a pre-computed ECO opening book that covers the early game, automatically handing off to the neural network once the position leaves the book.
Features
- Browser-native inference — ONNX model runs entirely in the browser via WebAssembly; no server-side compute needed
- ECO opening book — 3,639 opening lines covering all ECO codes (A–E); weighted-random move selection for natural variety
- Automatic model caching — downloaded models are stored in the Cache API for instant subsequent loads
- Elo-conditioned evaluation — pass self and opponent Elo ratings (1100–2000) to get skill-appropriate move predictions
- Full policy distribution — returns probabilities over all legal moves (UCI notation) and a white win probability
Installation
```
npm install next-maia
```

Model Preparation: Exporting ONNX Models
Because models are ~280MB, they cannot be bundled directly in the npm package. You must export them and host them (or place them in your web app's public/ folder).
1. Prerequisites
Ensure you are using Python 3.10+ and have cloned the official maia2 Python repository.
Note: If you get an externally-managed-environment error, use a virtual environment.
```
# 1. Create and activate a virtual environment
python3 -m venv venv
source venv/bin/activate

# 2. Install dependencies (relaxed versions for compatibility)
pip install onnx onnxruntime onnxscript gdown
pip install -e .
```

2. The Export Script
Create export_onnx.py in the root of the maia2 Python repo:
```python
import torch
from maia2 import model

for maia_type in ["rapid", "blitz"]:
    print(f"--- Processing {maia_type.upper()} ---")
    maia2_model = model.from_pretrained(type=maia_type, device="cpu")
    maia2_model.eval()

    # Define dummy inputs for tracing
    dummy_boards = torch.randn(1, 18, 8, 8)
    dummy_elo = torch.tensor([0], dtype=torch.long)

    torch.onnx.export(
        maia2_model,
        (dummy_boards, dummy_elo, dummy_elo),
        f"maia_{maia_type}_onnx.onnx",
        export_params=True,
        opset_version=18,
        input_names=["boards", "elo_self", "elo_oppo"],
        output_names=["logits_maia", "logits_side_info", "logits_value"],
        dynamic_axes={
            "boards": {0: "batch_size"},
            "elo_self": {0: "batch_size"},
            "elo_oppo": {0: "batch_size"},
        },
    )
    print(f"Saved maia_{maia_type}_onnx.onnx")
```

Quick Start
```typescript
import Maia from "next-maia";

const engine = new Maia({
  modelPath: "/models/maia_rapid_onnx.onnx", // Update path to point to your exported model
  // Serve the onnxruntime-web WASM binaries via CDN or locally
  wasmPaths: "https://cdn.jsdelivr.net/npm/[email protected]/dist/",
});

// Wait for the model to load and cache
await engine.Ready;

// Evaluate a position
const fen = "rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b KQkq - 0 1";
const result = await engine.evaluate(fen, 1500, 1500);

console.log(result.policy);   // { "e7e5": 0.32, "c7c5": 0.18, ... }
console.log(result.value);    // 0.48 — white win probability
console.log(result.fromBook); // true — move came from the opening book
```

API
new Maia(options)
Creates an engine instance and begins loading the ONNX model asynchronously.
| Option | Type | Required | Description |
| ------------------ | ------------------------------- | -------- | ------------------------------------------------------ |
| modelPath | string | Yes | URL or path to the .onnx model file |
| wasmPaths | ort.Env.WasmPrefixOrFilePaths | No | Custom paths for ONNX Runtime WASM binaries |
| externalDataPath | string | No | Path to an external .onnx.data file for split models |
engine.Ready
```typescript
engine.Ready: Promise<boolean>
```

Resolves to true when the model has loaded and is ready for inference. Always await this before calling evaluate.
engine.evaluate(fen, eloSelf, eloOppo)
```typescript
engine.evaluate(fen: string, eloSelf: number, eloOppo: number): Promise<EvaluationResult>
```

Evaluates a chess position. The engine checks the opening book first and falls back to ONNX inference for positions not in the book.
Parameters
| Parameter | Type | Description |
| --------- | -------- | ----------------------------------------------------------------------------------------------- |
| fen       | string   | Board position in FEN notation |
| eloSelf | number | Elo of the player to move (clamped to 1100–2000) |
| eloOppo | number | Elo of the opponent (clamped to 1100–2000) |
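The clamping and bucketing behavior described here can be sketched as follows. The function names are illustrative, not part of the next-maia API, and the exact bucket indexing is an assumption based on the documented 1100–2000 range in steps of 100.

```typescript
// Hypothetical sketch of next-maia's Elo handling: ratings are clamped to
// [1100, 2000] and bucketed in steps of 100 before being fed to the model
// as an integer category. The library's internal mapping may differ.
function clampElo(elo: number): number {
  return Math.min(2000, Math.max(1100, elo));
}

function eloBucket(elo: number): number {
  // 1100–1199 → 0, 1200–1299 → 1, ..., 2000 → 9
  return Math.min(9, Math.floor((clampElo(elo) - 1100) / 100));
}
```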
Returns Promise<EvaluationResult>
```typescript
type EvaluationResult = {
  policy: Record<string, number>; // UCI move → predicted probability, e.g. { "e2e4": 0.31, ... }
  value: number;                  // White win probability in [0, 1]
  fromBook: boolean;              // true if the move was sourced from the opening book
};
```

- When fromBook is true, policy contains a single entry with probability 1 and value is 0.5.
- When fromBook is false, policy is a full softmax distribution over all legal moves.
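As a consumer-side sketch, here is one way to pick a move from the returned result. The bestMove helper is hypothetical, not part of the next-maia API.

```typescript
// Shape of the result returned by engine.evaluate (as documented above).
type EvaluationResult = {
  policy: Record<string, number>;
  value: number;
  fromBook: boolean;
};

// Illustrative helper: greedily pick the highest-probability move.
function bestMove(result: EvaluationResult): string {
  return Object.entries(result.policy).reduce((a, b) => (b[1] > a[1] ? b : a))[0];
}

// Example: a book hit carries a single certain move and a neutral value.
const bookHit: EvaluationResult = {
  policy: { e2e4: 1 },
  value: 0.5,
  fromBook: true,
};
```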
How It Works
Inference pipeline
```
evaluate(fen, eloSelf, eloOppo)
  │
  ├─► Opening book lookup
  │     Position found → return book move (weighted random)
  │     Position not found ↓
  │
  ├─► Preprocess FEN
  │     Mirror board if black's turn (model always sees white's perspective)
  │     Encode to 18-channel 8×8 float tensor
  │     Bucket Elo ratings into discrete categories (1100–2000, steps of 100)
  │     Build legal move mask via chess.js
  │
  ├─► ONNX inference (onnxruntime-web / WebAssembly)
  │     Inputs: boards [1×18×8×8], elo_self [int64], elo_oppo [int64]
  │     Outputs: logits_maia (policy head), logits_value (value head)
  │
  └─► Post-process
        Mask illegal moves → softmax → policy probabilities
        Convert value head → white win probability [0, 1]
        Mirror moves back if position was flipped
```

Board encoding
The board is encoded as a [1, 18, 8, 8] float tensor:
| Channels | Content |
| -------- | ---------------------------------------- |
| 0–5 | White piece occupancy (P, N, B, R, Q, K) |
| 6–11 | Black piece occupancy (p, n, b, r, q, k) |
| 12 | Side to move (1.0 = white, 0.0 = black) |
| 13–16 | Castling rights (K, Q, k, q) |
| 17 | En passant target square |
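A minimal sketch of this layout, assuming row-major channel → rank → file ordering (the library's actual encoder lives in src/encode/ and may order squares differently):

```typescript
// Hypothetical sketch of the 18×8×8 tensor layout described above.
// The tensor is a flat Float32Array of length 18 * 8 * 8 = 1152;
// channel c, rank r, file f maps to index c*64 + r*8 + f.
const CHANNELS = 18;

const planeIndex = (channel: number, rank: number, file: number): number =>
  channel * 64 + rank * 8 + file;

function emptyBoardTensor(): Float32Array {
  return new Float32Array(CHANNELS * 8 * 8);
}

// Example: mark a white pawn (channel 0) on e2 (rank 1, file 4) and fill
// the side-to-move plane (channel 12) with 1.0 for white to move.
const t = emptyBoardTensor();
t[planeIndex(0, 1, 4)] = 1.0;
for (let sq = 0; sq < 64; sq++) t[12 * 64 + sq] = 1.0;
```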
Opening book
The book is pre-computed from the full ECO opening database (A00–E99, 3,639 lines). At build time, each opening line is replayed move-by-move and every intermediate position is indexed by its first four FEN fields (ignoring move clocks). The resulting opening-book.json maps each position to a frequency-weighted counter of known continuations.
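The position-key scheme described above can be sketched as a one-line normalization; bookKey is an illustrative name, not the library's internal function.

```typescript
// Keep only the first four FEN fields (piece placement, side to move,
// castling rights, en passant square), dropping the halfmove and fullmove
// clocks so the same position reached at different move numbers maps to
// the same book entry.
function bookKey(fen: string): string {
  return fen.split(" ").slice(0, 4).join(" ");
}
```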
At runtime, getBookMove looks up the position, filters to legal moves, and samples proportionally to frequency — so popular moves (e.g. 1. e4, 1. d4) appear more often, while rare sidelines still occur occasionally.
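The frequency-weighted sampling can be sketched as follows, assuming each book entry maps UCI moves to raw counts (the real opening-book.json shape may differ):

```typescript
// Sketch of frequency-weighted selection: draw a random point in the
// total mass and walk the entries until it is exhausted, so popular
// continuations are chosen proportionally more often.
function pickBookMove(counts: Record<string, number>): string {
  const entries = Object.entries(counts);
  const total = entries.reduce((sum, [, n]) => sum + n, 0);
  let r = Math.random() * total;
  for (const [move, n] of entries) {
    r -= n;
    if (r <= 0) return move;
  }
  return entries[0][0]; // fallback, unreachable for non-empty counts
}
```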
Project Structure
```
src/
├── index.ts                 # Public entry point — exports Maia class
├── constants/               # Board dimensions, Elo range, cache key
├── data/
│   ├── all_moves.json       # UCI move → model index (8,835 entries)
│   ├── all_moves_reversed.json
│   └── opening-book.json    # Pre-computed position → moves map (5,188 positions)
├── elo/                     # Elo → discrete category mapping
├── encode/                  # FEN → 18-channel float tensor
├── mirror/                  # Board and move mirroring (black ↔ white perspective)
├── model/
│   ├── maia.ts              # Maia class: constructor, evaluate()
│   ├── session.ts           # ONNX session creation + Cache API model caching
│   └── process-outputs.ts   # Softmax, legal move masking, value conversion
├── openings/                # Opening book lookup
└── preprocess/              # Preprocessing orchestration
```

Development
Prerequisites
- Node.js 18+
- npm 9+
Setup
```
npm install
```

Build

```
npm run build
```

Compiles TypeScript to CommonJS in dist/.

Test

```
npm test           # run all tests once
npm run test:watch # watch mode
```

Tests cover the four pure modules — opening book, Elo mapping, board mirroring, and tensor encoding — using Vitest. The Maia class itself is not unit-tested here since it depends on the ONNX runtime and a model file.
Dependencies
| Package | Version | Purpose |
| ------------------------------------------------ | ------- | ------------------------------------ |
| chess.js | ^1.4.0 | FEN parsing, legal move generation |
| onnxruntime-web | ^1.24.2 | ONNX model inference via WebAssembly |
License
This project relies on the implementation and models provided by the official Maia-2 project (CSSLab), which are licensed under the MIT License and released for broader community usage.
