# openai-harmony-js

TypeScript/JavaScript utilities for the GPT‑OSS Harmony format: renderers, parsers, tokenizers, and streaming helpers.
## Features
- Render a structured conversation to Harmony completion tokens
- Parse token arrays back into a typed conversation
- Tokenize raw completion strings and parse conversations directly from strings
- Detect Harmony-formatted strings
- Extract "analysis" (reasoning), "final", and "commentary" text from GPT‑OSS streams
- Incremental streaming parser for live updates
- ESM-first, TypeScript types included, Node ≥ 18
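These helpers all operate on the Harmony wire shape, which frames each message with paired markers. As a rough mental model (the `renderMessage` function below is a hypothetical illustration, not part of this package's API):

```ts
// Hypothetical illustration of the Harmony wire shape (not this package's API).
// Each message is framed as <|start|>role<|channel|>name<|message|>text<|end|>.
function renderMessage(role: string, channel: string, text: string): string {
  return `<|start|>${role}<|channel|>${channel}<|message|>${text}<|end|>`;
}

const frame = renderMessage("assistant", "message", "Hello!");
// "<|start|>assistant<|channel|>message<|message|>Hello!<|end|>"
```

The package's renderers and parsers work in terms of these markers, which are configurable (see "Custom delimiters" below).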
## Installation

```sh
npm install openai-harmony-js
# or
yarn add openai-harmony-js
# or
pnpm add openai-harmony-js
# or
bun add openai-harmony-js
```

## Quick start
```ts
import {
  Conversation,
  Message,
  renderConversation,
  parseTokens,
  type HarmonyConversation,
} from "openai-harmony-js";

const convo: HarmonyConversation = Conversation.fromMessages([
  Message.fromRoleAndContent("system", "You are a helpful assistant."),
  Message.fromRoleAndContent("user", "Hello!"),
]);

// Render to Harmony completion tokens
const tokens = renderConversation(convo);

// Parse the tokens back into a typed structure
const roundTripped = parseTokens(tokens);
```

## Parsing a Harmony completion string
If you receive a raw completion string containing Harmony markers like `<|start|>`, `<|channel|>`, `<|message|>`, and `<|end|>`, you can tokenize and parse it directly:
```ts
import { tokenizeCompletionString, parseConversationFromString } from "openai-harmony-js";

const raw = "<|start|>assistant" + "<|channel|>message<|message|>Hello there!" + "<|end|>";

const tokens = tokenizeCompletionString(raw);
const conversation = parseConversationFromString(raw);
```

## Extracting reasoning/final text from streams
GPT‑OSS models often stream Harmony strings that contain channels such as `analysis`, `final`, and `commentary`. Use these helpers to extract text safely at any point in the stream:
```ts
import { extractReasoningContent, extractFinalContent } from "openai-harmony-js";

const streamed = "...<|channel|>analysis<|message|>thinking...<|end|>...";

const analysis = extractReasoningContent(streamed); // "thinking..."
const final = extractFinalContent(streamed); // prefers `final`, falls back to `commentary`
```

## Incremental streaming parser
Use `HarmonyStreamParser` to accumulate partial chunks and get incremental snapshots (current analysis/final/commentary text, the last channel seen, and completeness):
```ts
import { HarmonyStreamParser } from "openai-harmony-js";

const stream = new HarmonyStreamParser();

// In your streaming loop, call addContent with the latest chunk
const result1 = stream.addContent("<|start|>assistant<|channel|>analysis<|message|>plan");
// result1.currentAnalysis === "plan" (partial)

const result2 = stream.addContent(" more<|end|>");
// result2.currentAnalysis === "plan more"
// result2.isComplete indicates whether <|start|> and <|end|> counts match

// Access the full buffer if needed
const full = stream.getBuffer();
```

## Custom delimiters
By default, the delimiters are `<|start|>`, `<|message|>`, and `<|end|>`. You can override them:
```ts
import { renderConversation, createParser, type HarmonyDelimiters } from "openai-harmony-js";

const custom: HarmonyDelimiters = {
  start: "<<S>>",
  message: "<<M>>",
  end: "<<E>>",
};

const tokens = renderConversation({ messages: [] }, { delimiters: custom });

const parser = createParser();
for (const t of tokens) parser.push(t, custom);
const parsed = parser.finish();
```

## Roles and channels
- Roles: `system`, `developer`, `user`, `assistant`, `tool`
- Content chunk channels (structured): `message`, `reasoning`, `tool`, `function`, `error`
- String helper channels (raw Harmony strings): `analysis`, `final`, `commentary`
This library accepts structured `HarmonyMessage` content with channels suited for token rendering, and it also provides string-level helpers that follow the GPT‑OSS convention (`analysis`/`final`/`commentary`) for easy extraction while streaming.
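Conceptually, the string-level helpers amount to a marker-delimited scan over the raw text. A minimal sketch (illustrative only; the library's actual implementation additionally handles custom delimiters and other edge cases):

```ts
// Illustrative sketch: pull the text of a named channel out of a Harmony string.
function extractChannel(input: string, channel: string): string {
  // Match <|channel|>NAME<|message|>...<|end|>, tolerating a missing <|end|>
  // so partially streamed text can still be extracted.
  const re = new RegExp(`<\\|channel\\|>${channel}<\\|message\\|>([\\s\\S]*?)(?:<\\|end\\|>|$)`);
  const match = re.exec(input);
  return match ? match[1] : "";
}

const raw =
  "<|start|>assistant<|channel|>analysis<|message|>thinking...<|end|>" +
  "<|start|>assistant<|channel|>final<|message|>Hello there!<|end|>";

extractChannel(raw, "analysis"); // "thinking..."
extractChannel(raw, "final"); // "Hello there!"
```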
## Encoding facade

```ts
import { loadHarmonyEncoding } from "openai-harmony-js";

const enc = loadHarmonyEncoding("HARMONY_GPT_OSS");
const tokens = enc.renderConversationForCompletion({ messages: [] });
const parsed = enc.parseMessagesFromCompletionTokens(tokens);
```

## API reference (surface)
- `renderConversation(conversation, options?) => string[]`
- `createParser() => { push(token, delimiters?), finish() }`
- `parseTokens(tokens, delimiters?) => HarmonyConversation`
- `tokenizeCompletionString(input, delimiters?) => string[]`
- `parseConversationFromString(input, delimiters?) => HarmonyConversation`
- `isHarmonyFormat(input, delimiters?) => boolean`
- `extractReasoningContent(input) => string`
- `extractFinalContent(input) => string`
- `HarmonyStreamParser` class for incremental parsing
- `Message.fromRoleAndContent(role, content)`
- `Conversation.fromMessages(messages)`
- `loadHarmonyEncoding(name)`
Types (partial): `HarmonyConversation`, `HarmonyMessage`, `HarmonyContentChunk`, `HarmonyRole`, `HarmonyChannel`, `HarmonyDelimiters`, `StreamParseResult`.
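For orientation, the exported shapes might look roughly like the following sketch. It is illustrative only; consult the package's published type declarations for the authoritative definitions:

```ts
// Rough, illustrative sketch of the core types (not the authoritative .d.ts).
type HarmonyRole = "system" | "developer" | "user" | "assistant" | "tool";
type HarmonyChannel = "message" | "reasoning" | "tool" | "function" | "error";

interface HarmonyContentChunk {
  channel: HarmonyChannel;
  text: string;
}

interface HarmonyMessage {
  role: HarmonyRole;
  // Plain strings (as in Message.fromRoleAndContent) or structured chunks.
  content: string | HarmonyContentChunk[];
}

interface HarmonyConversation {
  messages: HarmonyMessage[];
}

const example: HarmonyConversation = {
  messages: [{ role: "user", content: "Hello!" }],
};
```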
## Requirements

- Node.js 18 or newer
- ESM only (use `import` syntax)
## License
MIT — see LICENSE.
## Acknowledgements
This project mirrors concepts from the Harmony format by OpenAI and aims for parity with the Python reference where practical.
