@arach/ora
TypeScript-first text-to-speech runtime primitives and playback tracking.
Ora
Ora is a TypeScript-first text-to-speech runtime for provider integration, voice discovery, and synthesis execution.
Repository: github.com/arach/ora
The package is intentionally small:
- provider registration and request normalization
- credential injection and runtime lifecycle instrumentation
- stable voice discovery
- buffered and streaming synthesis
- optional playback utilities for apps that need stateful tracking
Why this exists
TTS providers and model hosts differ in API shape, authentication, and voice catalogs.
Ora gives you one integration surface so your app can:
- discover voices per provider
- synthesize speech through a stable request/response contract
- keep provider details and credentials out of host app business logic
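The stable request/response contract can be pictured roughly like this. These are hypothetical shapes for illustration, not the published types; field names beyond `text`, `voice`, `format`, and `audio` are assumptions:

```typescript
// Hypothetical shapes illustrating a provider-agnostic TTS contract.
// The real @arach/ora types may differ.
interface SynthesizeRequest {
  text: string;
  voice: string; // provider-specific voice id, e.g. from listVoices()
  format: "mp3" | "wav";
}

interface SynthesizeResponse {
  // Inline bytes, a URL, or both, regardless of which backend served it.
  audio: { bytes?: Uint8Array; url?: string };
  provider: string; // which backend actually handled the request
}
```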
Install
bun add @arach/ora
@arach/ora also installs cleanly with pnpm add @arach/ora and npm install @arach/ora.
The published package ships with:
- ESM and CommonJS entrypoints
- bundled TypeScript declarations for both module systems
- the ora-worker CLI for local or remote worker processes
Basic setup
import {
  OraBufferedInstrumentationSink,
  OraMemoryCacheStore,
  OraMemoryCredentialStore,
  createOraRuntime,
  createOpenAiTtsProvider,
} from "@arach/ora";

const runtime = createOraRuntime({
  providers: [createOpenAiTtsProvider()],
  cacheStore: new OraMemoryCacheStore(),
  credentialStore: new OraMemoryCredentialStore(),
  instrumentation: [new OraBufferedInstrumentationSink()],
});
const openai = runtime.provider("openai");
openai.setCredentials({
  apiKey: process.env.OPENAI_API_KEY ?? "",
});

const voices = await openai.listVoices();
const response = await openai.synthesize({
  text: "Hello from Ora.",
  voice: voices[0]?.id ?? "alloy",
  format: "mp3",
});
const providers = await runtime.listProviderSummaries();
const catalog = await runtime.catalog();
const cachedEntries = await runtime.queryCache({ provider: "openai" });

response is normalized into a consistent schema regardless of backend:
response.audio carries the real asset reference for playback: inline bytes, a URL, or both.
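A minimal sketch of consuming that reference, assuming the inline field is named `bytes` and the remote field `url` (the actual field names in the published types may differ):

```typescript
// Hypothetical shape of the normalized audio reference; field names are assumptions.
interface OraAudioRef {
  bytes?: Uint8Array; // inline audio data
  url?: string; // remote asset URL
}

// Prefer inline bytes when present, otherwise fetch the URL.
async function resolveAudio(audio: OraAudioRef): Promise<Uint8Array> {
  if (audio.bytes) return audio.bytes;
  if (audio.url) {
    const res = await fetch(audio.url);
    return new Uint8Array(await res.arrayBuffer());
  }
  throw new Error("audio reference has neither bytes nor url");
}
```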
Streaming
for await (const event of openai.stream({
  text: "Streaming is optional.",
  format: "wav",
  preferences: {
    priority: "responsiveness",
  },
})) {
  console.log(event.type, event.timeMs);
}

The stream API is shared across providers that support it.
Remote worker
If you want to keep inference on another machine (for example a Mac mini), run a worker and connect it through createRemoteTtsProvider(...).
After installing the package, run the worker with your package manager's exec command:
bunx ora-worker init --host 0.0.0.0 --port 4020 --token dev-secret
bunx ora-worker serve --config .ora-worker/config.json

Worker endpoints:
- GET /health
- GET /v1/providers
- GET /v1/providers/:provider
- GET /v1/providers/:provider/voices
- GET /v1/catalog
- GET /v1/voices
- GET /v1/cache
- GET /v1/cache/:cacheKey
- DELETE /v1/cache/:cacheKey
- POST /v1/audio/speech
- POST /v1/audio/speech/stream
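A hedged sketch of calling the synthesis endpoint directly over HTTP. The JSON body shape and bearer-token header are assumptions inferred from the CLI flags above, not documented wire formats:

```typescript
// Build the request options for POST /v1/audio/speech on a running worker.
// Body fields (provider, text, format) are assumptions for illustration.
function buildSpeechRequest(text: string, token: string): RequestInit {
  return {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`, // token from `ora-worker init`
    },
    body: JSON.stringify({ provider: "openai", text, format: "mp3" }),
  };
}

async function remoteSynthesize(baseUrl: string, text: string, token: string): Promise<ArrayBuffer> {
  const res = await fetch(`${baseUrl}/v1/audio/speech`, buildSpeechRequest(text, token));
  if (!res.ok) throw new Error(`worker error: ${res.status}`);
  return res.arrayBuffer();
}
```

In practice, createRemoteTtsProvider(...) wraps this transport for you; the sketch only shows what travels over the wire under these assumptions.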
Advanced playback APIs
The following optional utilities are available for apps that need live synchronization:
- OraPlaybackTracker
- OraDocumentSession
- OraPlaybackOrchestrator
These remain available for UI-driven reading or highlighting flows without being part of the core usage narrative.
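To make "stateful tracking" concrete, here is an illustrative sketch of the kind of state a highlighting UI needs. This is NOT the actual OraPlaybackTracker API, which is not documented here; the class and its fields are invented for illustration:

```typescript
// Illustrative only: a minimal stateful playback tracker showing what a
// reading/highlighting flow tracks. Not the @arach/ora API.
interface Unit {
  id: string; // e.g. a paragraph or sentence id
  startMs: number;
  endMs: number;
}

class SimplePlaybackTracker {
  constructor(private units: Unit[]) {}

  // Return the unit under the playhead, e.g. to highlight the text being read.
  activeUnit(timeMs: number): Unit | undefined {
    return this.units.find((u) => timeMs >= u.startMs && timeMs < u.endMs);
  }
}
```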
Examples
- bun run example:openai writes a sample speech file to .ora-output/openai-article-sample.mp3
- bun run example:openai-document exercises paragraph-first document synthesis
- bun run example:orchestrator prints multi-unit orchestration state
Development
bun install
bun run setup:local
bun run test
bun run check
bun run build
bun run package:check
bun run docs:generate
bun run verify

While docs are generated and proxied by the existing scripts, the local gateway continues to serve the repo-backed fallback homepage.
