js-bao-wss-client v1.0.20
Client library for js-bao-wss Yjs WebSocket service
JsBao Client Library
A TypeScript/JavaScript client library for js-bao-wss that provides HTTP APIs and real-time collaborative editing using Yjs. This README reflects the current implementation and replaces older docs that referenced removed options/behaviors.
Features
- Document Management: Create, list, update, delete via HTTP
- Permissions: Get/update/remove document permissions
- Invitations: Create/list/update/delete; accept/decline (invitee)
- Realtime Collaboration: Y.Doc sync over multi-tenant WebSocket
- Awareness: Presence/cursor broadcast and server-triggered refresh
- Auth/OAuth: Client-orchestrated OAuth and cookie refresh
- Passkey Authentication: WebAuthn/passkey support for passwordless sign-in
- Automatic Reconnect: Backoff + re-auth on 401
- Token Management: Proactive refresh in HTTP calls
- Analytics: Buffered event logging API with optional automatic lifecycle events
- Blob Storage: Upload/list/get/downloadUrl/delete per document with offline cache
- LLM: Chat API and model listing
- Workflows: Server-side multi-step processes with LLM, delays, and transformations
- Offline-first Open: Non-blocking open with IndexedDB-backed cache
- Offline Blob Cache: Cache API + IndexedDB backed uploads/reads with eviction and retry
- Network Controls: Online/offline modes, reachability, connection control
- Root Documents: Opt-in listing via includeRoot
Installation
npm install js-bao-wss-client
# Peer dependencies
npm install yjs lib0
Quick Start
Migrating from legacy decorators? Follow src/client/docs/js-bao-v2-migration.md before continuing; models must now be defined with defineModelSchema/createModelClass.
1. Initialize the Client
initializeClient(options) constructs JsBaoClient, waits for the embedded database to be ready, and blocks until the new auth bootstrap sequence finishes (persisted JWT, cookie refresh, offline unlock, or OAuth handoff). Always await it before interacting with the client:
import {
initializeClient,
defineModelSchema,
createModelClass,
InferAttrs,
TypedModelConstructor,
} from "js-bao-wss-client";
import type { BaseModel } from "js-bao";
const contactSchema = defineModelSchema({
name: "contacts",
fields: {
id: { type: "id", autoAssign: true, indexed: true },
name: { type: "string", indexed: true },
email: { type: "string", indexed: true },
status: { type: "string", default: "Active" },
},
});
type ContactAttrs = InferAttrs<typeof contactSchema>;
interface Contact extends ContactAttrs, BaseModel {}
const Contact: TypedModelConstructor<Contact> = createModelClass({
schema: contactSchema,
});
async function bootstrap() {
const client = await initializeClient({
apiUrl: "https://your-api.example.com",
wsUrl: "wss://your-ws.example.com",
appId: "your-app-id",
token: "your-jwt-token", // optional for OAuth/bootstrap
// Optional: override the local query engine (defaults to SQL.js)
databaseConfig: { type: "node-sqlite", options: { filePath: "./local.db" } },
blobUploadConcurrency: 4, // optional (default 2 concurrent uploads)
models: [Contact],
// Optional behaviors
offline: true, // enabled by default; set false to disable IndexedDB doc persistence
auth: {
persistJwtInStorage: true, // optional: reuse short-lived JWT across reloads while valid
storageKeyPrefix: "my-app", // optional namespace when running multiple clients on same origin
},
autoOAuth: false,
oauthRedirectUri: "https://your-app.com/oauth/callback",
suppressAutoLoginMs: 5000,
autoUnlockOfflineOnInit: true,
autoNetwork: true,
connectivityProbeTimeoutMs: 2000,
onConnectivityCheck: undefined,
globalAdminAppId: "global-admin-app",
wsHeaders: undefined,
logLevel: "info",
maxReconnectDelay: 30,
});
return client;
}
const client = await bootstrap();
Note: All following examples assume an async context (e.g., inside async function main() or using top-level await) so that await initializeClient(...) is valid.
Default behaviors
- offline mode is enabled unless you pass offline: false.
- databaseConfig defaults to { type: "sqljs" }. Supply a different engine only if you need it.
2. Listen to Connection Events
// Connection status
client.on("status", ({ status, net }) => {
console.log("Connection status:", status, net); // status plus network snapshot
});
// Authentication events
client.on("auth-failed", ({ message }) => {
console.error("Auth failed:", message);
// Redirect user to login
});
client.on("auth-success", () => {
console.log("Authentication successful");
});
client.on("auth:onlineAuthRequired", () => {
// Went online without a token; prompt user to sign in
});
// Connection errors
client.on("connection-error", (error) => {
console.error("Connection error:", error);
});
// Connection close
client.on("connection-close", (event) => {
console.log("Connection closed:", event.code, event.reason);
});
// Network mode changes
client.on("networkMode", ({ mode }) => {
console.log("Network mode:", mode);
});
// Auth lifecycle
client.on("auth:state", (s) => console.log("Auth state:", s));
client.on("auth:logout", () => {});
client.on("auth:logout:complete", () => {});
// Offline grant lifecycle
client.on("offlineAuth:enabled", () => {});
client.on("offlineAuth:unlocked", () => {});
client.on("offlineAuth:renewed", () => {});
client.on("offlineAuth:revoked", () => {});
client.on("offlineAuth:failed", () => {});
client.on("offlineAuth:expiringSoon", ({ daysLeft }) => {});
Analytics
The client exposes a buffered analytics queue that batches events, retries on reconnect, and shares storage with offline persistence. Use it to emit custom instrumentation or rely on the built-in automatic events described below.
Client API
- client.analytics.logEvent({ action, feature, context_json?, ... }): enqueue a single event. context_json accepts an object (auto-serialized) or a JSON string.
- client.analytics.flush(): attempt to send the queue immediately; also runs automatically on reconnect and right before unload/destroy.
- client.analytics.setPlanOverride(plan) and .setAppVersionOverride(version): stamp metadata onto every subsequent event until you clear or replace it.
- client.getLlmAnalyticsContext(): returns { logEvent, isEnabled } when any LLM auto phases are enabled so higher-level features can coordinate their own analytics.
Queued events are persisted in IndexedDB when offline storage is active, so short offline windows or reloads do not drop data. Everything funnels through the same analytics.batch WebSocket channel used by the live event test.
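The buffering semantics above can be pictured with a minimal sketch (a hypothetical helper, not the library's actual internals): events queue locally, flush hands them to a transport as one batch, and a failed batch is re-queued so a later flush (e.g., on reconnect) can retry it.

```javascript
// Minimal sketch of a buffered analytics queue. `send` is any async
// function (events[]) => void that throws on network failure.
class BufferedAnalyticsQueue {
  constructor(send) {
    this.send = send;
    this.queue = [];
  }
  logEvent(event) {
    const context_json =
      typeof event.context_json === "object"
        ? JSON.stringify(event.context_json) // auto-serialize objects
        : event.context_json;
    this.queue.push({ ...event, context_json });
  }
  async flush() {
    if (this.queue.length === 0) return 0;
    const batch = this.queue.splice(0, this.queue.length);
    try {
      await this.send(batch);
      return batch.length;
    } catch (err) {
      // Put the batch back so the next flush retries it.
      this.queue.unshift(...batch);
      return 0;
    }
  }
}
```

The real client additionally persists the queue to IndexedDB and flushes on reconnect and before unload, as described above.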
Automatic events
All automatic emitters are on by default; pass analyticsAutoEvents when constructing the client to opt out per feature.
- user_active_daily (feature: "session", toggle: analyticsAutoEvents.dailyAuth): first successful auth per calendar day.
- user_returned ("session", respects analyticsAutoEvents.minResumeMs): fired when the tab becomes visible after being hidden long enough. context_json.trigger indicates "visibility" or "manual".
- client_boot ("session", toggle: analyticsAutoEvents.boot): exactly once per client instance.
- first_doc_open / first_doc_edit ("documents", toggles: analyticsAutoEvents.firstDocOpen, analyticsAutoEvents.firstDocEdit): include the triggering documentId.
- offline_recovery ("network", toggle: analyticsAutoEvents.offlineRecovery.enabled, throttled by minIntervalMs): logged when moving from offline back to online.
- sync_error ("sync", toggle: analyticsAutoEvents.syncErrors.enabled): records the documentId and reason when flush/send attempts fail (with interval throttling).
- blob_upload_started / blob_upload_succeeded / blob_upload_failed ("blobs", toggles: analyticsAutoEvents.blobUploads.{start|success|failure}): include blob/document identifiers, attempt counts, byte size, and retry details (with truncated error text on failure).
- service_worker_control / service_worker_token_update ("service_worker", toggles: analyticsAutoEvents.serviceWorker.{control|tokenUpdate}): cover the bridge taking control and forwarding refreshed tokens (context_json.cause when available).
- session_end ("session", toggle: analyticsAutoEvents.sessionEnd): emitted on beforeunload and client.destroy(), including duration_ms and exit reason.
- gemini_request_started / gemini_request_succeeded / gemini_request_failed ("gemini", toggles: analyticsAutoEvents.gemini.{start|success|failure} or boolean): emitted for client.gemini.generate() and client.gemini.countTokens() lifecycles with model/context metadata.
When analyticsAutoEvents.llm enables any of start, success, or failure, client.getLlmAnalyticsContext() becomes non-null so LLM helpers can emit structured events without guessing configuration. analyticsAutoEvents.gemini controls client.getGeminiAnalyticsContext(), which powers automatic logging inside the Gemini namespace.
Example configuration:
const client = await initializeClient({
...options,
analyticsAutoEvents: {
firstDocEdit: false, // suppress milestone if your app logs a custom event
blobUploads: { start: false, success: true, failure: true },
llm: { start: true, success: true, failure: false },
},
});
Manual client.analytics.logEvent(...) calls share the same queue and flush behaviour as the automatic stream, so custom events keep order/metadata without extra plumbing.
Auth Events Reference
- auth-failed: Access token invalid/expired and refresh failed. Use this to trigger reauthentication.
  - Payload: { reason?: string; message?: string }
- auth-success: Authentication succeeded or token refreshed.
  - Payload: none
- auth-refresh-deferred: Access token refresh was deferred due to connectivity issues. Use this to show "trying to reconnect" UI.
  - Payload: { status: "scheduled" | "offline"; nextAttemptMs?: number; cause?: string }
- auth:onlineAuthRequired: Client attempted to go online without a token. Prompt for sign-in.
  - Payload: none
- auth:logout: Logout flow started (explicit sign-out). Clear app state/stop sensitive activities.
  - Payload: none
- auth:logout:complete: Logout flow finished.
  - Payload: none
- auth:state: Generic auth state changes.
  - Payload: { authenticated: boolean; mode: "online" | "offline" | "auto" | "none" }
Minimal example to react when reauthentication is needed:
const promptLogin = () => navigateToLogin();
client.on("auth-failed", promptLogin);
client.on("auth:onlineAuthRequired", promptLogin);
client.on("auth:state", ({ authenticated }) => {
if (!authenticated) promptLogin();
});
Persisting short-lived JWTs (optional)
By default the client keeps the access token in memory and relies on the refresh cookie whenever a reload happens. You can opt-in to caching the current short-lived JWT in IndexedDB so that a refresh can be skipped while the token is still valid:
const client = await initializeClient({
...options,
auth: {
persistJwtInStorage: true,
storageKeyPrefix: "tenant-a", // optional namespace per app/user sandbox
},
});
const info = client.getAuthPersistenceInfo();
// => { mode: "persisted", hydrated: false | true }
- The persisted token is only reused when it remains outside the refresh safety window (roughly 2 minutes before expiry). If the cached token is stale, the client falls back to the existing refresh flow.
- storageKeyPrefix lets you isolate multiple client instances that run on the same origin (e.g., multi-tenant dashboards or tests).
- Persistence is cleared automatically on logout, auth failures, or when you disable the feature. Apps that keep the default (persistJwtInStorage omitted) continue to run fully in-memory.
- Offline grants are unaffected; long-lived offline access still hinges on the encrypted grant workflow.
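The "refresh safety window" check described above reduces to a small predicate (a sketch; the helper name is hypothetical, and the roughly-2-minute window is taken from the text):

```javascript
// Decide whether a persisted JWT can be reused or must go through the
// refresh flow. expMs is the token's expiry as epoch milliseconds; the
// safety window guards against using a token that is about to expire
// mid-request.
const REFRESH_SAFETY_WINDOW_MS = 2 * 60 * 1000;

function canReusePersistedJwt(expMs, nowMs = Date.now()) {
  return expMs - nowMs > REFRESH_SAFETY_WINDOW_MS;
}
```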
First-party refresh proxy
Safari and other strict browsers block third-party cookies, so you can opt into a same-origin refresh proxy by wiring the client through your app worker:
const client = await initializeClient({
...options,
auth: {
refreshProxy: {
baseUrl: `${window.location.origin}/proxy`,
cookieMaxAgeSeconds: 7 * 24 * 60 * 60, // optional override (defaults to 7 days)
},
},
});
- baseUrl should be an absolute URL pointing to the first-party worker prefix that forwards to /app/:appId/api/auth/*.
- cookieMaxAgeSeconds lets you shorten/extend the refresh cookie TTL; omit it to use the worker default.
- Set enabled: false when you share config across environments but only want the proxy in production.
- Leave auth.refreshProxy undefined to preserve the existing direct-to-API behaviour.
- In local Vite development the sample app leaves the proxy disabled; set VITE_USE_REFRESH_PROXY=true to test the worker path locally.
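One way to picture the worker side: a same-origin handler that rewrites requests under the proxy prefix onto the upstream /app/:appId/api/auth/* path. This is a sketch under assumed names (the real worker's route layout may differ):

```javascript
// Hypothetical rewrite rule for a first-party refresh proxy worker:
// requests to `${origin}/proxy/auth/...` are forwarded to the upstream
// API at `/app/:appId/api/auth/...` so the refresh cookie stays
// same-origin in browsers that block third-party cookies.
function rewriteProxyUrl(requestUrl, { apiOrigin, appId, proxyPrefix = "/proxy" }) {
  const url = new URL(requestUrl);
  const prefix = `${proxyPrefix}/auth/`;
  if (!url.pathname.startsWith(prefix)) return null; // not a proxy request
  const rest = url.pathname.slice(prefix.length);
  return `${apiOrigin}/app/${appId}/api/auth/${rest}${url.search}`;
}
```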
Document Lifecycle Events
// Fires once per open call as soon as the Y.Doc is created and local wiring is ready
client.on("documentOpened", ({ documentId }) => {
console.log("documentOpened:", documentId);
});
// Fires up to twice per open cycle after the initial wiring is complete:
// - once when initial data is loaded from IndexedDB (browser + offline: true) *after*
// the local query engine (SQL.js/SQLite) has replayed/indexed the data
// - once when the document first becomes synced with the server
client.on(
"documentLoaded",
({ documentId, source, hadData, bytes, elapsedMs }) => {
console.log("documentLoaded:", {
documentId,
source,
hadData,
bytes,
elapsedMs,
});
}
);
// Fires after a document is fully closed and all related resources are cleaned up
client.on("documentClosed", ({ documentId }) => {
console.log("documentClosed:", documentId);
});
// Notes:
// - 'indexeddb' emits only when offline persistence is enabled and IndexedDB is available,
// and only after the js-bao local query engine finishes connecting (SQLite/SQL.js indexes ready).
// - 'server' emits on first transition to synced per open cycle; hadData/bytes reflect server updates applied.
// - elapsedMs is measured from the start of documents.open for that document.
// - Unsubscribe listeners on unmount to avoid duplicate logs.
documentMetadataChanged: payload details
The client emits documentMetadataChanged whenever local metadata changes or server metadata is merged into the local cache.
Payload shape:
{ documentId, metadata, changedFields?, action, source }
- action: "created" | "updated" | "evicted" | "deleted"
- source: "local" | "server"
- changedFields: array of field names that changed (when applicable)
- metadata: an object with the most recent local view of metadata, or null for evicted/deleted
Metadata fields (may be partially present):
- documentId: string
- title?: string
- lastKnownPermission?: "owner" | "read-write" | "reader" | "admin" | null
- permissionCachedAt?: string (ISO timestamp)
- lastOpenedAt?: string (ISO timestamp)
- lastSyncedAt?: string (ISO timestamp; updated on successful sync or server-merge)
- localBytes?: number (approx bytes of IndexedDB update store when available)
- hasUnsyncedLocalChanges?: boolean
- pendingCreate?: boolean (true for client-created docs pending server commit)
- createdAt?: string (ISO timestamp; local create time)
- localOnly?: boolean (true for offline-only documents)
Typical emissions:
- created/local: immediately after documents.create(...) updates local cache. changedFields often includes createdAt, pendingCreate, localOnly, and optionally title.
- updated/local: after local changes such as documents.update(...) (optimistic title), sync status updates (lastSyncedAt, hasUnsyncedLocalChanges), or localBytes refresh.
- updated/server: after documents.list({ refreshFromServer: true }) or network-first list merges server metadata (e.g., title, permission); changedFields reflects updated properties.
- evicted/local: after documents.evict(id) or documents.evictAll(...); metadata is null.
- deleted/server or local: the first delete seen (server push, list refresh, or local documents.delete) emits a single deleted event; subsequent delete/evict/list refreshes for the same doc are suppressed to avoid duplicates (including 404/offline fallbacks after a successful delete).
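The delete de-duplication described above amounts to remembering which document ids have already emitted deleted; a sketch (hypothetical helper, not the client's code):

```javascript
// Emit at most one "deleted" event per document id; later delete/evict/
// list-refresh sightings of the same id are suppressed.
function makeDeleteDeduper(emit) {
  const seen = new Set();
  return function onDeleteObserved(documentId, source) {
    if (seen.has(documentId)) return false; // suppressed duplicate
    seen.add(documentId);
    emit({ documentId, metadata: null, action: "deleted", source });
    return true;
  };
}
```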
Example listener:
client.on("documentMetadataChanged", (updates) => {
const u = Array.isArray(updates) ? updates[0] : updates;
if (!u) return;
// u: { documentId, metadata, changedFields?, action, source }
console.log(
"metadataChanged",
u.documentId,
u.action,
u.source,
u.changedFields
);
});
Offline-first: Open Behavior
The client supports non-blocking open so UIs can render immediately from local cache while network work continues in the background.
const { doc, metadata } = await client.documents.open(documentId, {
// Non-blocking knobs
waitForLoad: "localIfAvailableElseNetwork", // "local" | "network" | "localIfAvailableElseNetwork" (default)
enableNetworkSync: true, // false => per-doc manual start
retainLocal: true, // keep local cache on close
availabilityWaitMs: 30000, // network availability timeout (when needed)
});
// Manual start if you opened with enableNetworkSync: false
await client.startNetworkSync(documentId);
Events:
- documentOpened: Emitted once the Y.Doc exists and wiring is ready (before load events)
- documentLoaded: Per source (indexeddb/server); the indexeddb leg waits for replay plus SQLite/SQL.js indexing, the server leg fires on first sync. Payload: { documentId, source, hadData, bytes?, elapsedMs }.
- documentClosed: Emitted after a document is closed and cleanup completes
- permission: Emitted when permission changes; upgrade to write triggers a sync (respecting start mode)
- documentMetadataChanged: Unified metadata event. Payload shape: { documentId, metadata, changedFields?, action, source }; action: "created" | "updated" | "evicted" | "deleted"; source: "local" | "server"; metadata may be null for evicted/deleted.
- pendingCreateCommitted / pendingCreateFailed
- Existing: sync, status, awareness, connection-error, connection-close
Permission changes auto-sync
When a document transitions from non-writable to writable (e.g., reader → read-write), the client automatically runs a sync so earlier local edits are pushed (subject to the document's start mode).
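This upgrade detection can be thought of as a small predicate over the old and new permission (a sketch; names are illustrative, and treating "admin" as writable is an assumption based on the permission values listed elsewhere in this README):

```javascript
// A sync should fire exactly when a document goes from a non-writable
// permission (e.g., "reader" or none) to a writable one.
const WRITABLE = new Set(["owner", "read-write", "admin"]);

function shouldAutoSyncOnPermissionChange(prev, next) {
  return !WRITABLE.has(prev) && WRITABLE.has(next);
}
```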
Network Status / Offline Mode
client.getNetworkStatus(); // { mode: "auto" | "online" | "offline", transport: "connected"|"connecting"|"disconnected", isOnline: boolean, connected?: boolean, lastOnlineAt?: string, lastError?: string }
client.isOnline(); // boolean
await client.setNetworkMode("offline");
await client.goOffline();
await client.goOnline();
// HTTP requests fail fast in offline mode
Metadata Cache and Local Documents
The client maintains an IndexedDB-backed metadata index so apps can render lists and document summaries offline. Local listing is merged into documents.list(...); the former documents.listLocal() is removed.
// List documents (cache-first with background refresh by default)
const docs = await client.documents.list({
includeRoot: false,
// Default behavior is cache-first with background refresh when local cache exists
// You can control it explicitly with waitForLoad (see below)
waitForLoad: "localIfAvailableElseNetwork",
});
// List currently open documents (ids)
const open = await client.documents.listOpen();
// Get cached local metadata for a document
const meta = await client.documents.getLocalMetadata(documentId);
// Evict local data for a document (keeps remote doc intact)
await client.documents.evict(documentId);
// Evict all local data; onlySynced=true avoids unsynced-loss
await client.documents.evictAll({ onlySynced: true });
// Configure global retention
client.setRetentionPolicy({
// e.g., { maxDocs?: number, maxBytes?: number, ttlMs?: number, defaultRetain?: "persist" | "session" }
});
Notes:
- The client updates the local metadata cache automatically when documents.list() returns server data (including last-known permission and root doc metadata). Root is always cached from the server but filtered out of the returned list unless you pass includeRoot: true.
- Cache updates emit documentMetadataChanged events (typically with action: "updated", source: "server").
- Local eviction emits documentMetadataChanged with action: "evicted", metadata: null.
- Delete emits a single documentMetadataChanged with action: "deleted", then evicts locally without a second emission.
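A retention policy like { maxDocs, ttlMs } from setRetentionPolicy above can be pictured as selecting eviction candidates from cached metadata. This is only a sketch of one plausible strategy, not the client's actual selection logic:

```javascript
// Pick locally cached docs to evict under a retention policy:
// - drop docs idle longer than ttlMs,
// - then, if still over maxDocs, drop least-recently-opened docs.
// Docs with unsynced local changes are never selected, mirroring
// evictAll({ onlySynced: true }).
function selectEvictions(metas, { maxDocs = Infinity, ttlMs = Infinity }, nowMs = Date.now()) {
  const evict = new Set(
    metas
      .filter((m) => !m.hasUnsyncedLocalChanges)
      .filter((m) => nowMs - Date.parse(m.lastOpenedAt) > ttlMs)
      .map((m) => m.documentId)
  );
  const kept = metas.filter((m) => !evict.has(m.documentId));
  if (kept.length > maxDocs) {
    kept
      .filter((m) => !m.hasUnsyncedLocalChanges)
      .sort((a, b) => Date.parse(a.lastOpenedAt) - Date.parse(b.lastOpenedAt))
      .slice(0, kept.length - maxDocs)
      .forEach((m) => evict.add(m.documentId));
  }
  return [...evict];
}
```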
Listing options (waitForLoad)
documents.list supports the same high-level loading modes as documents.open via waitForLoad:
- "local": return local metadata immediately; no blocking network wait. If no local metadata exists, returns an empty list. If refreshFromServer is true (default), a background refresh runs to update the local cache.
- "network": block until the server responds (up to serverTimeoutMs, default 10000 ms). If the client is explicitly offline, the call fails fast with code LIST_UNAVAILABLE_OFFLINE.
- "localIfAvailableElseNetwork" (default):
  - If any cached metadata exists, return it immediately and, when refreshFromServer is true, refresh in the background.
  - If no cached metadata exists, block on the server (like "network").
Additional flags:
- includeRoot?: boolean: include per-user root document(s) in the results. This strictly follows waitForLoad; it does not force a blocking network call.
- refreshFromServer?: boolean (default true unless localOnly is true): controls whether a server request is made at all. Background refresh is only started when the primary flow returns immediately (i.e., it does not already block on network).
- localOnly?: boolean: short-circuits and returns only documents that have local data; no network access.
- serverTimeoutMs?: number: timeout for the blocking network path (when applicable).
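The three waitForLoad modes reduce to a small decision over "is cached metadata available?" and the mode. A sketch of the documented behavior (not the client's code):

```javascript
// Resolve a waitForLoad mode to the path documents.list takes:
// "cache" (return local data immediately, optionally refreshing in the
// background) or "network" (block on the server up to serverTimeoutMs).
function resolveListPath(waitForLoad, hasLocalMetadata, refreshFromServer = true) {
  switch (waitForLoad) {
    case "local":
      return { path: "cache", backgroundRefresh: refreshFromServer };
    case "network":
      return { path: "network", backgroundRefresh: false };
    case "localIfAvailableElseNetwork":
    default:
      return hasLocalMetadata
        ? { path: "cache", backgroundRefresh: refreshFromServer }
        : { path: "network", backgroundRefresh: false };
  }
}
```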
Pagination and tags
- Paging params: limit, cursor, forward, returnPage. Default sort is by grantedAt (document permission grant time) descending; pass forward: true for ascending.
- returnPage: true returns { items, cursor } (backward-compatible array when omitted). With refreshFromServer: true, a background page walker fetches the remaining pages and updates the cache / documentMetadataChanged events; set refreshFromServer: false if you only want the first page. Note: cursors come from server responses; local-only/local-first paths do not fabricate cursors or enforce limit sizing, so use returnPage: true when you need a cursor even if data is cached. To synchronously walk all pages yourself, loop on { returnPage: true, limit, cursor } until cursor is null. To hydrate the full cache in one call, use client.syncMetadata({ scope: "all", pageSize, includeRoot }) and then list with refreshFromServer: false.
- Server page size defaults to 100 when limit is omitted; you can request smaller/larger pages within server limits.
- Tag filtering: tag: string performs an exact-match filter server-side. Responses include tags: string[] and grantedAt; both are cached locally and usable offline (e.g., list + tag will be filtered from cache when offline/local-only). Root is also considered for tag queries; if you tag the root, it can appear in tag-filtered results even when the default list filters root out. When refreshFromServer is true with a tag, the client still fetches the full dataset in the background to keep the cache complete, then filters locally for the tag.
- Tag CRUD: server endpoints exist (POST /documents/:id/tags { tag }, DELETE /documents/:id/tags/:tag) and document create accepts optional tags; the high-level client currently exposes tagging via HTTP helpers or client.makeRequest.
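The manual page walk described above (loop on { returnPage: true, limit, cursor } until cursor is null) can be sketched as a generic helper. Here listPage stands in for client.documents.list so the loop stays testable:

```javascript
// Walk every page of a cursor-paginated list. `listPage` is any function
// with the documented shape: (opts) => Promise<{ items, cursor }>; in real
// code pass a wrapper around client.documents.list.
async function listAllPages(listPage, { limit = 100, ...rest } = {}) {
  const all = [];
  let cursor = undefined;
  do {
    const page = await listPage({ ...rest, returnPage: true, limit, cursor });
    all.push(...page.items);
    cursor = page.cursor;
  } while (cursor != null);
  return all;
}
```

For a one-shot cache hydration, prefer client.syncMetadata({ scope: "all", ... }) as noted above instead of walking pages yourself.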
Offline behavior:
- In explicit offline mode, "network" or the network leg of "localIfAvailableElseNetwork" fails fast with LIST_UNAVAILABLE_OFFLINE.
- "local" returns the local list (and skips background refresh).
Local-first Document Creation (Client-generated ULIDs)
Create documents local-first with a client-generated ULID. The client returns metadata immediately and, when not localOnly, marks the document as a pending create that is auto-committed when online.
// Local-first create (returns metadata)
const { metadata } = await client.documents.create({ title: "Draft" });
const id = metadata.documentId;
// Optional: manual commit (default onExists: "link")
await client.documents.commitOfflineCreate(id, { onExists: "link" });
// Start sync (if you opened with manual start)
await client.startNetworkSync(id);
// Introspection
const pending = await client.documents.listPendingCreates();
const isPending = await client.documents.isPendingCreate(id);
await client.documents.cancelPendingCreate(id);
Events: pendingCreateCommitted and pendingCreateFailed help drive UI state.
Root Documents
Some apps use a per-user root document. The server always returns the root in list responses (unless tag-filtered), and the client caches it. By default documents.list() filters it out; pass includeRoot: true to surface it (works offline after it’s cached).
// Exclude root (default)
const docs = await client.documents.list();
// Include root document(s)
const all = await client.documents.list({ includeRoot: true });
Behavior changes
- Root documents listing: documents.list() excludes root docs by default. Opt in with { includeRoot: true }.
- Offline mode requests: When networkMode is "offline", HTTP calls fail fast with code OFFLINE.
- Open options: documents.open() uses { waitForLoad, enableNetworkSync, retainLocal, availabilityWaitMs }. Older options like waitForPermission, offlineWritePolicy, per-doc offline, provisionalPermission, and startNetwork are removed.
- Create return shape: documents.create() returns { metadata } (no Y.Doc).
- Pending create events: pendingCreateCommitted / pendingCreateFailed only (no pendingCreate at creation time).
- Evict-all flag: documents.evictAll({ onlySynced }) (replaces onlyUnsynced).
OAuth Authentication
// Check if OAuth is available
const hasOAuth = await client.checkOAuthAvailable();
if (hasOAuth) {
// Start OAuth flow (redirects to Google)
await client.startOAuthFlow();
}
// Handle OAuth callback (in your callback page)
const urlParams = new URLSearchParams(window.location.search);
const code = urlParams.get("code");
const state = urlParams.get("state");
// Without constructing a client on the callback route:
import { JsBaoClient } from "js-bao-wss-client";
if (code && state) {
try {
const token = await JsBaoClient.exchangeOAuthCode({
apiUrl: API_URL,
appId: APP_ID,
code,
state,
});
// Persist token in your auth store and initialize the client later
// (example only; use your own secure storage strategy)
localStorage.setItem("jwt", token);
} catch (error) {
console.error("OAuth callback failed:", error);
}
}
// Later (e.g., after redirecting back to your app shell):
import { initializeClient } from "js-bao-wss-client";
const client = await initializeClient({
apiUrl: API_URL,
wsUrl: WS_URL,
appId: APP_ID,
token,
databaseConfig: { type: "sqljs" },
});
// Check authentication status
if (client.isAuthenticated()) {
console.log("User is authenticated");
const token = client.getToken();
}
// Manually set token
client.setToken("new-jwt-token");
Magic Link Authentication
The client supports passwordless email authentication via magic links. Magic links must be enabled in the admin console for your app.
Request Magic Link
// Send a magic link email to the user
await client.magicLinkRequest("user@example.com");
Handle Magic Link Callback
// In your callback page (e.g., /oauth/callback)
const params = new URLSearchParams(window.location.search);
const magicToken = params.get("magic_token");
if (magicToken) {
// Verify the token and complete authentication
const { user, promptAddPasskey, isNewUser } = await client.magicLinkVerify(magicToken);
console.log("Logged in as:", user.email);
// isNewUser is true if this is the user's first sign-in (account was just created)
if (isNewUser) {
// Show onboarding flow for new users
}
// If promptAddPasskey is true, consider prompting the user to add a passkey
if (promptAddPasskey) {
// Show UI to add passkey for future logins
}
}
Passkey Authentication
The client supports WebAuthn/passkey authentication for passwordless sign-in. Passkeys must be enabled in the admin console for your app.
Note: Passkeys can only be added to existing accounts (created via OAuth or Magic Link). To use passkey authentication:
1. User creates account via OAuth or Magic Link
2. User adds a passkey to their account
3. User can then sign in with the passkey on future visits
Check Auth Methods Availability
// Get auth configuration for the app
const config = await client.getAuthConfig();
// Check available authentication methods
if (config.hasPasskey) {
console.log("Passkeys are available");
}
if (config.magicLinkEnabled) {
console.log("Magic link sign-in is available");
}
if (config.hasOAuth) {
console.log("Google OAuth is available");
}
Sign In with Passkey
import { startAuthentication } from "@simplewebauthn/browser";
// 1. Get authentication options
const { options, challengeToken } = await client.passkeyAuthStart();
// 2. Authenticate with browser
const credential = await startAuthentication({ optionsJSON: options });
// 3. Complete authentication (sets token internally)
const { user, isNewUser } = await client.passkeyAuthFinish(credential, challengeToken);
console.log("Logged in as:", user.email);
// isNewUser is true if this is the user's first sign-in to this app
// (Note: for passkeys, this is rare since passkeys are added to existing accounts)
if (isNewUser) {
// Show onboarding flow
}
Add Passkey to Existing Account
import { startRegistration } from "@simplewebauthn/browser";
// User must be authenticated
// 1. Get registration options
const { options, challengeToken } = await client.passkeyRegisterStart();
// 2. Create passkey with browser
const credential = await startRegistration({ optionsJSON: options });
// 3. Complete registration
await client.passkeyRegisterFinish(credential, challengeToken, "MacBook Pro");
Manage Passkeys
// List user's passkeys
const { passkeys } = await client.passkeyList();
console.log(passkeys); // [{ passkeyId, deviceName, createdAt, lastUsedAt }]
// Update a passkey's device name
const { passkey } = await client.passkeyUpdate(passkeyId, {
deviceName: "Work MacBook",
});
// Delete a passkey
await client.passkeyDelete(passkeyId);
Document Management
Create and List Documents
// Create a new document (returns metadata)
const { metadata } = await client.documents.create({
title: "My New Document",
});
console.log("Created document:", metadata.documentId);
// List all documents user has access to
const documents = await client.documents.list();
// Get document details (network)
const docInfo = await client.documents.get(documentId);
console.log("Document:", docInfo.title, docInfo.permission);
// Update document (root titles cannot be changed)
const updatedDoc = await client.documents.update(documentId, {
title: "Updated Title",
});
// Delete a document (offline/pending-create/not-found handled by local eviction)
// Throws if the document is currently open unless you force-close it first
await client.documents.delete(documentId, { forceCloseIfOpen: true }); // auto-closes if open
Document Aliases
// Create or update an app-scoped alias
await client.documents.aliases.set({
scope: "app",
aliasKey: "home",
documentId,
});
// User-scoped alias (defaults userId to the current user)
await client.documents.aliases.set({
scope: "user",
aliasKey: "current-draft",
documentId,
});
// Resolve an alias
const alias = await client.documents.aliases.resolve({
scope: "app",
aliasKey: "home",
});
// Open a document via alias (same return shape as documents.open)
const { doc } = await client.documents.openAlias({
scope: "app",
aliasKey: "home",
});
// List aliases for a document (admin-only on the server)
const aliases = await client.documents.aliases.listForDocument(documentId);
// Delete an alias (no error if already missing)
await client.documents.aliases.delete({ scope: "app", aliasKey: "home" });
Atomic Create with Alias
Create a document and alias in a single atomic operation. This is an online-only operation that only creates the document if the alias doesn't already exist:
// Create document with app-scoped alias (requires admin/owner role)
const result = await client.documents.createWithAlias({
title: "Home Page",
alias: {
scope: "app",
aliasKey: "home",
},
});
console.log(result.documentId); // The created document ID
console.log(result.alias.aliasKey); // "home"
console.log(result.alias.documentId); // Same as documentId
// Create document with user-scoped alias
const userResult = await client.documents.createWithAlias({
title: "My Draft",
alias: {
scope: "user",
aliasKey: "current-draft",
},
});
// Attempting to create with existing alias throws HTTP 409
try {
await client.documents.createWithAlias({
title: "Another Home",
alias: { scope: "app", aliasKey: "home" },
});
} catch (err) {
console.log("Alias already exists");
}
Differences from separate create() + aliases.set():
- ✅ Atomic: Document is only created if alias doesn't exist
- ✅ No race conditions: Server-side transaction ensures consistency
- ✅ Cleaner error handling: Single 409 error if alias exists (no orphaned documents)
- ❌ Online only: Requires network connection (no offline support)
Use createWithAlias() when you need guaranteed uniqueness based on an alias (e.g., "only one home page per app"). Use regular create() + aliases.set() when offline support is needed or when the document should be created regardless of alias conflicts.
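Because alias keys are plain strings, apps usually derive them deterministically rather than hard-coding them. The helper below is a hypothetical sketch (not part of the SDK) that turns a human-readable title into a stable, URL-safe `aliasKey`:

```typescript
// Hypothetical helper, not part of the SDK: derive a stable, URL-safe
// aliasKey from a document title. Adjust the rules to your app's needs.
function toAliasKey(title: string): string {
  return title
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse runs of non-alphanumerics into one dash
    .replace(/^-+|-+$/g, "");    // strip leading/trailing dashes
}
```

For example, `toAliasKey("Home Page")` produces `"home-page"`, which can then be passed as the `aliasKey` in `createWithAlias` or `aliases.set`.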
Manage Permissions
// Get document permissions
const permissions = await client.documents.getPermissions(documentId);
permissions.forEach((perm) => {
console.log(`${perm.email}: ${perm.permission}`);
});
// Grant permission to a user
await client.documents.updatePermissions(documentId, {
userId: "user-123",
permission: "read-write", // 'owner' | 'read-write' | 'reader'
});
// Batch update permissions
await client.documents.updatePermissions(documentId, {
permissions: [
{ userId: "user-1", permission: "read-write" },
{ userId: "user-2", permission: "reader" },
],
});
// Remove permission
await client.documents.removePermission(documentId, userId);
// Validate access to a document
const accessResult = await client.documents.validateAccess(documentId);
if (accessResult.hasAccess) {
console.log("User has access:", accessResult.permission);
if (accessResult.viaInvitation) {
console.log("Access via invitation");
}
}
Blob Storage
Blobs are stored per document and inherit document permissions. The client exposes a BlobsAPI namespace under documents.
Access patterns:
const blobs = client.document(documentId).blobs();
Upload a Blob
const data = new TextEncoder().encode("hello blob");
const { blobId, numBytes, contentType } = await blobs.upload(data, {
filename: "hello.txt",
contentType: "text/plain",
disposition: "attachment", // or "inline"
// sha256Base64?: optional; computed automatically if omitted
});
Alternate single-step helper
// Convenience wrapper that returns { blobId, numBytes }
const { blobId, numBytes } = await client
.document(documentId)
.blobs()
.uploadFile(new TextEncoder().encode("hello alt"), {
filename: "alt.txt",
contentType: "text/plain",
});
List Blobs (with pagination)
const page1 = await client.document(documentId).blobs().list({ limit: 10 });
page1.items.forEach((b) => {
console.log(b.blobId, b.filename, b.size);
});
if (page1.cursor) {
const page2 = await blobs.list({ cursor: page1.cursor });
}
Get Blob Metadata
const meta = await client.document(documentId).blobs().get(blobId);
console.log(meta.filename, meta.contentType, meta.size);
Get a Download URL
// Returns a direct Worker URL (no presign). Add `disposition` to control attachment vs inline.
const url = client
.document(documentId)
.blobs()
.downloadUrl(blobId, { disposition: "attachment" });
// e.g., use in browser: window.location.href = url
Delete a Blob
await client.document(documentId).blobs().delete(blobId); // { deleted: true }
Read Cached Blobs (different shapes)
const text = await client.document(documentId).blobs().read(blobId, {
as: "text",
});
const arrayBuffer = await client
.document(documentId)
.blobs()
.read(blobId, { as: "arrayBuffer" });
const blobObj = await client.document(documentId).blobs().read(blobId, {
as: "blob",
});
const bytes = await client.document(documentId).blobs().read(blobId, {
as: "uint8array",
});
- All reads hit the Cache API / IndexedDB cache when available.
- Pass `forceRedownload: true` to refresh from the server even if cached.
- `disposition` mirrors the URL helper if you need server-side content handling hints.
Prefetch Blobs for Offline Use
await client.document(documentId).blobs().prefetch([blobA, blobB], {
concurrency: 4,
forceRedownload: false,
});
Prefetch downloads the bytes into the Cache API/IndexedDB store so subsequent read() calls succeed offline.
Inspect / Control the Upload Queue
const uploadsApi = client.document(documentId).blobs();
// Queue status (includes in-flight + pending items)
uploadsApi.uploads().forEach((task) => {
console.log(task.blobId, task.status);
});
// Pause/resume individual uploads
uploadsApi.pauseUpload(blobId);
uploadsApi.resumeUpload(blobId);
// Pause or resume everything for this document
uploadsApi.pauseAll();
uploadsApi.resumeAll();
// Global events (optional)
client.on("blobs:upload-progress", ([event]) => {
console.log(event.queueId, event.status, event.bytesTransferred);
});
client.on("blobs:upload-completed", ([event]) => {
console.log("done", event.queueId);
});
client.on("blobs:queue-drained", () => console.log("all uploads complete"));
// Adjust concurrency at runtime (minimum 1)
client.documents.setUploadConcurrency(5);
console.log("Current concurrency:", client.documents.getUploadConcurrency());
Notes
- The client automatically computes base64 SHA-256 if not provided.
- Upload requires write-level permission (or admin/owner). Listing, metadata, and download require reader+.
- Uploads are queued when offline; the manager processes up to 2 uploads in parallel when network conditions allow.
- Pass `forceRedownload` to `read`/`prefetch` for fresh server bytes.
- Events (`blobs:*`) surface queue state for progress bars or toast notifications.
- Client-side max upload size is not enforced in the SDK.
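For progress UIs, the `blobs:upload-progress` events can be reduced to a percentage. A minimal sketch of that reduction (the transferred/total byte counts are passed in as plain numbers; check your SDK version's event typings for the exact field names):

```typescript
// Sketch: clamp transferred/total byte counts from upload events into a
// 0-100 integer percentage suitable for a progress bar.
function uploadPercent(bytesTransferred: number, totalBytes: number): number {
  if (!Number.isFinite(totalBytes) || totalBytes <= 0) return 0;
  const pct = (bytesTransferred / totalBytes) * 100;
  return Math.min(100, Math.max(0, Math.round(pct)));
}
```

Wire it into the event handler above, e.g. `progressBar.value = uploadPercent(event.bytesTransferred, total)`, where `total` comes from your own upload bookkeeping.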
Offline Blob Storage
Blob storage is fully offline-aware:
Uploads while offline
await client.setNetworkMode("offline");
const { blobId } = await client
  .document(documentId)
  .blobs()
  .upload(new TextEncoder().encode("draft"), {
    filename: "draft.txt",
    contentType: "text/plain",
  });
// Inspect pending work
console.log(client.document(documentId).blobs().uploads());
- Bytes are written to the Cache API (browser) or kept in a short-lived in-memory map when caching is unavailable.
- Queue entries persist in IndexedDB so refreshes or reconnects continue uploading.
Reads when offline
const text = await client.document(documentId).blobs().read(blobId, {
  as: "text",
}); // Works offline thanks to the cached bytes
Coming back online
await client.setNetworkMode("online");
// Queue processes automatically; listen to blobs:queue-drained for completion
Prefetch before going offline
await client.document(documentId).blobs().prefetch(importantBlobIds);
Prefetched blobs remain available for subsequent offline read() calls.
Retention
- Set `retainLocal: false` on upload options to drop cached bytes after success while leaving metadata intact.
- `delete()` removes queue entries, cached bytes, and server objects by default.
Service Worker Integration
BlobManager caches blob responses in the shared Cache API (js-bao-blobs:<appId>:<userId>) and now exposes helpers so UI code can coordinate with the service worker:
const blobs = client.documents.blobs(documentId);
if (!blobs.hasServiceWorkerControl()) {
console.warn("Service worker has not taken control yet");
}
const url = blobs.proxyUrl(blobId, {
disposition: "attachment",
attachmentFilename: "report.pdf",
});
imageElement.src = url;
The client now posts these messages for you (including apiBaseUrl, cachePrefix, and the current token). To opt out (and send custom payloads) set serviceWorkerBridge: { enabled: false } when constructing JsBaoClient. Apps that never register a service worker simply ignore the bridge while continuing to use the shared Cache API for read() calls.
To support <img>/<video> tags and other non-authenticated fetches, add a service worker handler for requests on the same origin that match /app/{appId}/api/documents/{documentId}/blobs/{blobId}/download. The handler should swap the origin to the API host provided in the bridge payload, attach auth headers, fall back to the network when needed, and mirror the Cache API used by the main thread. A complete example:
// sw.js
const STATE = {
appId: null,
userId: null,
token: null,
cachePrefix: null,
globalAdminAppId: null,
apiBaseUrl: null,
};
// In-memory metadata example; persist to IndexedDB if you need SW restarts to keep state.
const BLOB_METADATA = new Map();
self.addEventListener("message", (event) => {
const { type, payload } = event.data || {};
if (!type || !payload) return;
if (type === "jsBao:init") {
STATE.appId = payload.appId ?? STATE.appId;
STATE.userId = payload.userId ?? STATE.userId;
STATE.cachePrefix = payload.blobs?.cachePrefix ?? STATE.cachePrefix;
STATE.globalAdminAppId = payload.globalAdminAppId ?? STATE.globalAdminAppId;
STATE.apiBaseUrl = payload.apiBaseUrl ?? STATE.apiBaseUrl;
STATE.token = payload.auth?.token ?? STATE.token;
} else if (type === "jsBao:tokenUpdated") {
STATE.token = payload.token ?? STATE.token;
}
});
self.addEventListener("fetch", (event) => {
const url = new URL(event.request.url);
const apiOrigin = STATE.apiBaseUrl ?? self.location.origin;
if (url.origin !== apiOrigin) return;
if (!STATE.appId) return;
if (!url.pathname.startsWith(`/app/${STATE.appId}/api/documents/`)) return;
if (!url.pathname.includes("/blobs/")) return;
if (!url.pathname.endsWith("/download")) return;
event.respondWith(handleProxy(event.request));
});
async function handleProxy(request) {
if (!STATE.appId) {
return fetch(request);
}
const requestUrl = new URL(request.url);
const apiBase = STATE.apiBaseUrl
? new URL(STATE.apiBaseUrl)
: new URL(requestUrl.origin);
const upstreamUrl = new URL(
`${requestUrl.pathname}${requestUrl.search}`,
apiBase.origin
);
const canonicalKey = buildCanonicalKey(apiBase.origin, requestUrl.pathname);
const metadata = extractDispositionMetadata(requestUrl);
if (metadata) {
BLOB_METADATA.set(canonicalKey, metadata);
}
const headers = new Headers(request.headers);
if (STATE.token) {
headers.set("Authorization", `Bearer ${STATE.token}`);
}
if (STATE.globalAdminAppId) {
headers.set("X-Global-Admin-App-Id", STATE.globalAdminAppId);
}
const upstreamRequest = new Request(upstreamUrl.toString(), {
method: request.method,
headers,
redirect: request.redirect,
cache: "no-store",
credentials: "omit",
mode: "cors",
});
const cacheName =
STATE.cachePrefix ?? `js-bao-blobs:${STATE.appId}:${STATE.userId}`;
const cache = await caches.open(cacheName);
const canonicalRequest = new Request(canonicalKey, { method: "GET" });
const effectiveMetadata = metadata ?? BLOB_METADATA.get(canonicalKey) ?? null;
const cached = await cache.match(canonicalRequest);
if (cached) {
return applyDisposition(cached, effectiveMetadata);
}
const upstream = await fetch(upstreamRequest);
if (!upstream.ok || request.method !== "GET") {
return upstream;
}
const sanitized = stripDisposition(upstream);
try {
await cache.put(canonicalRequest, sanitized.clone());
} catch (err) {
console.warn("[SW] Failed to write blob cache entry", err);
}
return applyDisposition(sanitized, effectiveMetadata);
}
function buildCanonicalKey(origin, pathname) {
return new URL(pathname, origin).toString();
}
function extractDispositionMetadata(url) {
const disposition = url.searchParams.get("disposition");
if (!disposition) return null;
const attachmentFilename =
url.searchParams.get("attachmentFilename") ?? undefined;
return { disposition, attachmentFilename };
}
function stripDisposition(response) {
const headers = new Headers(response.headers);
headers.delete("Content-Disposition");
return new Response(response.body, {
status: response.status,
statusText: response.statusText,
headers,
});
}
function applyDisposition(response, metadata) {
if (!metadata) return response;
const headers = new Headers(response.headers);
headers.delete("Content-Disposition");
if (metadata.disposition === "inline") {
headers.set("Content-Disposition", "inline");
} else if (metadata.disposition === "attachment") {
const filename = metadata.attachmentFilename;
if (filename) {
headers.set(
"Content-Disposition",
`attachment; filename="${sanitizeAsciiFilename(
filename
)}"; filename*=UTF-8''${encodeRFC5987(filename)}`
);
} else {
headers.set("Content-Disposition", "attachment");
}
}
return new Response(response.body, {
status: response.status,
statusText: response.statusText,
headers,
});
}
function sanitizeAsciiFilename(filename) {
return filename.replace(/["\\]/g, "_");
}
function encodeRFC5987(value) {
return encodeURIComponent(value)
.replace(
/['()*]/g,
(char) => `%${char.charCodeAt(0).toString(16).toUpperCase()}`
)
.replace(/%(7C|60|5E)/g, (_, hex) => `%${hex.toLowerCase()}`);
}
The canonical request uses only origin + pathname, so all disposition variants reuse the same cache entry. Metadata can live in memory (as shown) or in IndexedDB if you need to survive worker restarts. Because cached responses are stored without Content-Disposition, each hit reapplies headers based on the active request. Extend the sample with background eviction or cache versioning as needed.
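The canonicalization trick is easy to verify in isolation. This standalone sketch mirrors the sample's `buildCanonicalKey` (the URLs here are illustrative, not real endpoints):

```typescript
// Standalone sketch of the canonical-key idea from the service worker
// sample: keys are built from origin + pathname only, so every
// ?disposition=... variant of a download URL shares one cache entry.
function canonicalKey(requestUrl: string, apiOrigin: string): string {
  const { pathname } = new URL(requestUrl);
  return new URL(pathname, apiOrigin).toString();
}

const inlineVariant = canonicalKey(
  "https://app.example.com/app/a1/api/documents/d1/blobs/b1/download?disposition=inline",
  "https://api.example.com"
);
const attachmentVariant = canonicalKey(
  "https://app.example.com/app/a1/api/documents/d1/blobs/b1/download?disposition=attachment",
  "https://api.example.com"
);
// Both variants map to the same cached response.
```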
Document Invitations
// Create an invitation
const invitation = await client.documents.createInvitation(
documentId,
"[email protected]",
"read-write" // 'owner' | 'read-write' | 'reader'
);
console.log("Invitation created:", invitation.invitationId);
// List all invitations for a document
const invitations = await client.documents.listInvitations(documentId);
// Update an invitation (changes permission)
const updatedInvitation = await client.documents.updateInvitation(
documentId,
"[email protected]",
"reader"
);
// Get specific invitation
const inv = await client.documents.getInvitation(
documentId,
"[email protected]"
);
// Delete an invitation
await client.documents.deleteInvitation(documentId, invitationId);
// Accept or decline (invitee)
await client.document(documentId).acceptInvitation();
await client.document(documentId).declineInvitation(invitationId);
Pending Document Invitations (for the current user)
// List documents you’ve been invited to (pending, unexpired)
const pending = await client.me.pendingDocumentInvitations();
// Each item includes a best-effort `document` block with metadata (title, tags, createdAt, lastModified, createdBy)
for (const inv of pending) {
console.log(inv.document?.title, inv.document?.tags);
}
Users
// Look up basic profile info for a user in the current app (cached like `me`)
const user = await client.users.getBasic("u01H...");
console.log(user.name, user.email, user.appRole);
Invitation events
The client emits a unified invitation event for real-time invitation changes delivered over the WebSocket. Payload:
{ type: "invitation"; action: "created" | "updated" | "cancelled" | "declined"; invitationId; documentId; permission; title?; invitedBy?; invitedAt?; expiresAt?; document?: { title?; tags?; createdAt?; lastModified?; createdBy? } }
Example:
client.on("invitation", (evt) => {
const { action, documentId, invitationId, permission, title } = evt;
console.log("invitation event", action, documentId, invitationId, permission, title);
});
Use this to refresh invitation lists or badge counts without polling.
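A typical consumer of this event is a badge counter. The reducer below is a sketch that tracks pending invitations by id, using the event shape from the payload above (treat it as an approximation of the real typings):

```typescript
type InvitationAction = "created" | "updated" | "cancelled" | "declined";

// Sketch: maintain a set of pending invitation ids. "created"/"updated"
// keep an entry; "cancelled"/"declined" remove it. Badge count = set size.
function applyInvitationEvent(
  pending: Set<string>,
  evt: { action: InvitationAction; invitationId: string }
): Set<string> {
  const next = new Set(pending);
  if (evt.action === "created" || evt.action === "updated") {
    next.add(evt.invitationId);
  } else {
    next.delete(evt.invitationId);
  }
  return next;
}
```

Wire it up inside the `client.on("invitation", ...)` handler and render `pending.size` as the badge count.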
Large Language Models (LLM)
// List available models
const { models, defaultModel } = await client.llm.models();
console.log("Default model:", defaultModel);
// Basic chat
const reply = await client.llm.chat({
// model is optional; uses server default when omitted
messages: [
{ role: "system", content: "You are a helpful assistant." },
{ role: "user", content: "Summarize: Collaborative editing with Yjs." },
],
temperature: 0.2,
max_tokens: 512,
reasoning: {
effort: "medium",
exclude: false,
},
});
console.log(reply.content);
// Chat with image attachment (base64-encoded PNG)
const imageBase64 = await loadImageAsBase64("/path/to/screenshot.png");
const imageReply = await client.llm.chat({
messages: [
{
role: "system",
content: "You analyze screenshots and respond helpfully.",
},
{ role: "user", content: "Describe what you see in this image." },
],
attachments: [
{
type: "image",
mime: "image/png",
base64: imageBase64,
},
],
});
console.log(imageReply.content);
Gemini
// Text generation with optional structured output
const result = await client.gemini.generate({
messages: [
{
role: "system",
parts: [{ type: "text", text: "Reply in JSON with keys title and summary." }],
},
{
role: "user",
parts: [{ type: "text", text: "Summarize collaborative editing with Yjs." }],
},
],
structuredOutput: {
responseMimeType: "application/json",
responseJsonSchema: {
type: "object",
properties: {
title: { type: "string" },
summary: { type: "string" },
},
required: ["title", "summary"],
},
},
});
console.log(result.message.parts[0].text);
// Multimodal prompt with inline image data
const screenshot = await loadImageAsBase64("/path/to/screenshot.png");
await client.gemini.generate({
messages: [
{
role: "user",
parts: [
{ type: "text", text: "Describe this screenshot." },
{ type: "image", mimeType: "image/png", base64Data: screenshot },
],
},
],
});
// Raw passthrough using Google's payload schema
const rawPayload = {
contents: [
{
role: "user",
parts: [
{
text: "Summarize this JSON.",
},
{
inline_data: {
mimeType: "application/json",
data: someBase64Json,
},
},
],
},
],
};
const rawResponse = await client.gemini.generateRaw({
model: "models/gemini-2.5-flash",
body: rawPayload,
});
console.log(rawResponse.candidates?.[0]?.content);
// Token usage estimation
const tokens = await client.gemini.countTokens({
model: "models/gemini-2.5-flash",
messages: [{ role: "user", parts: [{ type: "text", text: "How many tokens?" }] }],
});
console.log(tokens.totalTokens);
Gemini Configuration & Notes
`client.gemini` proxies to Cloudflare worker routes:
- `GET /gemini/models` → `client.gemini.models()`
- `POST /gemini/generate` → `client.gemini.generate(...)`
- `POST /gemini/count-tokens` → `client.gemini.countTokens(...)`
- `POST /gemini/generate-raw` → `client.gemini.generateRaw({ model, body })`
These endpoints run server-side, so browser clients never see Google credentials.
- Deployments must configure a Gemini key in the worker environment (`GEMINI_API_KEY`, `GEMINI_API_TOKEN`, or `GEMINI_KEY`). Optional helpers: `GEMINI_DEFAULT_MODEL` (e.g. `models/gemini-2.5-flash`) and `GEMINI_ALLOWED_MODELS` (comma-separated allowlist such as `models/gemini-2.5-flash,models/gemini-2.5-pro`).
- Inline multimodal content uses base64 `parts` (text, image, file). Large file uploads can be added later via the Gemini Files API; V1 augments prompts with base64-inlined payloads up to ~25 MB.
- Structured output leverages `generationConfig.responseMimeType` plus optional `responseSchema`/`responseJsonSchema` so Gemini can return deterministic JSON. The full server response is always available via the `raw` field when you need annotations or safety metadata.
- Error handling surfaces `JsBaoError` with `code: "GEMINI_ERROR"`; the `details` property contains the raw upstream payload so you can log or render troubleshooting info.
- See `.dev.local.example` for sample environment values and `docs/gemini-direct-plan.md` for architectural details.
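When structured output is enabled as in the earlier example, the model's reply arrives as a JSON string in the first text part. A defensive parse helper can keep call sites clean. This is a sketch; the part shape follows the `generate()` examples above:

```typescript
// Sketch: pull the first text part out of a structured Gemini reply and
// parse it as JSON, returning null instead of throwing on bad output.
function parseStructuredReply<T>(
  parts: Array<{ type: string; text?: string }>
): T | null {
  const textPart = parts.find((p) => p.type === "text" && typeof p.text === "string");
  if (!textPart?.text) return null;
  try {
    return JSON.parse(textPart.text) as T;
  } catch {
    return null;
  }
}
```

Usage would look like `parseStructuredReply<{ title: string; summary: string }>(result.message.parts)`, falling back to the `raw` field when `null` is returned.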
Workflows
Workflows allow you to execute server-side, multi-step processes that can include LLM calls, delays, transformations, and more. The client provides APIs to start workflows, monitor their status, and receive real-time completion events.
Starting a Workflow
// Start a workflow with input data
const result = await client.workflows.start("my-workflow-key", {
message: "Hello world",
value: 42,
});
console.log("Run started:", result.runKey);
console.log("Run ID:", result.runId);
console.log("Status:", result.status);
Start Options
const result = await client.workflows.start(
"my-workflow-key",
{ message: "Hello" },
{
// Provide a custom runKey for idempotency (auto-generated if omitted)
runKey: "unique-run-identifier",
// Associate the run with a document
contextDocId: "doc-123",
// Pass additional metadata
meta: { source: "user-action", priority: "high" },
}
);
Duplicate Workflow Protection (Idempotency)
When you provide a runKey, the server ensures only one workflow run exists for that key. If you call start() again with the same runKey, the existing run is returned instead of creating a new one:
// First call creates the workflow
const first = await client.workflows.start(
"process-document",
{ docId: "abc" },
{ runKey: "process-abc-v1" }
);
console.log(first.existing); // false - new run created
// Second call with same runKey returns existing run
const second = await client.workflows.start(
"process-document",
{ docId: "abc" },
{ runKey: "process-abc-v1" }
);
console.log(second.existing); // true - existing run returned
console.log(second.runId === first.runId); // true - same run
This is useful for:
- Preventing duplicate processing when users click a button multiple times
- Safely retrying failed requests without creating duplicate work
- Implementing exactly-once semantics for critical operations
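For these patterns to work, the runKey must be a pure function of the work being done, not a random value. A sketch of a deterministic key builder (the naming scheme is illustrative, not mandated by the server):

```typescript
// Sketch: derive a deterministic runKey from the workflow's inputs so
// retries and double-clicks reuse the same run. Bump the version to
// intentionally force a fresh run for the same inputs.
function deterministicRunKey(
  workflowKey: string,
  docId: string,
  version = 1
): string {
  return `${workflowKey}:${docId}:v${version}`;
}
```

Then `client.workflows.start("process-document", { docId }, { runKey: deterministicRunKey("process-document", docId) })` is safe to call repeatedly.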
Checking Workflow Status
Poll the status of a running workflow:
const status = await client.workflows.getStatus("my-workflow-key", runKey);
console.log("Status:", status.status); // "running" | "complete" | "failed" | "terminated"
if (status.status === "complete") {
console.log("Output:", status.output);
}
if (status.status === "failed") {
console.log("Error:", status.error);
}
Listening for Workflow Events
Subscribe to real-time workflow completion events via WebSocket:
// Listen for workflow status changes
client.on("workflowStatus", (event) => {
console.log("Workflow event:", event.workflowKey, event.runKey);
console.log("Status:", event.status); // "completed" | "failed" | "terminated"
if (event.status === "completed") {
console.log("Output:", event.output);
}
if (event.status === "failed") {
console.log("Error:", event.error);
}
});
Note: To receive workflow events, you must have an active WebSocket connection. Opening a document establishes this connection:
// Open a document to establish WebSocket for receiving notifications
await client.documents.open(documentId);
// Now workflow events will be delivered
const result = await client.workflows.start("my-workflow", { data: "..." });
Event Payload
interface WorkflowStatusEvent {
type: "workflowStatus";
workflowKey: string;
workflowId: string;
runKey: string;
runId: string;
status: "completed" | "failed" | "terminated";
output?: any;
error?: string;
contextDocId?: string;
}
Listing Workflow Runs
View all workflow runs for the current user:
// List all runs
const runs = await client.workflows.listRuns();
console.log("Total runs:", runs.items.length);
runs.items.forEach((run) => {
console.log(run.runKey, run.status, run.createdAt);
});
// Filter by workflow
const filtered = await client.workflows.listRuns({
workflowKey: "my-workflow",
});
// Filter by status
const running = await client.workflows.listRuns({
status: "running",
});
// Pagination
const page1 = await client.workflows.listRuns({ limit: 10 });
if (page1.cursor) {
const page2 = await client.workflows.listRuns({
limit: 10,
cursor: page1.cursor,
});
}
Run Record Fields
interface WorkflowRun {
runId: string;
runKey: string;
instanceId: string;
workflowId: string;
workflowKey: string;
revisionId: string;
contextDocId?: string;
status: string;
createdAt: string;
endedAt?: string;
}
Terminating a Workflow
Cancel a running workflow:
const result = await client.workflows.terminate("my-workflow-key", runKey);
console.log("Terminated, final status:", result.status);
Sending File Attachments (PDFs, Images)
Workflows can process files like PDFs and images. Files must be base64-encoded before sending:
/**
* Load a file and convert to base64.
* Works in browsers with fetch + FileReader or ArrayBuffer.
*/
async function loadFileAsBase64(url: string): Promise<string> {
const response = await fetch(url);
const arrayBuffer = await response.arrayBuffer();
const bytes = new Uint8Array(arrayBuffer);
let binary = "";
for (let i = 0; i < bytes.length; i++) {
binary += String.fromCharCode(bytes[i]);
}
return btoa(binary);
}
// Load a PDF and send to workflow
const pdfBase64 = await loadFileAsBase64("/path/to/document.pdf");
const result = await client.workflows.start("extract-pdf-data", {
attachments: [
{
data: pdfBase64,
type: "application/pdf",
},
],
});
Loading from File Input (Browser)
async function fileToBase64(file: File): Promise<string> {
return new Promise((resolve, reject) => {
const reader = new FileReader();
reader.onload = () => {
const dataUrl = reader.result as string;
// Remove the data URL prefix (e.g., "data:application/pdf;base64,")
const base64 = dataUrl.split(",")[1];
resolve(base64);
};
reader.onerror = reject;
reader.readAsDataURL(file);
});
}
// Handle file input
const fileInput = document.querySelector('input[type="file"]') as HTMLInputElement;
fileInput.addEventListener("change", async () => {
const file = fileInput.files?.[0];
if (!file) return;
const base64Data = await fileToBase64(file);
const result = await client.workflows.start("process-upload", {
attachments: [
{
data: base64Data,
type: file.type, // e.g., "image/png", "application/pdf"
filename: file.name,
},
],
});
});
Loading from URL (Node.js)
import * as fs from "fs";
import * as path from "path";
function loadFileAsBase64Sync(filePath: string): string {
const buffer = fs.readFileSync(filePath);
return buffer.toString("base64");
}
const pdfPath = path.join(__dirname, "document.pdf");
const pdfBase64 = loadFileAsBase64Sync(pdfPath);
const result = await client.workflows.start("analyze-document", {
attachments: [
{
data: pdfBase64,
type: "application/pdf",
},
],
});
Complete Example: PDF Processing Workflow
import { initializeClient } from "js-bao-wss-client";
async function processPDF(pdfUrl: string) {
const client = await initializeClient({
apiUrl: "https://api.example.com",
wsUrl: "wss://ws.example.com",
appId: "my-app",
token: "jwt-token",
databaseConfig: { type: "sqljs" },
});
// Set up event listener for completion
const completionPromise = new Promise<any>((resolve) => {
client.on("workflowStatus", (event) => {
if (event.status === "completed") {
resolve(event.output);
}
});
});
// Open a document to establish WebSocket connection
const { metadata } = await client.documents.create({ title: "temp" });
await client.documents.open(metadata.documentId);
// Load and encode the PDF
const response = await fetch(pdfUrl);
const arrayBuffer = await response.arrayBuffer();
const bytes = new Uint8Array(arrayBuffer);
let binary = "";
for (let i = 0; i < bytes.length; i++) {
binary += String.fromCharCode(bytes[i]);
}
const pdfBase64 = btoa(binary);
// Start the workflow
const result = await client.workflows.start("extract-pdf-data", {
attachments: [
{
data: pdfBase64,
type: "application/pdf",
},
],
});
console.log("Workflow started:", result.runKey);
// Wait for completion (or poll with getStatus)
const output = await completionPromise;
console.log("Extracted data:", output);
// Cleanup
await client.documents.delete(metadata.documentId);
await client.destroy();
return output;
}
Polling for Completion
If you prefer polling over WebSocket events:
async function waitForCompletion(
client: JsBaoClient,
workflowKey: string,
runKey: string,
timeoutMs = 60000,
intervalMs = 2000
): Promise<any> {
const startTime = Date.now();
while (Date.now() - startTime < timeoutMs) {
const status = await client.workflows.getStatus(workflowKey, runKey);
if (status.status === "complete") {
return status.output;
}
if (status.status === "failed") {
throw new Error(`Workflow failed: ${status.error}`);
}
if (status.status === "terminated") {
throw new Error("Workflow was terminated");
}
// Still running, wait and retry
await new Promise((r) => setTimeout(r, intervalMs));
}
throw new Error("Workflow timed out");
}
// Usage
const result = await client.workflows.start("my-workflow", { data: "..." });
const output = await waitForCompletion(client, "my-workflow", result.runKey);
Integrations API
Proxy HTTP calls through the tenant-specific integrations defined in the admin UI:
const response = await client.integrations.call({
integrationKey: "weather-api",
method: "GET",
path: "/current",
query: { city: "San Francisco" },
headers: { "X-Debug": "true" },
});
console.log(response.status); // Upstream status code
console.log(response.body); // JSON returned by the provider
console.log(response.traceId); // Proxy trace id (mirrors admin logs)
console.log(response.durationMs); // Milliseconds spent in the worker
- `integrationKey` – slug chosen in the admin UI. Keys are per-app.
- `method`, `path`, `query`, `headers`, `body` – forwarded exactly as provided, but the worker enforces the integration's allowlisted methods/paths, forwarded headers/query params, max body size, and timeout.
- Success responses include the upstream `status`, `headers`, `body`, optional `traceId`, and `durationMs`.
Errors
client.integrations.call throws `JsBa
