@otheracc/sdk — v0.4.4
Worker SDK for otheracc — lease, profile download, and launch-spec generation for cloak/camoufox browsers.
@otheracc/sdk (Node.js)
Worker-side SDK for otheracc. The headline API is
client.runSession(cfg, fn) — the SDK acquires a lease, materializes
the user-data-dir from the captured profile, asks your injected
launcher to start the browser, runs the periodic probe rider, then
tears everything down (auto-uploading the captured profile on a
successful release). Your worker fn just gets a BrowserContext and
does business logic.
Install
pnpm add @otheracc/sdk
# or: npm install @otheracc/sdk
Node ≥ 22 required (uses global fetch, await using). For browser launch you also need one of:
- cloakbrowser (npm) — Chromium-based stealth
- camoufox-js (npm) — Firefox-based stealth
These are not SDK dependencies — the SDK deliberately doesn't
launch the browser for you (see "Why no sdk.launch()" below).
Quick start
import { OtheraccClient, cloakLauncher } from '@otheracc/sdk';
const client = new OtheraccClient({
baseUrl: 'https://otheracc.internal:3100',
workerId: 'crawler-7',
});
const result = await client.runSession(
{
request: {
platform: 'doubao',
idempotencyKey: `task-${taskId}`,
},
launcher: cloakLauncher,
// optional — defaults shown:
// probe: { onStart: false, onEnd: true },
// checkpoint: { onEnd: false }, // success-path auto-upload covers it
// userDataRoot: os.tmpdir(),
},
async ({ context, lease, lastProbe }) => {
const page = await context.newPage();
await page.goto('https://www.doubao.com');
// ... business logic ...
return await page.title();
// Return → SDK runs final probe → close ctx → release('success')
// → auto-upload profile → rm -rf udd
},
);
if (result === undefined) {
// No eligible account matched the selector. `runSession` did not
// create a lease and there's nothing to release.
}
That's the whole worker shape. No manual acquire / downloadProfile / release, no signal handlers, no profile re-upload call.
cloakLauncher is a bundled launcher around cloakbrowser. Install
the driver explicitly — cloakbrowser is an optional peer
dependency so workers that only need acquire / introspection don't
pay the ~200 MB Chromium download:
npm install @otheracc/sdk cloakbrowser
Need headless? Use the factory:
import { createCloakLauncher } from '@otheracc/sdk';
const launcher = createCloakLauncher({ headless: true });
Need camoufox or a custom stealth driver? Inject your own
LauncherFn — see "Custom launcher" below.
Custom launcher
The SDK only bundles cloakLauncher for the cloak engine. Workers
that need camoufox / playwright-extra / raw CDP write their own
LauncherFn:
import type { BrowserContext } from 'playwright-core';
import { OtheraccClient, type LauncherFn } from '@otheracc/sdk';
import { launchPersistentContext } from 'cloakbrowser';
const customLauncher: LauncherFn = async (spec, { userDataDir }) => {
if (spec.engine !== 'cloak') {
throw new Error(`unsupported engine: ${spec.engine}`);
}
return (await launchPersistentContext({
userDataDir,
headless: true,
viewport: spec.viewport,
args: [...spec.args, '--my-custom-flag'],
...(spec.proxy !== null && { proxy: spec.proxy }),
})) as BrowserContext;
};
await client.runSession({ request, launcher: customLauncher }, fn);
API reference
new OtheraccClient(opts)
| option | type | default | notes |
|---|---|---|---|
| baseUrl | string | (required) | https://host:port of the otheracc API |
| workerId | string | — | Default worker identity. Attached as x-operator: worker:<id> |
| timeoutMs | number | 10000 | Per-request timeout for JSON calls |
| downloadTimeoutMs | number | 300000 | Timeout for downloadProfile (5 min) |
| fetch | typeof fetch | globalThis.fetch | Override for tests / custom agents |
| leaseOptions | LeaseOptions | {} | Propagated to every Lease the client produces |
client.runSession(cfg, fn) → Promise<T | undefined>
The recommended entry point. Resolves to:
- fn's return value on a clean run
- undefined if no account matched the selector (no lease was created — nothing to release)
Throws whatever fn threw, after marking the lease as 'error',
running teardown, and releasing without auto-upload. Throws
IdempotencyKeyExhaustedError (etc.) directly when acquire itself
fails.
interface RunSessionConfig {
/** Forwarded to client.acquire. */
request: AcquireRequest;
/** Worker-supplied browser launcher. See `LauncherFn` below. */
launcher: LauncherFn;
/** Probe rider config. `false` disables entirely.
* Default: `{ onStart: false, onEnd: true }`. */
probe?: ProbeConfig | false;
/** Profile re-upload cadence. `false` disables periodic and onEnd.
* Default: `{ onEnd: false }` — the success-path auto-upload on
* release already covers end-of-session. */
checkpoint?: CheckpointConfig | false;
/** Where to materialize the per-session user-data-dir.
* Default: `os.tmpdir()`. Path is
* `<userDataRoot>/otheracc-<accountId>-<engine>/`. */
userDataRoot?: string;
/** Skip the cache version check, force re-download from S3. */
forceFreshProfile?: boolean;
/** Don't `rm -rf` the udd on exit (debugging). */
keepUserDataDir?: boolean;
}
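The udd path convention above can be computed up front when you need to pre-stage or inspect a dir. A minimal sketch following the `<userDataRoot>/otheracc-<accountId>-<engine>/` layout — an illustrative helper, not an SDK export:

```typescript
import * as os from 'node:os';
import * as path from 'node:path';

// Where the SDK materializes the per-session user-data-dir, per the
// `<userDataRoot>/otheracc-<accountId>-<engine>/` convention documented
// above. Illustrative helper — not an actual SDK export.
function userDataDirFor(accountId: string, engine: string, root: string = os.tmpdir()): string {
  return path.join(root, `otheracc-${accountId}-${engine}`);
}
```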
interface SessionContext {
/** Live BrowserContext from your launcher. */
context: BrowserContext;
/** The lease handle. Direct API still works
* (`markOutcome` / `report` / `release`) — runSession just
* orchestrates the lifecycle on top. */
lease: Lease;
/** Local udd path the SDK materialized + auto-uploads on success. */
userDataDir: string;
/** Latest probe outcome, refreshed by the rider.
* `null` until the first probe fires. */
lastProbe: () => { matched: boolean; at: Date } | null;
}
type SessionFn<T> = (s: SessionContext) => Promise<T>;
LauncherFn
type LauncherFn = (
spec: LaunchSpec,
opts: { userDataDir: string },
) => Promise<BrowserContext>;
Receives the engine-tagged LaunchSpec produced by buildLaunchSpec.
Dispatch on spec.engine:
- engine: 'cloak' (covers 'chrome') — Chromium-family. Spread viewport / args / proxy (structured {server, username, password} — cloakbrowser speaks SOCKS5-auth natively).
- engine: 'camoufox' — Firefox fork. Use the window tuple + raw proxy URL string. Playwright-Firefox can't speak SOCKS5-auth, so a socks5://user:pass@… upstream needs a local unauthenticated tunnel; rewrite proxy to socks5://127.0.0.1:<tunnelPort> before handing to Camoufox({...}). (The otheracc server's src/browser/camoufox/socks-tunnel.ts is a known-good impl you can port.)
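The camoufox proxy rewrite above boils down to a URL check plus substitution. A minimal sketch of that decision, using a local stand-in type — the helper name and the lite type are illustrative, not SDK exports:

```typescript
// Minimal stand-in for the camoufox branch of LaunchSpec — only the
// field this helper needs. Not the SDK's actual type.
type CamoufoxSpecLite = { engine: 'camoufox'; proxy: string | null };

// Returns the proxy URL to hand to Camoufox({...}): an authenticated
// socks5 upstream is rewritten to the local unauthenticated tunnel;
// anything else passes through untouched.
function resolveCamoufoxProxy(spec: CamoufoxSpecLite, tunnelPort: number): string | null {
  if (spec.proxy === null) return null;
  const u = new URL(spec.proxy);
  const needsTunnel = u.protocol === 'socks5:' && u.username !== '';
  return needsTunnel ? `socks5://127.0.0.1:${tunnelPort}` : spec.proxy;
}
```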
ProbeConfig / CheckpointConfig
Two independent timers wired into runSession. Both run inside
the worker process; each serves a different purpose.
interface ProbeConfig {
/** Periodic re-probe interval (ms). 0 / undefined = disabled. */
intervalMs?: number;
/** Probe immediately after launcher returns. Useful to fail fast
* if the captured profile no longer carries a logged-in session. */
onStart?: boolean;
/** Probe right before browser close. Records the final login
* state for the dashboard / scheduler. Default true. */
onEnd?: boolean;
}
interface CheckpointConfig {
/** Periodic profile re-upload interval (ms). Default disabled. */
intervalMs?: number;
/** Re-upload right before release. Default false — the auto-upload
* on `release('success')` already covers end-of-session. Only
* meaningful for non-success teardown paths where someone wants a
* manual-recovery snapshot. */
onEnd?: boolean;
}
What the probe rider actually does each tick:
- Reads the keys named by the account's success_probe out of the live BrowserContext — listed cookies, localStorage keys, and dom_selectors (one boolean per selector, true if the selector resolves on the active page).
- POSTs the raw observation to /v1/lease/<token>/probe-result.
- Server evaluates against the configured rules (it is the single source of truth) and updates last_probe_at + the LoginValid condition on the account.
- SDK stores {matched, at} into lastProbe().
When the account has no success_probe configured, the rider is a
no-op (nothing meaningful to evaluate). Probe failures (network blip,
page closed mid-eval) are non-fatal and swallowed — the next tick will
try again.
What the checkpoint does each tick: re-runs client.uploadProfile
against the live udd, server bumps account.profile_version so the
next acquire re-downloads. Only worth enabling for long sessions
where you want intermediate checkpoints in case the worker dies before
clean release.
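Wiring both timers into runSession looks roughly like this — a sketch for a long session, assuming the client and launcher from Quick start; the interval values and idempotency key are illustrative:

```typescript
import { OtheraccClient, cloakLauncher } from '@otheracc/sdk';

const client = new OtheraccClient({
  baseUrl: 'https://otheracc.internal:3100',
  workerId: 'crawler-7',
});

await client.runSession(
  {
    request: { platform: 'doubao', mode: 'long', idempotencyKey: 'task-long-1' },
    launcher: cloakLauncher,
    // Fail fast on a stale profile, re-probe every 5 min, final probe on end.
    probe: { onStart: true, intervalMs: 5 * 60_000, onEnd: true },
    // Intermediate snapshots every 30 min in case the worker dies mid-run.
    checkpoint: { intervalMs: 30 * 60_000 },
  },
  async ({ context }) => {
    // ... long-running business logic ...
  },
);
```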
Lease handle
Available as s.lease inside the worker fn, or directly from
client.acquire(...) if you're using the lower-level API.
| member | notes |
|---|---|
| token / epoch / accountId / engine / mode | Current state |
| profile / proxy / credentials / successProbe | Resource bundles; may be null |
| expireAt / maxExpireAt / remainingBudgetMs | Deadlines |
| reused / isClosed | Lifecycle flags |
| snapshot() | Plain LeaseSnapshot for serialization |
| markOutcome(outcome) / getOutcome() | Record / read disposition (success / error / …) |
| attachUserDataDir(udd) | Hand the SDK a udd so a successful release auto-uploads. runSession already calls this — direct callers wire it explicitly. |
| renew(ttlMs?) | Manual renew. Usually unnecessary — long-mode auto-heartbeats. |
| report(kind, detail?) | Out-of-band signal (captcha, banned, …) |
| release(outcome?) | Explicit release. Idempotent. |
| [Symbol.asyncDispose] | await using hook — releases with the recorded outcome. |
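For direct callers (no runSession), the attachUserDataDir wiring from the table looks roughly like this — a sketch built from the lower-level calls documented below; the udd path is illustrative:

```typescript
// Direct lifecycle: acquire → download → attach → work → release.
// A successful release auto-uploads because the udd was attached.
const lease = await client.acquire({ platform: 'doubao', idempotencyKey: 'task-7' });
if (lease) {
  const udd = '/tmp/otheracc-demo-udd'; // illustrative path
  await client.downloadProfile(lease, udd);
  lease.attachUserDataDir(udd);
  try {
    // ... launch a browser against `udd`, do work ...
  } catch (err) {
    lease.markOutcome('error'); // failure paths skip the auto-upload
    throw err;
  } finally {
    await lease.release(); // idempotent; uses the recorded outcome
  }
}
```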
Probe results from inside the worker fn
async ({ context, lease, lastProbe }) => {
// After onStart probe (when probe.onStart === true), check before
// doing anything risky.
const start = lastProbe();
if (start && !start.matched) {
lease.report('login_expired');
lease.markOutcome('error');
throw new Error('captured profile no longer logged in');
}
// ... business logic ...
}
You can also call the probe directly from inside the fn at decision points the rider's cadence wouldn't catch:
import { runProbeOnce } from '@otheracc/sdk';
const observed = await runProbeOnce(client, lease, context);
if (!observed.matched) { /* … */ }
Auto profile checkpoint
When lease.attachUserDataDir(udd) has been called (runSession
does this for you on launcher return) and the lease later releases
with success outcome (the default — await using clean exit,
or release() without an explicit non-success outcome), the SDK:
- Tars + gzips udd (skipping the .profile_version cache marker)
- Streams it to PUT /v1/lease/<token>/profile.tar.gz — server KMS-encrypts, writes to S3, bumps account.profile_version
- Then posts /release as usual
Failure outcomes (error / banned / timeout) skip the upload
on purpose — a worker that hit captcha / antibot shouldn't checkpoint
its potentially-poisoned cookies forward into the next acquire.
runSession translates fn throws into markOutcome('error') for
exactly this reason.
If the upload itself fails (S3 down, body too large), the SDK logs
via leaseOptions.logger.warn(...) and proceeds with /release
regardless — the wake proxy must come back even when checkpointing
didn't.
Lower-level building blocks
runSession is layered on top of these. Reach for them when you need
to compose differently — e.g. handing a LeaseSnapshot across a
process boundary, or running multiple browsers off one lease.
client.acquire(req) → Promise<Lease | null>
Returns null when no eligible account matches. Returns a Lease
with reused: true when the idempotencyKey already had a live
lease — treat identically to a fresh lease.
client.withLease(req, fn) → Promise<T | undefined>
runSession minus the browser orchestration. Useful when the worker
isn't actually launching a browser — credential introspection,
profile-only reads, etc.
const result = await client.withLease(
{ platform: 'doubao', idempotencyKey: 'task-42' },
async (lease) => {
// ... use lease.profile / lease.credentials ...
},
);
client.downloadProfile(lease, destDir, opts?) → Promise<boolean>
Streams the server-side-decrypted profile tarball into destDir
(creates if missing). Returns false without an HTTP call when
lease.profile === null (account never captured — launch with an
empty dir).
Handles auth implicitly — the lease token in the URL is the credential.
Honors a per-pod cache marker (.profile_version) so repeat downloads
of the same profile_version against the same dir are no-ops; pass
{ forceFresh: true } to bypass.
client.uploadProfile(snapOrLease, srcDir)
Tars + gzips srcDir and streams it to the upload endpoint. Server
KMS-encrypts + writes to S3 + bumps profile_version. The
attachUserDataDir auto-upload calls this internally.
buildLaunchSpec(source, opts) → LaunchSpec
Engine-tagged description of how to launch the browser:
- engine: 'cloak' (also covers 'chrome') → Chromium-family; structured proxy + args with --window-size
- engine: 'camoufox' → Firefox fork; window tuple + raw proxy URL string
Dispatch via spec.engine. Accepts either a live Lease or its
LeaseSnapshot for cross-process handoff.
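Cross-process handoff, sketched against the signatures above — the child-process plumbing, message shape, and worker.js path are illustrative, the opts argument is left empty here, and LeaseSnapshot is assumed to be an exported type:

```typescript
import { buildLaunchSpec, type LeaseSnapshot } from '@otheracc/sdk';
import { fork } from 'node:child_process';

// Parent: acquire, then ship the plain snapshot over IPC.
// (`client` as configured in Quick start.)
const lease = await client.acquire({ platform: 'doubao', idempotencyKey: 'task-9' });
if (lease) {
  const child = fork('./worker.js'); // illustrative worker entry point
  child.send({ snapshot: lease.snapshot() });
}

// Child: rebuild the launch spec from the snapshot — no live Lease needed.
process.on('message', (msg) => {
  const { snapshot } = msg as { snapshot: LeaseSnapshot };
  const spec = buildLaunchSpec(snapshot, {});
  // ... dispatch on spec.engine, launch, run ...
});
```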
Bundled launchers
The SDK ships canonical launchers for engines we run in production
(currently just cloak; cloakLauncher and its createCloakLauncher
factory are the only bundled launchers). Drop in the default and
runSession's launcher slot is a one-liner.
We considered going further — client.launch() that owned the whole
spawn — and decided against, for the same reasons we kept
runSession's launcher field worker-supplied:
- Transitive dependency bloat. Chromium / Firefox stealth forks weigh hundreds of MB. Bundled launchers ship as optional peer deps — workers that only call acquire (dashboards, audits) don't pay the install cost. npm install @otheracc/sdk alone doesn't pull cloakbrowser.
- Driver choice belongs to the worker. Want playwright-extra stealth stacked on top? Raw Chromium via CDP? Your own cloakbrowser wrapper? Inject a custom LauncherFn (see "Custom launcher" above) and the SDK gets out of the way.
- Engine heterogeneity. cloakbrowser launches Chromium via Playwright; camoufox-js wraps a custom Firefox binary with a different invocation shape. Each launcher lives in its own module so adding camoufoxLauncher later doesn't refactor cloakLauncher.
runSession's launcher field stays mandatory — cloakLauncher is
just an import-and-go default.
Error handling
The SDK maps the server's RFC 7807 Problem Details bodies to a typed hierarchy:
import {
OtheraccError, // base class
OtheraccNetworkError, // transport failure (always retryable)
IdempotencyKeyExhaustedError, // 409 — key already pointed at a released lease
EpochMismatchError, // 409 — concurrent renew/release won the race
LeaseInvalidatedError, // 409 — server force-released (banned, reaper, …)
LeaseMaxDurationExceededError, // 409 — absolute deadline hit on renew
LeaseExpiredError, // 409 — short-mode lease past expireAt without renew
LeaseNotFoundError, // 404 — lease gone (reaper cleaned up)
LeaseClosedError, // local — reused a disposed handle
ValidationError, // 400 — request body failed server validation
} from '@otheracc/sdk';
Every OtheraccError carries status, type (problem URI),
body (parsed response), and a retryable computed property. The
SDK already handles small retries internally for report and
release (network errors only).
Long-mode leases auto-heartbeat at TTL/3 with a [1s, 3s] backoff
on transient renew failures (configurable via
leaseOptions.heartbeatBackoffMs). On terminal renew errors
(LeaseInvalidatedError, LeaseMaxDurationExceededError,
EpochMismatchError) the lease is force-closed and the optional
leaseOptions.onEvicted(info) handler fires. Inside runSession,
that surfaces as the worker fn's await page.… rejecting with
LeaseClosedError on the next call that touches the (already
released) lease.
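The retryable flag composes with your own retry loop for calls the SDK doesn't retry itself. A minimal sketch — the generic helper below is not an SDK export, and the default [1s, 3s] schedule simply mirrors the heartbeat backoff described above:

```typescript
// Retry an async call on errors flagged `retryable`, walking a fixed
// backoff schedule; non-retryable errors (and exhaustion) rethrow.
async function withRetries<T>(
  fn: () => Promise<T>,
  backoffMs: number[] = [1_000, 3_000],
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const retryable = (err as { retryable?: boolean }).retryable === true;
      if (!retryable || attempt >= backoffMs.length) throw err;
      await new Promise((resolve) => setTimeout(resolve, backoffMs[attempt]));
    }
  }
}
```

e.g. `await withRetries(() => lease.renew())` for a short-mode worker that wants transient-failure tolerance on manual renews.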
Lease lifecycle reminder
runSession matches every successful acquire with a release —
including the throw / SIGTERM paths. If you reach for the lower-level
API, the rule still holds: every successful acquire must
eventually be matched by a release. The server's reaper will
force-release stale leases, but TTL is typically minutes — don't rely
on it as the normal path.
await using lease = await client.acquire(...);
if (!lease) return;
try {
// work; success is the default outcome
} catch (err) {
lease.markOutcome('error'); // or 'banned' for risk-positive cases
throw err;
}
// `await using` dispose runs lease.release() here, with the recorded outcome.
Lease modes: short vs long
Both modes share the same per-renew TTL (default 2 min, capped at 10 min). The difference is cumulative lifetime cap and who heartbeats:
| | short (default) | long |
| ------------------------ | ------------------------------ | ------------------------------------- |
| Per-renew TTL default | 2 min | 2 min |
| Per-renew TTL max | 10 min | 10 min |
| Total lease lifetime | 10 min (hard cap) | 2 hr (hard cap) |
| SDK auto-heartbeat | ❌ — worker manages renew | ✓ every TTL/3 (~40 s) until 2 hr cap |
Pick the mode by expected task duration:
// Task < 10 minutes — default mode. SDK does NOT auto-renew. Either
// finish within the initial 2-min TTL, request the full 10 min upfront,
// or call lease.renew() yourself before TTL elapses.
runSession({ request: { platform: 'doubao', mode: 'short' /* or omit */, requestedTtlMs: 600_000 }, ... });
// Task > 10 min, ≤ 2 hr — SDK auto-heartbeats in the background.
runSession({ request: { platform: 'doubao', mode: 'long' }, ... });
Beyond max_expire_at the lease can no longer be renewed; long-mode
SDK fires onEvicted({ reason: 'max-duration-exceeded' }).
Mode does NOT affect account selection, probe / cooldown / abnormal recovery, proxy stickiness, or any other lease-lifecycle concern — both modes share the same downstream semantics.
Development
pnpm install
pnpm typecheck
pnpm test:run
pnpm build
The SDK tree is self-contained under sdks/nodejs/ in the otheracc
monorepo.
