# @titon-network/atlas-sdk

v0.1.2 — TypeScript SDK for Atlas — TON-native threshold-BLS key backbone. Admit verifiers, receive GroupKeySync fan-outs, register automaton pkShares, bootstrap + rotate group keys. **TESTNET ONLY — TSA AI AUDITED.**
Audience: Titon dev team building the next sibling product against Atlas. If you're working in `titon/fortuna/`, `titon/phoebe/`, or scaffolding a brand-new `titon/<product>/` that needs threshold-BLS verification, this SDK is your contract handle, testing fixture, event decoder, and BLS off-chain helper rolled into one. Not for end-user dapp authors — they consume the product SDKs (e.g. `@titon-network/fortuna-sdk` for VRF). Atlas is invisible infrastructure to them.
TypeScript surface for Atlas — TON's threshold-BLS key backbone. One contract that owns the current group public key, the operator pkShare registry, timelocked rotation, and the verifier admission + GroupKeySync fan-out surface. Every Titon product that does threshold signatures consumes Atlas instead of re-implementing these primitives.
Think of Atlas as EigenLayer for threshold-BLS, at the key-registry layer:
- ForgeTON stakes the operators. Atlas mirrors the operator lifecycle via `AutomatonSync` fan-out.
- Atlas owns the `groupPk` + pkShare registry. Each product admits itself via `SetVerifier` and caches the key.
- Each product verifies sigs against the cached key and decides its own slashing reasons (sent through ForgeTON, not Atlas).
Atlas never slashes. Slashing is the consuming product's decision.
```sh
# In a new product repo, this SDK lives at file:../atlas/sdk via the workspace's
# zero-friction sibling-repo convention (per workspace CLAUDE.md):
pnpm add file:../atlas/sdk
```

`@ton/core` is a peer dependency — bring your own version (≥ 0.63.0).
## The Titon-team product-build playbook

You're starting `titon/<product>/`. Four touch-points:
### 1. Verifier receiver — `<product>/contracts/<product>.tolk`

Implement a 0x51 `GroupKeySync` receiver that caches the key. Sender-pinned to `storage.atlas`, immutable post-deploy. Copy the wire struct from `examples/verifier-template.tolk` — do not retype it. Bytewise drift means a silent `CellUnderflow` every time a fan-out lands.
```tolk
struct (0x00000051) GroupKeySync {
    groupId: uint8
    groupPk: bits384
    groupEpoch: uint32
    threshold: uint8
    memberCount: uint8
}

fun handleGroupKeySync(msg: GroupKeySync, sender: address) {
    assert (sender == storage.atlas) throw E_NOT_ATLAS; // #1 footgun if missing
    assert (msg.groupId == 0) throw E_BAD_GROUP;        // v1 single-group
    storage.groupPk = msg.groupPk;
    storage.groupEpoch = msg.groupEpoch;
    storage.threshold = msg.threshold;
    storage.memberCount = msg.memberCount;
    storage.save();
}
```

The full template covers the `SyncRequest` bootstrap pull + a stub `handleFulfill` that runs `BLS_VERIFY(storage.groupPk, msg, sig)` with the receiver-safety banner up top. Or run `npx atlas generate verifier --type groupkey-cache`.
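Since bytewise drift is the failure mode, it can help to sanity-check the wire layout off-chain. A standalone sketch of the arithmetic (field widths taken from the struct above; the constant and its name are illustrative, not part of the SDK):

```typescript
// Field widths (in bits) of the GroupKeySync body, per the struct above.
// The 32-bit op prefix (0x00000051) is part of the body too.
const GROUP_KEY_SYNC_FIELDS: Record<string, number> = {
  op: 32,          // 0x00000051
  groupId: 8,      // uint8
  groupPk: 384,    // bits384 — 48-byte compressed G1 point
  groupEpoch: 32,  // uint32
  threshold: 8,    // uint8
  memberCount: 8,  // uint8
};

// Total body size; it stays well under TVM's 1023-bit cell data limit, so the
// whole message fits one flat cell — any extra or missing bits shifts every
// later field, which is exactly the silent-drift failure described above.
const groupKeySyncBits = Object.values(GROUP_KEY_SYNC_FIELDS).reduce((a, b) => a + b, 0);

console.log(groupKeySyncBits); // 472
```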
### 2. Atlas pin — same file

```tolk
struct AtlasStorage { // your product's storage, abbreviated
schemaVersion: uint8
atlas: address // pinned at deploy; immutable post-deploy
forgeton: address // ditto if your product slashes
groupPk: bits384
groupEpoch: uint32
threshold: uint8
memberCount: uint8
// ...product-specific fields
}
```

A mutable `atlas` field is a spoof vector. Only ever change it via your product's own 3-step timelocked code upgrade.

### 3. Slash through ForgeTON, not Atlas

If your product detects a fault (missed fulfill, stale feed, invalid VRF), send `Slash` (op 0x14) directly to ForgeTON with your product's own `reason: uint32` namespace:
```tolk
struct (0x00000014) Slash {
    automaton: address
    reason: uint32 // YOUR product's namespace (e.g. 1 = INVALID_VRF)
    ctx: uint64    // YOUR product's context (e.g. requestId)
    amount: coins  // capped at consumer.maxSlashPerEvent
}
```

Reference: `titon/kronos/contracts/kronos-registry.tolk` (`REASON_MISSED_EXECUTION = 1`). Atlas is out of the slash path entirely.
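Because `amount` is capped at `consumer.maxSlashPerEvent`, product-side code typically clamps before sending rather than letting the pool reject or truncate. A minimal sketch of that clamp in plain bigint nanoton math (the function name and the 5-TON cap are illustrative, not SDK API):

```typescript
// Clamp a requested slash to the pool-enforced per-event cap.
// All values in nanoton (1 TON = 1_000_000_000n).
function clampSlashAmount(requested: bigint, maxSlashPerEvent: bigint): bigint {
  return requested < maxSlashPerEvent ? requested : maxSlashPerEvent;
}

const maxSlashPerEvent = 5_000_000_000n; // e.g. a 5 TON cap (illustrative)

console.log(clampSlashAmount(7_000_000_000n, maxSlashPerEvent)); // 5000000000n — capped
console.log(clampSlashAmount(1_000_000_000n, maxSlashPerEvent)); // 1000000000n — under cap, unchanged
```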
### 4. Tests — `<product>/tests/Integration.spec.ts`

```typescript
import { Blockchain } from '@ton/sandbox';
import { deployAtlasFixture } from '@titon-network/atlas-sdk/testing';
import { ForgeTON, loadForgetonCode } from '@titon-network/forgeton-sdk';

const blockchain = await Blockchain.create();
const { atlas, admitVerifier, rotate, groupSk, groupPk } =
  await deployAtlasFixture(blockchain);

// Deploy your product, admit it as a verifier:
await admitVerifier({ contract: myProduct.address });

// Exercise rotation — your product's 0x51 receiver should now hold newPk at newEpoch
const { sk: newSk, pk: newPk, newEpoch } = await rotate();
```

See `atlas/tests/Integration.spec.ts` for the cross-contract test pattern with a real ForgeTON.
That's the entire integration. Key management, rotation, operator set drift, fan-out — handled by Atlas + the pool. You write the verifier receiver + your product-specific fulfill logic.
## Public surface

```
┌─────────────────────────────────────────────────────────────┐
│ explainError AtlasError summarizeTx formatTxSummary │ diagnostics
├─────────────────────────────────────────────────────────────┤
│ decodeEvent decodeEvents tryDecodeEvent │ events
├─────────────────────────────────────────────────────────────┤
│ Atlas newAtlas ATLAS_DEFAULTS │ contract + factory
├─────────────────────────────────────────────────────────────┤
│ generateGroupKey signMessage aggregateSignatures │ BLS helpers (off-chain aggregator)
├─────────────────────────────────────────────────────────────┤
│ deployAtlasFixture │ testing/ subpath
├─────────────────────────────────────────────────────────────┤
│ OP ERR loadAtlasCode ATLAS_TESTNET │ constants + artifacts
└─────────────────────────────────────────────────────────────┘
```

No façade, no fluent builder — the surface is small by design. Atlas has one role (threshold-BLS key backbone); integrate at the ABI level.

## Fan-out cost parameters — sizing your tests' value
| Item | Default | Updatable? | Paid for |
|------|---------|------------|----------|
| `verifierSyncValue` | 0.05 TON | via `UpdateConfig` | Per-verifier forward on fan-out (Atlas → your verifier receiver) |
| `minStorageReserve` | 0.1 TON | via `UpdateConfig` | Atlas's own rent floor (enforced before any outbound) |
| `minGasForRegister` | 0.05 TON | via `UpdateConfig` | `RegisterBlsShare` / `DeregisterBlsShare` gas floor |
| `minGasForSyncRequest` | 0.05 TON | via `UpdateConfig` | Verifier-inbound `SyncRequest` gas floor |
| `minGasForRotation` | 0.1 TON | deploy-only (code upgrade to change) | Base overhead on `Execute` rotation (plus per-verifier fan-out) |
On rotation `Execute`, the value Atlas needs is `minGasForRotation + verifierCount * verifierSyncValue`. Always size via `atlas.getRequiredRotationValue()` — never hardcode (an owner-side `UpdateConfig` would silently break you).
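The formula is worth pinning down in nanoton terms. A standalone sketch of the arithmetic only — in real code the number must come from `atlas.getRequiredRotationValue()`, since `verifierSyncValue` is owner-updatable, and this helper is illustrative:

```typescript
// Rotation Execute must carry base gas plus one forward per admitted verifier.
// All values in nanoton (1 TON = 1_000_000_000n).
function requiredRotationValue(
  minGasForRotation: bigint, // deploy-time config; 0.1 TON default from the table
  verifierSyncValue: bigint, // per-verifier forward; 0.05 TON default
  verifierCount: bigint,
): bigint {
  return minGasForRotation + verifierCount * verifierSyncValue;
}

// With the table's defaults and 3 admitted verifiers:
console.log(requiredRotationValue(100_000_000n, 50_000_000n, 3n)); // 250000000n = 0.25 TON
```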
## Quickstart — typical flow when bringing up a new product locally

### Spin up Atlas in your tests
```typescript
import { Blockchain } from '@ton/sandbox';
import { deployAtlasFixture } from '@titon-network/atlas-sdk/testing';

const blockchain = await Blockchain.create();
const { atlas, admitVerifier, rotate, automatonSync, bootstrapSoloGroup, groupSk, groupPk } =
  await deployAtlasFixture(blockchain);
// atlas is live: groupPk published, one operator registered. Ready for admit + fan-out tests.
```

### Deploy a fresh Atlas (production / testnet)
This is rarely needed during product development — one Atlas instance serves N products. Here for completeness:
```typescript
import { TonClient } from '@ton/ton';
import { toNano } from '@ton/core';
import { newAtlas } from '@titon-network/atlas-sdk';

const tonClient = new TonClient({ endpoint: 'https://testnet.toncenter.com/api/v2/jsonRPC' });
const atlas = tonClient.open(newAtlas({
  owner: ownerAddress,
  forgeton: forgetonPoolAddress, // pinned at deploy; repoint via code upgrade
}));
await atlas.sendDeploy(ownerSender, toNano('1'));
console.log('Atlas deployed at', atlas.address.toString());
```

Post-deploy:
- Run the off-chain DKG ceremony across the automaton set → aggregate `groupPk` (G1, 48 bytes) + per-automaton `sk_i` + `pkShare_i`. For dev / demos, use `generateGroupKey()` (solo mode below).
- Publish `groupPk` via `atlas.sendPublishGroupKey(...)`.
- Each automaton calls `RegisterBlsShare` after its `AutomatonSync` mirrors in from ForgeTON.
- Admit your product(s) via `atlas.sendSetVerifier(owner, { contract: myProduct.address, isActive: true, subscribedGroups: 1n })`.

### Solo-oracle mode (tests + dev / demo)
```typescript
import { generateGroupKey, signMessage } from '@titon-network/atlas-sdk';

const { sk, pk } = generateGroupKey(); // 32-byte sk, 48-byte G1 pk
// Publish pk as the groupPk AND register the same key as the sole pkShare.
// Sign with signMessage(sk, msg); the BLS_DST_G2_POP DST is bound in.
```

Not for mainnet value. One secret key = one point of failure. This is the standard pattern in every Titon product's unit-test suite. See `skills/atlas-deploy.md` for a real DKG bootstrap.
### Rotate the group key

Owner-only, 3-step, 24 h timelock:
```typescript
await atlas.sendPause(owner, { value: toNano('0.05') });
await atlas.sendProposeGroupKeyRotation(owner, {
  value: toNano('0.1'),
  newGroupPk: Buffer.from(newPk),
  newThreshold,
  newMemberCount,
  delaySeconds: 24 * 3600,
});

// ...24h later...
const value = await atlas.getRequiredRotationValue(); // minGasForRotation + verifierCount * verifierSyncValue
await atlas.sendExecuteGroupKeyRotation(owner, { value });
await atlas.sendUnpause(owner, { value: toNano('0.05') });
```

Execute fans out `GroupKeySync` to every admitted + subscribed verifier. Operators must re-register their new-epoch pkShares before the group accepts fulfills again — the dense operator index is cleared on rotation.
### Decode Atlas events (during integration tests + telemetry)

```typescript
import { decodeEvents, AtlasEvent } from '@titon-network/atlas-sdk';

for (const tx of result.transactions) {
  const externalOuts = [...tx.outMessages.values()]
    .filter((m) => m.info.type === 'external-out')
    .map((m) => m.body);
  for (const ev of decodeEvents(externalOuts)) {
    switch (ev.kind) {
      case 'GroupKeyPublished':
        console.log(`group ${ev.groupId} bootstrapped at epoch ${ev.groupEpoch}`);
        break;
      case 'VerifierSynced':
        console.log(`fan-out landed at ${ev.verifier}`);
        break;
      case 'OperatorActivationChanged':
        console.log(`${ev.automaton} isActive=${ev.isActive} cause=${ev.cause}`);
        break;
    }
  }
}
```

Or use the one-shot summarizer:
```typescript
import { summarizeTxs, formatTxSummary } from '@titon-network/atlas-sdk';

for (const s of summarizeTxs(result.transactions)) {
  console.log(formatTxSummary(s));
  // [ok] events: GroupKeyPublished, VerifierSynced
  // [fail] exit 220 RotationRequiresPause — ProposeGroupKeyRotation requires …
}
```

### Interpret exit codes during cross-contract tests
```typescript
import { explainError } from '@titon-network/atlas-sdk';

const e = explainError(220);
// {
//   code: 220,
//   origin: 'atlas',
//   name: 'RotationRequiresPause',
//   message: 'ProposeGroupKeyRotation / ExecuteGroupKeyRotation requires the contract to be paused.',
//   hint: 'Owner must sendPause first. Pause prevents new operator ops + request flows from racing the epoch bump.',
// }
```

Atlas surfaces TVM's common codes too (9 `CellUnderflow`, 13 `OutOfGas`, 37 `NotEnoughTon`, 0xFFFF `UnknownOpcode`).
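For quick reference, those TVM codes can be sketched as a lookup of the same shape `explainError` returns. This is a toy subset for illustration, not the SDK's actual table:

```typescript
// A toy subset of the diagnostic table, shaped like explainError's result.
type ErrorInfo = { code: number; origin: 'tvm' | 'atlas'; name: string };

const TVM_COMMON: ReadonlyMap<number, ErrorInfo> = new Map([
  [9,      { code: 9,      origin: 'tvm', name: 'CellUnderflow' }], // wire-struct drift lands here
  [13,     { code: 13,     origin: 'tvm', name: 'OutOfGas' }],
  [37,     { code: 37,     origin: 'tvm', name: 'NotEnoughTon' }],
  [0xffff, { code: 0xffff, origin: 'tvm', name: 'UnknownOpcode' }],
]);

console.log(TVM_COMMON.get(9)?.name); // CellUnderflow
```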
## CLI

The SDK ships a small `atlas` CLI for local introspection + product scaffolding. All commands accept `--json` for machine-readable output.

### Scaffolding
```sh
$ npx atlas init                                      # write ATLAS.md (agent context for your product repo)
$ npx atlas generate verifier --type groupkey-cache   # scaffold a Tolk verifier
$ npx atlas generate verifier --out myVerifier.tolk --force
```

### Introspection (no network)
```sh
$ atlas explain 220
exit 220 (atlas): RotationRequiresPause
ProposeGroupKeyRotation / ExecuteGroupKeyRotation requires the contract to be paused.
hint: Owner must sendPause first. Pause prevents new operator ops …

$ atlas schema
SDK expects:
  ATLAS_STORAGE_VERSION: 1
  ...
  BLS:
    pubkey bytes: 48 (G1)
    sig bytes: 96 (G2)
    DST: BLS_SIG_BLS12381G2_XMD:SHA-256_SSWU_RO_POP_

$ atlas hash
hex: <bundled artifact code hash>

$ atlas decode <hex-boc>   # decode an emitted AtlasEvent cell
```

### Live (needs `pnpm add @ton/ton`)
```sh
$ atlas info <atlas-addr> --testnet
$ atlas estimate fanout --atlas <addr> --testnet
$ atlas estimate rotation --atlas <addr> --testnet
$ atlas verify --testnet   # drift-check SDK vs canonical deploy
```

## The BLS ciphersuite (your product's aggregator side)
Atlas's operator pkShare registry + `GroupKeySync` carry G1 points under min-pk:

- Pubkeys (G1): 48 bytes compressed
- Signatures (G2, used by your fulfill receiver): 96 bytes compressed
- Domain-separation tag: `BLS_SIG_BLS12381G2_XMD:SHA-256_SSWU_RO_POP_`

`@noble/curves/bls12-381`'s default DST is `_NUL_` — that does NOT match TVM. Off-chain signers must pass `BLS_DST_G2_POP` (exported from `@titon-network/atlas-sdk`) explicitly to `bls.longSignatures.hash()`. The SDK's `signMessage(sk, msg)` helper binds the DST in for you.
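As a standalone sanity check of the sizes and DST listed above — pure byte-length checks, no curve math; the helper names are hypothetical, not SDK exports:

```typescript
// Wire-size invariants for the min-pk ciphersuite described above.
const BLS_DST_G2_POP = 'BLS_SIG_BLS12381G2_XMD:SHA-256_SSWU_RO_POP_';

const G1_PUBKEY_BYTES = 48; // compressed G1 — matches bits384 on-chain
const G2_SIG_BYTES = 96;    // compressed G2

function looksLikePubkey(b: Uint8Array): boolean {
  return b.length === G1_PUBKEY_BYTES;
}
function looksLikeSignature(b: Uint8Array): boolean {
  return b.length === G2_SIG_BYTES;
}

console.log(looksLikePubkey(new Uint8Array(48)));  // true
console.log(looksLikeSignature(new Uint8Array(48))); // false — that's a pubkey-sized buffer
console.log(BLS_DST_G2_POP.endsWith('_POP_'));     // true — the POP suite, not noble's default _NUL_
```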
```typescript
import { signMessage } from '@titon-network/atlas-sdk';

const sig = signMessage(sk, msgBytes); // 96-byte G2 compressed
```

## What's where
| File | Purpose |
|------|---------|
| `src/contracts/Atlas.ts` | `Atlas` class — ABI wrapper with send/get methods, `SchemaDriftError`, `validateConfig`, `validateSetVerifier`, `validateAgainstLive`, gas estimators |
| `src/opcodes.ts` | `OP` + `ERR` constants + schema versions + BLS ciphersuite constants + protocol limits |
| `src/errors.ts` | `explainError(code)` + `AtlasError` — covers Atlas (100-249, 333) and common TVM codes |
| `src/events/` | Typed `AtlasEvent` union + `decodeEvent` / `decodeEvents` / `tryDecodeEvent` |
| `src/factory.ts` | `newAtlas({ owner, forgeton })` — one-line deploy handle |
| `src/solo.ts` | `generateGroupKey` + `signMessage` + `aggregateSignatures` — BLS helpers with the correct DST bound in |
| `src/testing/` | `deployAtlasFixture` — one-line sandbox setup. Imported via the `@titon-network/atlas-sdk/testing` subpath |
| `src/diagnostics.ts` | `summarizeTx` / `formatTxSummary` — collapse a `Transaction` to its useful fields |
| `src/artifacts/loader.ts` | `loadAtlasCode()` + `ATLAS_CODE_HASH` |
| `src/deployments.ts` | `ATLAS_TESTNET` / `ATLAS_MAINNET` canonical addresses + `assertDeployment()` loud-error helper |
| `src/cli.ts` | `atlas` CLI |
| `ERRORS.md` | Flat Markdown table of every exit code. Generated. |
| `OPCODES.md` | Flat Markdown table of every wire opcode. Generated. |
| `llms.txt` | Single-page AI-assistant context |
| `AGENTS.md` | Surface map + skills index |
| `templates/agent-context.md` | Dense AI primer — generated into a product repo by `npx atlas init` |
| `examples/verifier-template.tolk` | Canonical starting point for a new product's 0x51 receiver |
| `skills/` | Seven persona-grouped task playbooks |
## Schema drift

Persistent structs (`AtlasStorage`, `AtlasConfig`, `VerifierRegistry`, `VerifierInfo`, `OperatorInfo`, `GroupBlob`, `OperatorShare`) carry a `schemaVersion: uint8` first field. The SDK's `validateAgainstLive()` verifies every version in one call and throws `SchemaDriftError` on mismatch — surface this as "upgrade the SDK or the contract", not as a silent null. See `atlas verify --testnet`.
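The drift check amounts to comparing an expected-version map against what the live contract reports and failing loudly. A minimal standalone sketch of that logic — the struct names come from the list above, but the version numbers and helper are illustrative, not the SDK's implementation:

```typescript
// Expected schemaVersion per persistent struct, as compiled into an SDK build.
// (A subset of the structs above; versions here are illustrative.)
const EXPECTED_VERSIONS: Record<string, number> = {
  AtlasStorage: 1,
  AtlasConfig: 1,
  VerifierRegistry: 1,
};

class SchemaDriftSketchError extends Error {}

// Throw loudly on any mismatch instead of returning a silent null.
function checkVersions(live: Record<string, number>): void {
  for (const [name, expected] of Object.entries(EXPECTED_VERSIONS)) {
    if (live[name] !== expected) {
      throw new SchemaDriftSketchError(
        `${name}: SDK expects v${expected}, live contract reports v${live[name]} — upgrade the SDK or the contract`,
      );
    }
  }
}

checkVersions({ AtlasStorage: 1, AtlasConfig: 1, VerifierRegistry: 1 }); // ok — no drift
// checkVersions({ AtlasStorage: 2, AtlasConfig: 1, VerifierRegistry: 1 }) would throw
```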
## Status

Live on TON testnet at `0QAWrBmdkBq3ba3I9365hKTTwx22r5OvgIt-YP18Vuv6NL0i`. Group key published at epoch 1 (solo-mode t=n=1). Pinned to ForgeTON testnet pool `0QBO0Jw0D2cz5YfXEfVLC-ooCDx2bMK2bUhqBeOlEMyzcT0c`. 22/22 `verifyDeployment` checks pass.
```typescript
import { assertDeployment, Atlas } from '@titon-network/atlas-sdk';

const dep = assertDeployment('testnet'); // returns the live handle
const atlas = tonClient.open(Atlas.createFromAddress(dep.atlas));
```

Mainnet not yet deployed — `ATLAS_MAINNET` is `null`. External crypto audit recommended before mainnet.
## Cross-repo workspace links

- Atlas contract source
- On-chain ABI (`messages.tolk`)
- Error codes (`errors.tolk`)
- Integration tests — end-to-end with real ForgeTON
- `titon/forgeton/` — the staking pool Atlas mirrors from. Defines the `Slash 0x14` + `AutomatonSync 0x1A` wire shapes
- `titon/kronos/` — canonical "single-product ForgeTON consumer" reference
- `titon/fortuna/` — VRF; first Atlas verifier consumer
- `titon/automaton/` — off-chain operator daemon
- Workspace `CLAUDE.md` — cross-repo orientation
## License

MIT.
