# Alteran

## Astro Integration
This repository now ships an Astro integration that turns any Cloudflare Worker-backed Astro app into a single-user ATProto Personal Data Server. Install the package (or link it locally), then add the integration to your `astro.config.mjs`:

```sh
npm install @alteran/astro
# or
bun add @alteran/astro
```

```js
import { defineConfig } from 'astro/config';
import cloudflare from '@astrojs/cloudflare';
import alteran from '@alteran/astro';

export default defineConfig({
  adapter: cloudflare({ mode: 'advanced' }),
  integrations: [alteran()],
});
```

By default the integration injects all `/xrpc/*` ATProto routes, health/ready checks, and the Cloudflare Worker entrypoint that wires `locals.runtime`. Optional flags let you expose the `/debug/*` utilities or keep your own homepage:

```js
alteran({
  debugRoutes: process.env.NODE_ENV !== 'production',
  includeRootEndpoint: false,
  injectServerEntry: true, // opt in if you don't maintain your own worker entrypoint
});
```

The integration automatically:

- Resolves all injected routes against the packaged runtime without requiring a Vite alias
- Registers the middleware that applies structured logging and CORS enforcement
- Injects all PDS HTTP endpoints into the host project
- Offers the packaged Cloudflare worker entrypoint when you enable `{ injectServerEntry: true }`
- Publishes ambient env typings so `Env` and `App.Locals` are available from TypeScript
When deploying, continue to configure Wrangler/D1/R2 secrets exactly as before—the integration does not change the runtime requirements.
## Custom Worker Entrypoint

The integration no longer overrides `build.serverEntry` by default. If you need to export additional Durable Objects or otherwise customise the worker, keep your own entrypoint and compose Alteran's runtime helpers instead of copying the internal logic.

```ts
// src/_worker.ts in your Astro project
import { createPdsFetchHandler, Sequencer } from '@alteran/astro/worker';

const fetch = createPdsFetchHandler();
export default { fetch };

// Re-export Sequencer so Wrangler can bind the Durable Object namespace
export { Sequencer };

// Export any additional Durable Objects after this line
export { MyDurableObject } from './worker/my-durable-object';
```

Helpers like `onRequest`, `seed`, and `validateConfigOrThrow` are also exported from `@alteran/astro/worker` if you need to build more advanced wrappers (for example, to add request instrumentation before delegating to the PDS handler; see the sketch below).
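For instance, a wrapper that logs timing before delegating might look like the following sketch. It assumes the handler returned by `createPdsFetchHandler` uses the standard Workers `(request, env, ctx)` signature; adjust to the package's actual exported types.

```ts
// src/_worker.ts — instrumentation sketch; the (request, env, ctx) handler shape is assumed.
import { createPdsFetchHandler, Sequencer } from '@alteran/astro/worker';

const pds = createPdsFetchHandler();

export default {
  async fetch(request: Request, env: unknown, ctx: ExecutionContext): Promise<Response> {
    const start = Date.now();
    const response = await pds(request, env, ctx);
    // Emit one structured log line per request before returning the PDS response.
    console.log(JSON.stringify({
      method: request.method,
      path: new URL(request.url).pathname,
      status: response.status,
      duration: Date.now() - start,
    }));
    return response;
  },
};

// Keep the Durable Object export so Wrangler can bind the namespace.
export { Sequencer };
```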
To install dependencies:

```sh
bun install
```

Dev server (Vite dev):

```sh
bun run dev
```

Cloudflare local dev (optional):

```sh
bunx wrangler dev --local
```

Build and deploy:

```sh
bun run build
bun run deploy
```

Health endpoints: `GET /health` and `GET /ready` return `200 ok`.
## Auth (JWT)

- `POST /xrpc/com.atproto.server.createSession` returns `accessJwt` and `refreshJwt` (HS256); see the login sketch below.
- `POST /xrpc/com.atproto.server.refreshSession` expects `Authorization: Bearer <refreshJwt>` and issues a new pair.
- Use `Authorization: Bearer <accessJwt>` on write routes.
- Secrets to set (Wrangler secrets or local bindings): `USER_PASSWORD` (dev login password), `ACCESS_TOKEN` and `REFRESH_TOKEN` (HMAC keys), `PDS_DID`, `PDS_HANDLE`.
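To illustrate the flow end to end, here is a hedged client-side sketch (hostname, handle, and password are placeholders; the request bodies follow the standard `com.atproto` lexicons):

```ts
// Hypothetical client sketch: create a session, then use the access JWT on a write route.
const base = 'https://your-pds.example.com';

const session = await fetch(`${base}/xrpc/com.atproto.server.createSession`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ identifier: 'user.example.com', password: 'your-password' }),
}).then((r) => r.json()); // -> { accessJwt, refreshJwt, ... }

const created = await fetch(`${base}/xrpc/com.atproto.repo.createRecord`, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${session.accessJwt}`, // access JWT required on write routes
  },
  body: JSON.stringify({
    repo: 'did:web:example.com',
    collection: 'app.bsky.feed.post',
    record: { text: 'hello from Alteran', createdAt: new Date().toISOString() },
  }),
});
console.log(created.status);
```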
## Rate limiting & limits

- Per-IP rate limit (best-effort, D1-backed): set `PDS_RATE_LIMIT_PER_MIN` (default writes=60/min, blobs=30/min). Responses include `x-ratelimit-*` headers (see the backoff sketch after this list).
- JSON body size cap via `PDS_MAX_JSON_BYTES` (default 65536 / 64 KiB).
- CORS: allow `*` by default in dev. In production, set `PDS_CORS_ORIGIN` to a CSV of allowed origins (e.g., `https://example.com,https://app.example.com`). Requests with an `Origin` not in this set are denied at the CORS layer (no wildcard fallback).
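Clients should treat a 429 as a signal to back off. The sketch below retries after a delay; the specific `x-ratelimit-reset` header name is an assumption, since only the `x-ratelimit-*` prefix is documented above.

```ts
// Sketch: retry writes after a 429, using an assumed x-ratelimit-reset header (seconds).
async function postWithBackoff(url: string, init: RequestInit, retries = 3): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch(url, init);
    if (res.status !== 429 || attempt >= retries) return res;
    const waitSec = Number(res.headers.get('x-ratelimit-reset') ?? '1') || 1;
    await new Promise((resolve) => setTimeout(resolve, waitSec * 1000));
  }
}
```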
This project was created using `bun init` in bun v1.2.22 and configured for Cloudflare Workers with Vite and `@cloudflare/vite-plugin`.
## Database Migrations

This project uses Drizzle Kit for database schema management and migrations.

### Migration Workflow

1. **Modify schema**: edit `src/db/schema.ts` to add/modify tables or indexes
2. **Generate migration**: run `bun run db:generate` to create a new migration file in `drizzle/`
3. **Review migration**: check the generated SQL in `drizzle/XXXX_*.sql`
4. **Apply locally**: run `bun run db:apply:local` to apply to the local D1 database
5. **Apply to production**: run `wrangler d1 migrations apply pds --remote` after deployment

### Migration Versioning

- Migrations are versioned sequentially (0000, 0001, 0002, etc.)
- Each migration is tracked in `drizzle/meta/_journal.json`
- Migrations are applied in order and cannot be skipped
- Applied migrations are recorded in D1's `_cf_KV` table
### Rollback Procedures

**Important**: D1 does not support automatic rollbacks. To roll back:

1. Create a new migration that reverses the changes
2. Test thoroughly in a local/staging environment
3. Apply the rollback migration to production

Example rollback migration:

```sql
-- Rollback: Remove index added in 0002
DROP INDEX IF EXISTS `record_cid_idx`;
```

## Data Retention & Pruning
**Commit Log**: stores full commit history for firehose and sync

- Default retention: last 10,000 commits
- Pruning: use the `pruneOldCommits()` utility (see the scheduling sketch below)
- Older commits can be safely removed as current state is in the MST

**Blockstore**: stores MST nodes (Merkle Search Tree blocks)

- Retention: blocks referenced by recent commits
- GC: use the `pruneOrphanedBlocks()` utility
- Orphaned blocks (not in recent commits) can be removed

**Token Revocation**: stores revoked JWT tokens

- Automatic cleanup: expired tokens removed lazily (on 1% of requests)
- Manual cleanup: use the token cleanup utility
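If you want to run pruning on a schedule, a Worker cron handler along these lines could work. Note that the import path and function signatures here are assumptions for illustration; check the package exports for the real API.

```ts
// Hypothetical scheduled-pruning sketch — the module path and signatures are assumptions.
import { pruneOldCommits, pruneOrphanedBlocks } from '@alteran/astro/worker';

export default {
  async scheduled(_event: ScheduledEvent, env: unknown): Promise<void> {
    await pruneOldCommits(env);      // keep the most recent commits (default window: 10,000)
    await pruneOrphanedBlocks(env);  // drop MST blocks no recent commit references
  },
};
```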
## Configuration Management

### Environment Setup

This PDS supports multiple environments (dev, staging, production) with separate configurations.

Deploy to a specific environment:

```sh
# Development
wrangler deploy --env dev

# Staging
wrangler deploy --env staging

# Production
wrangler deploy --env production
```

### Required Secrets
Set these secrets for each environment using `wrangler secret put <NAME> --env <environment>`:

| Secret | Description | Example |
|--------|-------------|---------|
| `PDS_DID` | Your DID identifier | `did:plc:abc123` or `did:web:example.com` |
| `PDS_HANDLE` | Your handle | `user.bsky.social` |
| `USER_PASSWORD` | Login password | Strong password |
| `ACCESS_TOKEN` | JWT access token secret | Random 32+ char string |
| `REFRESH_TOKEN` | JWT refresh token secret | Random 32+ char string |
| `REPO_SIGNING_KEY` | Ed25519 signing key (base64) | From `generate-signing-key.ts` |
Generate secrets:

```sh
# One-shot bootstrap (recommended)
# Generates all required secrets and prints wrangler commands
bun run scripts/setup-secrets.ts --env production --did did:web:example.com --handle user.example.com

# Or generate only the repo signing key
bun run scripts/generate-signing-key.ts

# After generation, set secrets (example for production)
wrangler secret put PDS_DID --env production
wrangler secret put PDS_HANDLE --env production
wrangler secret put USER_PASSWORD --env production
wrangler secret put ACCESS_TOKEN --env production
wrangler secret put REFRESH_TOKEN --env production
wrangler secret put REPO_SIGNING_KEY --env production

# Optional: publish public key for DID document
wrangler secret put REPO_SIGNING_KEY_PUBLIC --env production
```

### Using Cloudflare Secret Store (optional)
Instead of Wrangler Secrets, you may bind secrets from Cloudflare Secret Store; this repo supports both. Bind each secret you want to source from Secret Store via `secrets_store_secrets` in `wrangler.jsonc`:

```jsonc
{
  // ...
  "secrets_store_secrets": [
    { "binding": "USER_PASSWORD", "secret_name": "user_password", "store_id": "<your-store-id>" },
    { "binding": "ACCESS_TOKEN", "secret_name": "access_token", "store_id": "<your-store-id>" },
    { "binding": "REFRESH_TOKEN", "secret_name": "refresh_token", "store_id": "<your-store-id>" },
    { "binding": "PDS_DID", "secret_name": "pds_did", "store_id": "<your-store-id>" },
    { "binding": "PDS_HANDLE", "secret_name": "pds_handle", "store_id": "<your-store-id>" }
  ]
}
```

Notes:

- Bindings can use the same names as the existing env vars. Only one source should be configured per secret (Wrangler Secret OR Secret Store binding).
- At runtime, the worker resolves Secret Store bindings via `await env.<BINDING>.get()` and passes them to the app as plain strings (see the sketch below).
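A minimal sketch of that resolution logic, assuming a binding is either a plain string or an object exposing an async `get()`:

```ts
// Sketch: accept either a plain Wrangler secret (string) or a Secret Store binding.
type MaybeStoreSecret = string | { get(): Promise<string> };

async function resolveSecret(value: MaybeStoreSecret | undefined): Promise<string | undefined> {
  if (value === undefined || typeof value === 'string') return value; // plain env var / Wrangler secret
  return value.get(); // Secret Store binding: resolve to a plain string at runtime
}
```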
### Optional Configuration

These can be set as environment variables in `wrangler.jsonc` or as secrets:

| Variable | Default | Description |
|----------|---------|-------------|
| `PDS_ALLOWED_MIME` | `image/jpeg,image/png,...` | Comma-separated MIME types |
| `PDS_MAX_BLOB_SIZE` | `5242880` (5 MB) | Max blob size in bytes |
| `PDS_MAX_JSON_BYTES` | `65536` (64 KB) | Max JSON body size |
| `PDS_RATE_LIMIT_PER_MIN` | `60` | Write requests per minute |
| `PDS_CORS_ORIGIN` | `*` (dev), specific (prod) | Allowed CORS origins |
| `PDS_SEQ_WINDOW` | `512` | Firehose sequence window |
| `PDS_HOSTNAME` | - | Public hostname |
| `PDS_ACCESS_TTL_SEC` | `3600` (1 hour) | Access token TTL |
| `PDS_REFRESH_TTL_SEC` | `2592000` (30 days) | Refresh token TTL |
### Configuration Validation

The PDS validates configuration on startup and will fail fast if required secrets are missing:

```ts
// Automatic validation in src/_worker.ts
validateConfigOrThrow(env);
```

Validation checks (an illustrative sketch follows this list):

- All required secrets are present
- CORS is not wildcard in production
- DID format is valid
- Handle format is valid
- Numeric values are positive
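For illustration only (the packaged `validateConfigOrThrow` is the source of truth), checks of this shape might look like:

```ts
// Illustrative only — not the package's implementation.
function validateConfigSketch(env: Record<string, string | undefined>, isProd: boolean): void {
  const required = ['PDS_DID', 'PDS_HANDLE', 'USER_PASSWORD', 'ACCESS_TOKEN', 'REFRESH_TOKEN'];
  for (const key of required) {
    if (!env[key]) throw new Error(`Missing required secret: ${key}`);
  }
  if (isProd && env.PDS_CORS_ORIGIN === '*') {
    throw new Error('Wildcard CORS is not allowed in production');
  }
  if (!/^did:(plc|web):.+/.test(env.PDS_DID!)) throw new Error('Invalid DID format');
  if (Number(env.PDS_MAX_JSON_BYTES ?? '65536') <= 0) throw new Error('Numeric values must be positive');
}
```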
### Environment-Specific Settings

See `wrangler.jsonc` for environment-specific configurations:

- **Development**: relaxed CORS, larger blob limits, local D1/R2
- **Staging**: production-like settings, separate D1/R2 instances
- **Production**: strict CORS, production D1/R2, observability enabled
## Debugging & storage

- D1 schema/migrations: generated with Drizzle Kit into `drizzle/`. Generate with `bunx drizzle-kit generate`.
- Apply schema locally: `bunx wrangler d1 migrations apply pds --local` (requires a dev DB named `pds`).
- Bootstrap route (alt): `POST /debug/db/bootstrap` creates a minimal `record` table.
- Insert a test record: `POST /debug/record` with `{ "uri": "at://did:example/app.bsky.feed.post/123", "json": {"msg":"hi"} }`.
- Get a record: `GET /debug/record?uri=at://did:example/app.bsky.feed.post/123`.
- R2 test: `PUT /debug/blob/<key>` and `GET /debug/blob/<key>`.
- Run GC: `POST /debug/gc/blobs` removes R2 objects with no references.
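Assuming `debugRoutes` is enabled and the dev server is on the default port, a quick smoke test of these routes could look like:

```ts
// Exercise the debug routes listed above against a local dev server.
const base = 'http://localhost:4321';

await fetch(`${base}/debug/db/bootstrap`, { method: 'POST' });

await fetch(`${base}/debug/record`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ uri: 'at://did:example/app.bsky.feed.post/123', json: { msg: 'hi' } }),
});

const res = await fetch(`${base}/debug/record?uri=at://did:example/app.bsky.feed.post/123`);
console.log(await res.json());
```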
## XRPC surface

- `GET /xrpc/com.atproto.server.describeServer`
- `POST /xrpc/com.atproto.server.createSession` (returns JWTs)
- `POST /xrpc/com.atproto.server.refreshSession`
- `GET /xrpc/com.atproto.repo.getRecord?uri=...` (reads from the D1 `record` table) or `repo` + `collection` + `rkey`
- `POST /xrpc/com.atproto.repo.createRecord` (auth required)
- `POST /xrpc/com.atproto.repo.putRecord` (auth required)
- `POST /xrpc/com.atproto.repo.deleteRecord` (auth required)
- `POST /xrpc/com.atproto.repo.uploadBlob` (auth + MIME allowlist)
  - Stores blob metadata in the `blob` table (`cid` = sha256 b64url, `mime`, `size`)
  - Blob references inside records are tracked by R2 key; deleting a record drops usage and GC can reclaim orphaned objects
## Sync (minimal JSON variants)

- `GET /xrpc/com.atproto.sync.getHead` → `{ root, rev }`
- `GET /xrpc/com.atproto.sync.getRepo.json?did=<did>` → `{ did, head, rev, records: [{uri,cid,value}] }` (example below)
- `GET /xrpc/com.atproto.sync.getCheckout.json?did=<did>` → same as above
- `GET /xrpc/com.atproto.sync.getBlocks.json?cids=<cid1,cid2>` → `{ blocks: [{cid,value}] }`
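For example, pulling a JSON snapshot (hostname and DID are placeholders):

```ts
// Fetch the JSON repo snapshot described above.
const snapshot = await fetch(
  'https://your-pds.example.com/xrpc/com.atproto.sync.getRepo.json?did=did:web:example.com',
).then((r) => r.json());

console.log(snapshot.head, snapshot.rev, snapshot.records.length);
```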
## Sync (CAR v1)

- `GET /xrpc/com.atproto.sync.getRepo?did=<did>` → `application/vnd.ipld.car` snapshot
- `GET /xrpc/com.atproto.sync.getCheckout?did=<did>` → same as above
- `GET /xrpc/com.atproto.sync.getBlocks?cids=<cid1,cid2>` → `application/vnd.ipld.car` with requested blocks
- Blocks are DAG-CBOR encoded; CIDs are CIDv1 (dag-cbor + sha2-256)
## Firehose (WebSocket)

- `GET /xrpc/com.atproto.sync.subscribeRepos` upgrades to WebSocket.
- On writes, the worker POSTs a small commit frame to the `Sequencer` Durable Object, which broadcasts to all subscribers.
- Frames (subject to change; see the subscriber sketch below):
  - `{"type":"hello","now":<ms>}` once on connect
  - `{"type":"commit","did":"...","commitCid":"...","rev":<n>,"ts":<ms>}` on each write
## Blob storage

- Keys are content-addressed: `blobs/by-cid/<sha256-b64url>`; the upload response `$link` equals this key.
- Allowed MIME types via `PDS_ALLOWED_MIME` (CSV). Size limit via `PDS_MAX_BLOB_SIZE` (bytes). An upload sketch follows.
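A hedged upload sketch under these constraints (hostname, file, and token are placeholders):

```ts
// Upload a blob; requires an access JWT and a MIME type present in PDS_ALLOWED_MIME.
declare const accessJwt: string; // obtained from createSession

const bytes = await Bun.file('avatar.png').arrayBuffer();

const uploaded = await fetch('https://your-pds.example.com/xrpc/com.atproto.repo.uploadBlob', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${accessJwt}`,
    'Content-Type': 'image/png', // must be in the PDS_ALLOWED_MIME allowlist
  },
  body: bytes,
}).then((r) => r.json());

console.log(uploaded); // blob ref whose $link is the content-addressed key described above
```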
## Secrets & config (Wrangler)

- Required: `PDS_DID`, `PDS_HANDLE`, `USER_PASSWORD`, `ACCESS_TOKEN`, `REFRESH_TOKEN`
- Optional: `PDS_ALLOWED_MIME`, `PDS_MAX_BLOB_SIZE`, `PDS_MAX_JSON_BYTES`, `PDS_RATE_LIMIT_PER_MIN`, `PDS_CORS_ORIGIN`
- Durable Objects: ensure a binding for `Sequencer` exists and the migration tag is added (see `wrangler.jsonc`).
## Identity (DID)

- This single-user PDS uses `did:web`.
- Host `/.well-known/atproto-did` on your production domain with the DID value.
- Set the `PDS_DID` and `PDS_HANDLE` secrets to match your deployment.
## P0 Implementation - Core Protocol Compliance ✅
This PDS now implements full AT Protocol core compliance with:
### MST (Merkle Search Tree)
- ✅ Sorted, deterministic tree structure
- ✅ Automatic rebalancing (~4 fanout)
- ✅ Prefix compression for efficiency
- ✅ D1 blockstore integration
### Signed Commits
- ✅ Ed25519 cryptographic signatures
- ✅ AT Protocol v3 commit structure
- ✅ TID-based revisions
- ✅ Commit chain tracking
### Firehose
- ✅ WebSocket-based event stream
- ✅ CBOR-encoded frames (#info, #commit, #identity, #account)
- ✅ Cursor-based replay
- ✅ Backpressure handling
- ✅ Durable Object coordination
### XRPC Endpoints
- ✅ Server: getSession, deleteSession
- ✅ Repo: listRecords, describeRepo, applyWrites
- ✅ Sync: listBlobs, getRecord, listRepos, getLatestCommit
- ✅ Identity: resolveHandle, updateHandle
## Setup Instructions
### 1. Generate Secrets

```sh
# Recommended: bootstrap all secrets (prints wrangler commands)
bun run scripts/setup-secrets.ts --env production --did did:web:example.com --handle user.example.com

# Alternative: generate only the repo signing key
bun run scripts/generate-signing-key.ts
```

### 2. Configure Secrets
Required secrets:

```sh
wrangler secret put REPO_SIGNING_KEY   # From step 1
wrangler secret put PDS_DID            # Your DID
wrangler secret put PDS_HANDLE         # Your handle
wrangler secret put USER_PASSWORD      # Login password
wrangler secret put ACCESS_TOKEN       # JWT access token secret
wrangler secret put REFRESH_TOKEN      # JWT refresh token secret

# Optional: publish raw public key for DID document
wrangler secret put REPO_SIGNING_KEY_PUBLIC
```

For local development (`.dev.vars`):
```sh
PDS_DID=did:plc:your-did-here
PDS_HANDLE=your-handle.bsky.social
REPO_SIGNING_KEY=<base64-key-from-step-1>
# Optional: publish raw 32-byte public key in did.json
REPO_SIGNING_KEY_PUBLIC=<base64-raw-public-key>
USER_PASSWORD=your-password
ACCESS_TOKEN=your-access-secret
REFRESH_TOKEN=your-refresh-secret
PDS_SEQ_WINDOW=512
```

### 3. Run Database Migration
```sh
bun run db:generate
bun run db:apply:local
```

### 4. Run Tests

```sh
bun test tests/mst.test.ts
bun test tests/commit.test.ts
```

### 5. Start Development

```sh
bun run dev
```

## Testing the Implementation
### Test Firehose

```sh
npm install -g wscat
wscat -c "ws://localhost:4321/xrpc/com.atproto.sync.subscribeRepos"
```

### Publish to a Relay (Bluesky)
Relays discover PDSes via `com.atproto.sync.requestCrawl`. Your deployment will automatically notify relays on the first request it handles (and at most every 12 hours per isolate).

- Set your public hostname (bare domain, no protocol):

  ```sh
  PDS_HOSTNAME=your-pds.example.com
  ```

- Optional: choose relays to notify (CSV of hostnames). Defaults to `bsky.network`:

  ```sh
  PDS_RELAY_HOSTS=bsky.network
  ```

- To trigger manually from your machine:

  ```sh
  curl -X POST "https://bsky.network/xrpc/com.atproto.sync.requestCrawl" \
    -H "Content-Type: application/json" \
    -d '{"hostname":"your-pds.example.com"}'
  ```

Notes:

- Use only the hostname in `hostname` (no `https://`).
- Ensure your PDS is publicly reachable over HTTPS/WSS and that DID documents resolve to this hostname.
### Test XRPC Endpoints

```sh
# Get session
curl http://localhost:4321/xrpc/com.atproto.server.getSession

# Describe repo
curl "http://localhost:4321/xrpc/com.atproto.repo.describeRepo?repo=did:example:single-user"

# List records
curl "http://localhost:4321/xrpc/com.atproto.repo.listRecords?repo=did:example:single-user&collection=app.bsky.feed.post"
```

### Documentation

- `P0_COMPLETE.md` - Full P0 implementation details
- `P0_IMPLEMENTATION_SUMMARY.md` - Technical summary
- `PROGRESS.md` - Development progress notes
## Repo signing key (REQUIRED)

- Generate an Ed25519 signing key: `bun run scripts/generate-signing-key.ts`
- Store it as the `REPO_SIGNING_KEY` secret (base64-encoded private key)
- Used for signing all repository commits
## P1 Implementation - Production Readiness 🚀
This PDS now includes production-grade features for security, observability, and reliability:
### Authentication Hardening
- ✅ Single-use refresh tokens with JTI tracking
- ✅ Token rotation on every refresh
- ✅ Automatic token cleanup (lazy cleanup on 1% of requests)
- ✅ Account lockout after 5 failed login attempts (15-minute lockout)
- ✅ EdDSA (Ed25519) JWT signing support (in addition to HS256)
- ✅ Proper JWT claims: `sub`, `aud`, `iat`, `exp`, `jti`, `scope`
- ✅ Production CORS validation (no wildcard in production)
### Error Handling
- ✅ XRPC error hierarchy with AT Protocol error codes
- ✅ Consistent error responses with user-friendly messages
- ✅ Error categorization (client vs server errors)
- ✅ Request ID tracking in all error responses
### Observability
- ✅ Structured JSON logging with levels (debug, info, warn, error)
- ✅ Request ID tracking in all logs and response headers
- ✅ Enhanced health checks for D1 and R2 dependencies
- ✅ Performance metrics in request logs (duration, status)
### Additional Configuration
JWT configuration:

```sh
# Algorithm selection (HS256 or EdDSA)
PDS_HOSTNAME=your-pds.example.com
PDS_ACCESS_TTL_SEC=3600        # 1 hour
PDS_REFRESH_TTL_SEC=2592000    # 30 days
JWT_ALGORITHM=HS256            # or EdDSA

# For EdDSA (optional)
JWT_ED25519_PRIVATE_KEY=<base64-encoded-key>
JWT_ED25519_PUBLIC_KEY=<base64-encoded-key>
```

CORS configuration:

```sh
# Comma-separated list of allowed origins (no wildcard in production)
PDS_CORS_ORIGIN=https://app.example.com,https://admin.example.com
```

### Logging & Monitoring
View logs in development:

```sh
wrangler tail --format=pretty
```

View logs in production:

```sh
wrangler tail --env production --format=json
```

Configure Logpush (production):

1. Set up Logpush in the Cloudflare dashboard
2. Send logs to your preferred service (Datadog, Splunk, S3, etc.)
3. Filter by `requestId` for request tracing
Log format:

```json
{
  "level": "info",
  "type": "request",
  "requestId": "uuid",
  "method": "POST",
  "path": "/xrpc/com.atproto.repo.createRecord",
  "status": 200,
  "duration": 45,
  "timestamp": "2025-10-02T22:00:00.000Z"
}
```

### Health Check
Endpoint: `GET /health`

Response:

```json
{
  "status": "healthy",
  "timestamp": "2025-10-02T22:00:00.000Z",
  "checks": {
    "database": { "status": "ok" },
    "storage": { "status": "ok" }
  }
}
```

Returns 503 if any dependency is unhealthy.
### Security Best Practices

- **Never use wildcard CORS in production**: set explicit origins in `PDS_CORS_ORIGIN`
- **Use strong secrets**: generate cryptographically secure values for all secrets
- **Enable EdDSA signing**: more secure than HS256 for production
- **Monitor failed login attempts**: check logs for suspicious activity
- **Set appropriate token TTLs**: balance security and user experience
### Documentation

- `P1_IMPLEMENTATION_SUMMARY.md` - Full P1 implementation details
- `P1.md` - P1 task breakdown and requirements
## P3 Implementation - Optimization & Interoperability 🚀
This PDS now includes optimization for Cloudflare Workers and interoperability features:
### Cloudflare Workers Optimization
- ✅ Streaming CAR encoding for memory efficiency (< 128MB)
- ✅ Edge caching for DID documents and static assets
- ✅ Performance tests verifying CPU and memory constraints
- ✅ Memory-efficient operations for large repositories
### Blob Storage Enhancement
- ✅ Blob quota tracking per DID (default: 10GB)
- ✅ Quota enforcement on upload
- ✅ Reference counting for garbage collection
- ✅ Deduplication by content-addressed storage
### Identity Enhancement

- ✅ DID document generation at `/.well-known/did.json`
- ✅ Handle validation and normalization
- ✅ Service endpoints in DID document
- ✅ Edge caching for identity documents
### Interoperability Testing
- ✅ Federation test stubs for PDS-to-PDS sync
- ✅ Compliance test stubs for AT Protocol
- ✅ Protocol version documentation
- ✅ Lexicon validation framework
### Configuration

Blob quota:

```sh
PDS_BLOB_QUOTA_BYTES=10737418240   # Default: 10GB
```

Caching (automatic; see the cache sketch after this list):

- DID documents: 1 hour TTL, 24 hour stale-while-revalidate
- Records: 1 minute TTL, 5 minute stale-while-revalidate
- Repo snapshots: 5 minute TTL, 1 hour stale-while-revalidate
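As an illustration of the DID-document policy above (not the package's actual code), an edge cache using the Workers Cache API could look like:

```ts
// Illustrative edge-cache sketch using the Workers Cache API and the DID-document TTLs above.
async function cachedDidDocument(request: Request, build: () => Promise<Response>): Promise<Response> {
  const cache = caches.default;
  const hit = await cache.match(request);
  if (hit) return hit;

  const fresh = await build();
  const response = new Response(fresh.body, fresh);
  // 1 hour TTL, 24 hour stale-while-revalidate, per the list above
  response.headers.set('Cache-Control', 'public, max-age=3600, stale-while-revalidate=86400');
  await cache.put(request, response.clone());
  return response;
}
```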
### Testing

```sh
# Performance tests
bun test tests/performance.test.ts

# Memory tests
bun test tests/memory.test.ts

# Blob tests
bun test tests/blob.test.ts

# Identity tests
bun test tests/identity.test.ts

# Federation tests
bun test tests/federation.test.ts

# Compliance tests
bun test tests/compliance.test.ts
```

### Documentation
- `P3_IMPLEMENTATION_SUMMARY.md` - Full P3 implementation details
- `P3.md` - P3 task breakdown and requirements
