hydrousdb v3.5.1 — Official JavaScript/TypeScript SDK for HydrousDB, the backend-as-a-service platform for records, auth, storage, and analytics.
HydrousDB JS/TS SDK
A database that doesn't choke on big JSON. Store, query, and analyse massive records — with auth and file storage built in.
npm install hydrousdb
Get your free API keys at hydrousdb.com
Table of Contents
- Setup
- Records
- Auth
- File Storage
- Analytics
- Error Handling
- TypeScript
- Security
- Framework Examples
- API Reference
Setup
import { createClient } from 'hydrousdb';
const db = createClient({
authKey: process.env.HYDROUS_AUTH_KEY!, // hk_auth_…
bucketSecurityKey: process.env.HYDROUS_BUCKET_KEY!, // hk_bucket_…
storageKeys: {
main: process.env.HYDROUS_STORAGE_KEY!, // ssk_…
// add more named storage keys as needed
},
// baseUrl: 'https://custom-endpoint.example.com', // optional override
});
You get three key types from the dashboard — one for auth, one for records/analytics, one (or more) for file storage. Keep them in environment variables, never in your code.
Works in React, Next.js, Vue, React Native, Node.js — anywhere that runs modern JavaScript.
Records
JSON objects stored in named buckets. Every record automatically gets id, createdAt, and updatedAt.
const posts = db.records('blog-posts');
Create, Read, Update, Delete
// Create
const post = await posts.create(
{ title: 'Hello', status: 'draft', views: 0 },
{
queryableFields: ['status', 'views'], // declare fields you want to filter on
userEmail: '[email protected]', // optional — for audit trails
},
);
console.log(post.id); // "260601-rec_01JA2XYZ"
console.log(post.createdAt); // Unix ms timestamp
// Read by ID
const found = await posts.get(post.id);
// Partial update — only the fields you pass are changed
await posts.patch(post.id, { status: 'published' });
// Merge mode — deeply merge nested objects instead of replacing them
await posts.patch(post.id, { meta: { seo: true } }, { merge: true });
// Delete permanently
await posts.delete(post.id);
Why queryableFields? Records are stored as compressed blobs. Fields you want to filter or sort by must be declared at write time. You only pay the indexing overhead for what you actually query.
Querying & Filtering
const { records, hasMore, nextCursor } = await posts.query({
filters: [
{ field: 'status', op: '==', value: 'published' },
{ field: 'views', op: '>', value: 100 },
{ field: 'title', op: 'contains', value: 'hello' },
],
orderBy: 'createdAt',
order: 'desc',
limit: 20,
fields: 'id,title,status', // optional — return only these fields
});
Supported filter operators: ==, !=, >, <, >=, <=, contains
Time Scope on Queries
Pass timeScope to restrict records to a specific day, month, or year using the record ID prefix convention. This is the fastest way to scope a query by time — no timestamp arithmetic needed.
| Scope | Format | Example | Matches |
|---|---|---|---|
| Day | _day_YYMMDD | _day_260305 | March 5, 2026 |
| Month | _month_YYMM | _month_2603 | March 2026 |
| Year | _year_YY | _year_26 | All of 2026 |
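If you build scope strings from Date objects, a couple of small helpers can encode the convention. These are hypothetical utilities of our own, not part of the SDK:

```typescript
// Build timeScope strings (_day_YYMMDD / _month_YYMM / _year_YY) from a Date.
// Uses UTC fields; adjust if your record IDs follow a different timezone.
function timeScopeForDay(d: Date): string {
  const yy = String(d.getUTCFullYear() % 100).padStart(2, '0');
  const mm = String(d.getUTCMonth() + 1).padStart(2, '0');
  const dd = String(d.getUTCDate()).padStart(2, '0');
  return `_day_${yy}${mm}${dd}`;
}

function timeScopeForMonth(d: Date): string {
  const yy = String(d.getUTCFullYear() % 100).padStart(2, '0');
  const mm = String(d.getUTCMonth() + 1).padStart(2, '0');
  return `_month_${yy}${mm}`;
}

function timeScopeForYear(d: Date): string {
  return `_year_${String(d.getUTCFullYear() % 100).padStart(2, '0')}`;
}
```

For example, `timeScopeForDay(new Date(Date.UTC(2026, 2, 5)))` yields `'_day_260305'`.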
// All records from March 2026
const { records, hasMore, nextCursor } = await posts.query({
timeScope: '_month_2603',
order: 'desc',
limit: 50,
});
// Paginate through a time-scoped result set
if (hasMore) {
const page2 = await posts.query({
timeScope: '_month_2603',
startAfter: nextCursor,
});
}
// A specific day
const { records: dayRecords } = await posts.query({
timeScope: '_day_260305', // March 5, 2026
order: 'asc',
});
// An entire year
const { records: yearRecords } = await posts.query({
timeScope: '_year_26',
orderBy: 'createdAt',
order: 'asc',
limit: 100,
});
// Fully composable with filters
const { records: published } = await posts.query({
timeScope: '_month_2603',
filters: [{ field: 'status', op: '==', value: 'published' }],
orderBy: 'createdAt',
order: 'desc',
});
// getAll() also respects timeScope
const all = await posts.getAll({ timeScope: '_year_26' });
Pagination
query() returns a cursor you can pass straight into the next call.
// Page 1
const page1 = await posts.query({ limit: 20, orderBy: 'createdAt', order: 'desc' });
// Page 2
if (page1.hasMore) {
const page2 = await posts.query({
limit: 20,
orderBy: 'createdAt',
order: 'desc',
startAfter: page1.nextCursor, // cursor-based — no offset drift
});
}
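When you need every page, the cursor loop can be wrapped in a generic drain helper. This is our own utility sketch, not part of the SDK — only the `records` / `hasMore` / `nextCursor` shape comes from the docs above:

```typescript
// Shape of one page of query() results (subset of the documented QueryResult).
interface Page<T> {
  records: T[];
  hasMore: boolean;
  nextCursor?: string;
}

// Repeatedly call a cursor-based query function until hasMore is false,
// collecting all records across pages.
async function drainAll<T>(
  fetchPage: (startAfter?: string) => Promise<Page<T>>,
): Promise<T[]> {
  const out: T[] = [];
  let cursor: string | undefined;
  do {
    const page = await fetchPage(cursor);
    out.push(...page.records);
    if (!page.hasMore) break;
    cursor = page.nextCursor;
  } while (cursor);
  return out;
}

// Usage with the SDK (untested sketch):
// const all = await drainAll((c) => posts.query({ limit: 100, startAfter: c }));
```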
// You can also use startAt / endAt for range-based cursor control
const window = await posts.query({ startAt: cursorA, endAt: cursorB });
Atomic Field Updates
Avoid race conditions with server-side sentinels inside patch():
await posts.patch(post.id, {
  views: { __op: 'increment', delta: 1 },              // add N
  credits: { __op: 'decrement', delta: 5 },            // subtract N
  slug: { __op: 'setOnce', value: 'my-post' },         // set only if currently empty
  tags: { __op: 'appendUnique', item: 'featured' },    // add to array, no duplicates
  rating: { __op: 'clamp', value: 6, min: 0, max: 5 }, // clamp to range
  price: { __op: 'multiplyBy', factor: 1.1 },          // multiply
  active: { __op: 'toggleBool' },                      // flip boolean
  syncedAt: { __op: 'serverTimestamp' },               // set to server time
} as any);
// An object literal can't repeat a key, so use a second patch for another op on the same field:
await posts.patch(post.id, {
  tags: { __op: 'removeFromArray', item: 'draft' },    // remove from array
} as any);
Enable audit trails and history with extra patch options:
await posts.patch(
post.id,
{ status: 'published' },
{ userEmail: '[email protected]', trackHistory: true },
);
Batch Operations
// Create up to 500 records at once
const { results, errors, successful, failed } = await posts.batchCreate(
[{ title: 'A', status: 'draft' }, { title: 'B', status: 'draft' }],
{ queryableFields: ['title', 'status'], userEmail: '[email protected]' },
);
// Update up to 500 records at once
const { successful, failed } = await posts.batchUpdate(
[
{ recordId: 'id1', values: { status: 'archived' } },
{ recordId: 'id2', values: { status: 'archived' } },
],
'[email protected]', // optional userEmail
);
// Delete up to 500 records at once
const { successful, failed } = await posts.batchDelete(['id1', 'id2']);
Batch upsert using custom IDs — include _customRecordId on each item:
await posts.batchCreate([
{ _customRecordId: '260601-post_hello', title: 'Hello' },
{ _customRecordId: '260601-post_world', title: 'World' },
] as any);
Version History
Every write is automatically versioned when trackHistory is enabled.
// List all saved versions
const history = await posts.getHistory(post.id);
// → [{ generation, savedAt, savedBy, sizeBytes }, …]
// Retrieve a specific past version
const v1 = await posts.getVersion(post.id, history[0].generation!);
Custom Record IDs
Supply your own ID at creation time — if it already exists the record is upserted in-place.
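A small builder can keep IDs consistent with the documented YYMMDD-segment1__segment2 format. The helper name is ours, not an SDK export:

```typescript
// Build a custom record ID: YYMMDD date prefix, then segments joined by '__'.
// Uses UTC date fields.
function customRecordId(d: Date, ...segments: string[]): string {
  const yy = String(d.getUTCFullYear() % 100).padStart(2, '0');
  const mm = String(d.getUTCMonth() + 1).padStart(2, '0');
  const dd = String(d.getUTCDate()).padStart(2, '0');
  return `${yy}${mm}${dd}-${segments.join('__')}`;
}
```

`customRecordId(new Date(Date.UTC(2026, 5, 1)), 'post_welcome')` produces `'260601-post_welcome'`, matching the example below.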
// Format: YYMMDD-segment1__segment2
const post = await posts.create(
{ title: 'Welcome' },
{ customRecordId: '260601-post_welcome' },
);
Existence Check
A lightweight HEAD request — much cheaper than fetching the full record:
const exists = await posts.exists(post.id); // true | false
Get All Records
Fetches every record matching the options; filters aren't supported — use query() when you need them.
const all = await posts.getAll({
orderBy: 'createdAt',
order: 'desc',
limit: 500,
});
Auth
A complete user system — signup, login, sessions, password reset, email verification, and admin controls.
const auth = db.auth();
Sign Up & Log In
// Sign up — extra fields beyond email/password are stored on the user
const { user, session } = await auth.signup({
email: '[email protected]',
password: 'hunter2',
fullName: 'Alice Smith',
plan: 'pro', // any custom fields you want
});
// Log in
const { user, session } = await auth.login({
email: '[email protected]',
password: 'hunter2',
});
// Log out this device
await auth.logout({ sessionId: session.sessionId });
// Log out everywhere
await auth.logout({ sessionId: session.sessionId, allDevices: true });
Store session.sessionId and session.refreshToken in your app.
| Token | Lifetime |
|---|---|
| sessionId | 24 hours |
| refreshToken | 30 days |
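Given those lifetimes, clients typically refresh a session shortly before it expires. A minimal predicate for that decision — our own sketch, not an SDK method — assuming `session.expiresAt` is a Unix-ms timestamp as documented under Session Management:

```typescript
// Return true when the session is within `marginMs` of expiring (or already
// expired), so the caller can trade the refreshToken for a new session.
function shouldRefresh(
  expiresAt: number,
  now: number = Date.now(),
  marginMs: number = 5 * 60_000, // refresh 5 minutes early by default
): boolean {
  return now >= expiresAt - marginMs;
}

// Usage sketch:
// if (shouldRefresh(session.expiresAt)) {
//   session = await auth.refreshSession(session.refreshToken);
// }
```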
Session Management
// Validate a session and get the current user (use on your backend)
const { user, session: current } = await auth.validateSession(session.sessionId);
// current → { sessionId, expiresAt }
// Get a brand-new session from a refresh token (before the old one expires)
const newSession = await auth.refreshSession(session.refreshToken);
User Profile
// Fetch a user by ID
const user = await auth.getUser(session.userId);
// Update profile fields (users can update themselves; admins can update anyone)
await auth.updateUser({
sessionId: session.sessionId,
userId: user.id,
updates: { fullName: 'Alice Johnson', plan: 'enterprise' },
});
// Soft-delete (users can delete themselves; admins can delete anyone)
await auth.deleteUser(session.sessionId, user.id);
The UserRecord shape:
interface UserRecord {
id: string;
email: string;
fullName?: string | null;
emailVerified: boolean;
accountStatus: 'active' | 'locked' | 'suspended';
role: 'user' | 'admin';
createdAt: number; // Unix ms
updatedAt: number; // Unix ms
metadata?: Record<string, unknown>;
[key: string]: unknown; // custom fields from signup
}
Password & Email
// Change password — requires an active session AND the current password
await auth.changePassword({
sessionId: session.sessionId,
userId: user.id,
currentPassword: 'hunter2',
newPassword: 'correcthorsebatterystaple',
});
// Forgot-password flow
await auth.requestPasswordReset('[email protected]'); // always succeeds (prevents enumeration)
await auth.confirmPasswordReset(tokenFromEmail, 'newpassword123');
// Email verification
await auth.requestEmailVerification(user.id);
await auth.confirmEmailVerification(tokenFromEmail);
Admin Controls
All admin methods require an active admin session.
// Paginated user list
const { users, hasMore, nextCursor } = await auth.listUsers({
sessionId: adminSession.sessionId,
limit: 50,
cursor: previousNextCursor, // optional — for subsequent pages
});
// Lock an account (default: 15 minutes)
const { lockedUntil, unlockTime } = await auth.lockAccount({
sessionId: adminSession.sessionId,
userId: user.id,
duration: 60 * 60 * 1000, // optional ms — lock for 1 hour
});
// Unlock an account
await auth.unlockAccount(adminSession.sessionId, user.id);
// Permanent (hard) delete — cannot be undone
await auth.hardDeleteUser(adminSession.sessionId, user.id);
// Bulk delete — soft or hard
const { succeeded, failed } = await auth.bulkDeleteUsers({
sessionId: adminSession.sessionId,
userIds: ['id1', 'id2', 'id3'],
hard: true, // optional — defaults to soft delete
});
File Storage
Files are private by default and scoped to your storage key server-side.
const storage = db.storage('main'); // 'main' matches a key in storageKeys config
Upload
// Simple upload — anything up to 500 MB
const result = await storage.upload(file, 'avatars/alice.jpg', {
isPublic: true, // public CDN URL (default: false)
overwrite: true, // replace if the path exists (default: false)
mimeType: 'image/jpeg', // optional — auto-detected from content if omitted
expiresInSeconds: 3600, // optional — auto-delete after N seconds
});
console.log(result.publicUrl); // permanent CDN URL (if isPublic: true)
console.log(result.downloadUrl); // authenticated URL (if private)
console.log(result.path);
console.log(result.size);
console.log(result.mimeType);
// Upload a JS object or string directly as a file
await storage.uploadRaw({ theme: 'dark', lang: 'en' }, 'settings/config.json');
await storage.uploadRaw('<html>…</html>', 'pages/home.html', { mimeType: 'text/html' });
Large Files with Progress
Recommended for files > 10 MB or when you need a progress indicator (browsers only).
// Step 1 — get a signed GCS PUT URL
const { uploadUrl, path, expiresAt, expiresIn } = await storage.getUploadUrl({
path: 'videos/intro.mp4',
mimeType: 'video/mp4',
size: file.size,
isPublic: true,
overwrite: false,
expiresInSeconds: 3600, // URL lifetime
});
// Step 2 — upload directly to GCS with progress callback (browser XHR)
await storage.uploadToSignedUrl(uploadUrl, file, 'video/mp4', (percent) => {
console.log(`${percent}% uploaded`);
});
// Step 3 — confirm and register metadata server-side
const result = await storage.confirmUpload({
path,
mimeType: 'video/mp4',
isPublic: true,
});
Batch Uploads
Get signed URLs for up to 50 files at once, upload them in parallel, then confirm in one call.
// Step 1 — get signed URLs for multiple files
const { files: urls } = await storage.getBatchUploadUrls([
{ path: 'docs/a.pdf', mimeType: 'application/pdf', size: fileA.size, isPublic: false },
{ path: 'docs/b.pdf', mimeType: 'application/pdf', size: fileB.size, isPublic: false },
]);
// Step 2 — upload each file (run in parallel)
await Promise.all(
urls.map(({ uploadUrl, path }, i) =>
storage.uploadToSignedUrl(uploadUrl, files[i], 'application/pdf'),
),
);
// Step 3 — confirm all at once
const { succeeded, failed } = await storage.batchConfirmUploads([
{ path: 'docs/a.pdf', mimeType: 'application/pdf' },
{ path: 'docs/b.pdf', mimeType: 'application/pdf' },
]);
Download
// Download a private file as ArrayBuffer
const buffer = await storage.download('private/report.pdf');
// Convert to a Blob for use in the browser
const blob = new Blob([buffer], { type: 'application/pdf' });
const url = URL.createObjectURL(blob);
For public files, just use the publicUrl directly — no SDK or authentication needed.
Batch Download
Downloads up to 20 files at once. Content is returned as base64-encoded strings.
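Since each succeeded entry's `content` is base64, you'll usually decode it before use. Two small decoding helpers of our own (not SDK exports) for Node — in browsers, use `atob()` instead of `Buffer`:

```typescript
// Decode a base64 `content` string from batchDownload into UTF-8 text
// (for JSON, HTML, etc.).
function decodeBase64ToText(content: string): string {
  return Buffer.from(content, 'base64').toString('utf8');
}

// Decode into raw bytes (for PDFs, images, and other binary files).
function decodeBase64ToBytes(content: string): Uint8Array {
  return new Uint8Array(Buffer.from(content, 'base64'));
}
```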
const { succeeded, failed } = await storage.batchDownload([
'docs/report.pdf',
'images/chart.png',
]);
for (const file of succeeded) {
console.log(file.path, file.mimeType, file.size);
// file.content is base64 — decode with atob() or Buffer.from(content, 'base64')
}
for (const err of failed) {
console.error(err.path, err.error, err.code);
}
List, Metadata & Signed URLs
// List files and folders (paginated)
const { files, folders, hasMore, nextCursor } = await storage.list({
prefix: 'avatars/', // optional path prefix to list under
limit: 50,
cursor: previousNextCursor,
recursive: true, // include files in sub-folders (default: false)
});
// Page 2
if (hasMore) {
const page2 = await storage.list({ prefix: 'avatars/', cursor: nextCursor });
}
// Get file metadata
const meta = await storage.getMetadata('avatars/alice.jpg');
// → { path, size, mimeType, isPublic, publicUrl, downloadUrl, createdAt, updatedAt }
// Generate a time-limited link — anyone with the URL can download (no key needed)
// Note: downloads via signed URL bypass the server, so download stats are NOT tracked
const { signedUrl, expiresAt, expiresIn } = await storage.getSignedUrl(
'private/report.pdf',
3600, // lifetime in seconds (default: 3600)
);
Move, Copy & Delete
await storage.move('old/path.jpg', 'new/path.jpg');
await storage.copy('templates/base.html', 'pages/home.html');
await storage.deleteFile('avatars/old.jpg');
await storage.deleteFolder('temp/'); // recursively deletes all contents
Visibility
// Make a private file public
const result = await storage.setVisibility('reports/q1.pdf', true);
console.log(result.publicUrl);
// Make a public file private
await storage.setVisibility('reports/q1.pdf', false);
Folders
// Create an explicit folder marker (usually not needed — folders are implicit)
await storage.createFolder('projects/2025/');
Scoped Storage
Automatically prefix every path — ideal for per-user file isolation.
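The prefixing itself is simple to reason about — conceptually it's just a path join, which this illustrative sketch (not the real ScopedStorage implementation) captures:

```typescript
// Join a scope prefix and a relative path, tolerating a missing trailing slash
// on the prefix. ScopedStorage applies this kind of prefixing to every call.
function joinScope(prefix: string, path: string): string {
  return prefix.endsWith('/') ? prefix + path : `${prefix}/${path}`;
}
```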
const userFiles = db.storage('main').scope(`users/${userId}/`);
await userFiles.upload(pdf, 'contract.pdf'); // → users/{userId}/contract.pdf
await userFiles.uploadRaw({ key: 'val' }, 'prefs.json');
const { files } = await userFiles.list(); // → only lists users/{userId}/
// Nest deeper
const reports = userFiles.scope('reports/'); // → users/{userId}/reports/
await reports.upload(file, 'q1.pdf'); // → users/{userId}/reports/q1.pdf
// All StorageManager methods are available on ScopedStorage
const meta = await userFiles.getMetadata('contract.pdf');
const { signedUrl } = await userFiles.getSignedUrl('contract.pdf', 900);
await userFiles.move('old.pdf', 'new.pdf');
await userFiles.deleteFile('contract.pdf');
await userFiles.deleteFolder('reports/');
Storage Stats
// Stats for this storage key
const stats = await storage.getStats();
// → { totalFiles, totalBytes, uploadCount, downloadCount, deleteCount }
// Server info (no auth required)
const info = await storage.info();
// → { ok: true, storageRoot: '…' }
Analytics
BigQuery-powered aggregations. No ETL, no pipelines — just query.
const analytics = db.analytics('orders');
Date Range (Time Scope)
Almost every analytics method accepts an optional dateRange to restrict results to a time window. Both start and end are Unix timestamps in milliseconds — the server converts them to ISO strings internally. Both fields are optional; omit either for an open-ended range.
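The most common window is "the last N days", which is easy to mistype with the millisecond arithmetic. A convenience builder of our own (not an SDK export):

```typescript
// Build a DateRange covering the last `n` days, both bounds in Unix ms.
function lastNDays(n: number, now: number = Date.now()): { start: number; end: number } {
  const DAY_MS = 24 * 60 * 60 * 1000;
  return { start: now - n * DAY_MS, end: now };
}

// Usage sketch:
// const { count } = await analytics.count({ dateRange: lastNDays(7) });
```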
interface DateRange {
start?: number; // Unix ms — inclusive lower bound
end?: number; // Unix ms — inclusive upper bound
}
Granularity options (for time series methods):
| Value | Buckets results by |
|---|---|
| 'hour' | Each hour |
| 'day' | Each calendar day |
| 'week' | Each week |
| 'month' | Each month |
| 'year' | Each year |
Aggregation options (for numeric field methods):
| Value | Meaning |
|---|---|
| 'sum' | Total |
| 'avg' | Average |
| 'min' | Minimum |
| 'max' | Maximum |
| 'count' | Record count |
Count
// All-time total
const { count } = await analytics.count();
// Within a time window
const { count: lastWeek } = await analytics.count({
dateRange: {
start: Date.now() - 7 * 24 * 60 * 60 * 1000,
end: Date.now(),
},
});
Distribution
How many records have each value of a field.
const dist = await analytics.distribution({
field: 'status',
limit: 10,
order: 'desc', // 'asc' | 'desc'
dateRange: { start: new Date('2025-01-01').getTime() },
});
// → [{ value: 'published', count: 320 }, { value: 'draft', count: 80 }, …]
Sum
Sum a numeric field, optionally grouped by another field.
// Total revenue
const [{ sum: totalRevenue }] = await analytics.sum({ field: 'amount' });
// Revenue by region, last quarter
const sums = await analytics.sum({
field: 'amount',
groupBy: 'region',
limit: 20,
dateRange: {
start: new Date('2025-01-01').getTime(),
end: new Date('2025-03-31').getTime(),
},
});
// → [{ group: 'Europe', sum: 18200 }, { group: 'Americas', sum: 29400 }, …]
Time Series
Count of records over time — useful for activity graphs.
// Daily record counts, last 30 days
const daily = await analytics.timeSeries({
granularity: 'day',
dateRange: {
start: Date.now() - 30 * 24 * 60 * 60 * 1000,
end: Date.now(),
},
});
// → [{ date: '2025-03-01', count: 42 }, { date: '2025-03-02', count: 55 }, …]
// Monthly, all time
const monthly = await analytics.timeSeries({ granularity: 'month' });
Field Time Series
Aggregate a numeric field over time — useful for revenue, score, or usage trends.
// Daily sum of revenue, last 90 days
const revTrend = await analytics.fieldTimeSeries({
field: 'amount',
aggregation: 'sum',
granularity: 'day',
dateRange: {
start: Date.now() - 90 * 24 * 60 * 60 * 1000,
end: Date.now(),
},
});
// → [{ date: '2025-01-01', value: 4820.5 }, …]
// Monthly average order value
const avgTrend = await analytics.fieldTimeSeries({
field: 'amount',
aggregation: 'avg',
granularity: 'month',
});
Top N
Most frequent values for a field by record count.
// Top 10 countries
const top10 = await analytics.topN({
field: 'countryCode',
n: 10,
labelField: 'countryName', // optional — include a human-readable label alongside the value
order: 'desc',
dateRange: { start: new Date('2025-01-01').getTime() },
});
// → [{ value: 'US', label: 'United States', count: 420 }, …]
Stats
Statistical summary for a numeric field.
const priceStats = await analytics.stats({
field: 'price',
dateRange: { start: new Date('2025-01-01').getTime() },
});
// → { min, max, avg, sum, count, stddev }
Records via BigQuery
Fetch filtered records through the BigQuery engine instead of Firestore. Useful for large result sets or complex server-side filtering.
const records = await analytics.records<Order>({
filters: [
{ field: 'status', op: '==', value: 'paid' },
{ field: 'amount', op: '>=', value: 100 },
{ field: 'country', op: 'CONTAINS', value: 'US' },
],
selectFields: ['id', 'amount', 'country', 'createdAt'], // optional projection
orderBy: 'createdAt',
order: 'desc',
limit: 1000,
offset: 0,
dateRange: {
start: new Date('2025-01-01').getTime(),
end: new Date('2025-03-31').getTime(),
},
});
Analytics filter operators: ==, !=, >, <, >=, <=, CONTAINS
Multi-Metric
Compute multiple aggregations in a single round-trip — perfect for dashboards.
const dashboard = await analytics.multiMetric({
metrics: [
{ field: 'amount', name: 'totalRevenue', aggregation: 'sum' },
{ field: 'amount', name: 'avgOrder', aggregation: 'avg' },
{ field: 'amount', name: 'maxOrder', aggregation: 'max' },
{ field: 'recordId', name: 'orderCount', aggregation: 'count' },
],
dateRange: { start: Date.now() - 30 * 24 * 60 * 60 * 1000 },
});
// → { totalRevenue: 48200, avgOrder: 96.4, maxOrder: 999, orderCount: 500 }
Storage Stats (Analytics)
Record count and byte statistics for the bucket.
const storageInfo = await analytics.storageStats({
dateRange: { start: new Date('2025-01-01').getTime() },
});
// → { totalRecords, totalBytes, avgBytes, minBytes, maxBytes }
Cross-Bucket
Compare the same metric across multiple buckets in a single query. Your key must have read access to all listed buckets. System buckets are blocked.
const compare = await analytics.crossBucket({
bucketKeys: ['orders-2024', 'orders-2025'],
field: 'amount',
aggregation: 'sum',
dateRange: { start: new Date('2025-01-01').getTime() },
});
// → [{ bucket: 'orders-2024', value: 38200 }, { bucket: 'orders-2025', value: 52100 }]
Raw Query
Escape hatch for query types not covered by the typed helpers.
import type { AnalyticsQuery, AnalyticsResult } from 'hydrousdb';
const result = await analytics.query<MyResultType>({
queryType: 'distribution',
field: 'category',
granularity: 'month',
filters: [{ field: 'active', op: '==', value: true }],
dateRange: { start: Date.now() - 30 * 24 * 60 * 60 * 1000 },
limit: 50,
order: 'desc',
});
Error Handling
import {
HydrousError,
AuthError,
RecordError,
StorageError,
AnalyticsError,
ValidationError,
NetworkError,
} from 'hydrousdb';
try {
await db.auth().login({ email: '[email protected]', password: 'wrong' });
} catch (err) {
if (err instanceof AuthError) {
console.log(err.code); // 'INVALID_CREDENTIALS'
console.log(err.status); // 401
console.log(err.message); // human-readable
console.log(err.requestId); // for support
}
if (err instanceof ValidationError) {
console.log(err.details); // string[] of specific validation failures
}
if (err instanceof NetworkError) {
console.log('No internet or server unreachable');
console.log(err.cause); // original error
}
}
| Error class | When it's thrown |
|---|---|
| HydrousError | Base class — all SDK errors extend this. Has code, status, requestId, details. |
| AuthError | Login failures, invalid/expired sessions, permission denied |
| RecordError | Record not found, write validation failures |
| StorageError | Upload/download failures, file not found |
| AnalyticsError | Invalid query, bucket not found |
| ValidationError | Bad input caught client-side before the request is sent. Has details: string[]. |
| NetworkError | No network, server unreachable, request timed out. Has cause. |
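NetworkError is the one class that's usually safe to retry. A generic retry wrapper with exponential backoff — our own utility sketch, not part of the SDK; the predicate lets you plug in `err instanceof NetworkError` without the helper depending on the SDK:

```typescript
// Retry an async operation up to `attempts` times when the thrown error
// matches `isRetryable`, waiting baseDelayMs * 2^i between tries.
async function withRetry<T>(
  fn: () => Promise<T>,
  isRetryable: (err: unknown) => boolean,
  attempts: number = 3,
  baseDelayMs: number = 200,
): Promise<T> {
  for (let i = 0; ; i++) {
    try {
      return await fn();
    } catch (err) {
      if (i >= attempts - 1 || !isRetryable(err)) throw err; // give up
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i)); // backoff
    }
  }
}

// Usage sketch:
// const found = await withRetry(() => posts.get(id), (e) => e instanceof NetworkError);
```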
TypeScript
The SDK is written in TypeScript. Type your records for full autocomplete and safety:
interface Order {
customerId: string;
amount: number;
status: 'pending' | 'paid' | 'refunded';
items: { sku: string; qty: number }[];
}
const orders = db.records<Order>('orders');
const order = await orders.create({
customerId: 'cust_123',
amount: 49.99,
status: 'pending',
items: [{ sku: 'SHOE-42', qty: 1 }],
});
// order.status → 'pending' | 'paid' | 'refunded' ✓
// order.id, order.createdAt, order.updatedAt are automatically added ✓
const result = await orders.query({
filters: [{ field: 'status', op: '==', value: 'paid' }],
});
// result.records → (Order & RecordResult)[] ✓
Key exported types:
import type {
HydrousConfig,
RecordData, RecordResult, QueryOptions, QueryFilter, QueryResult,
DateRange, Granularity, Aggregation, SortOrder,
AnalyticsQuery, AnalyticsResult, AnalyticsFilter,
UserRecord, AuthResult, Session,
UploadOptions, UploadResult,
ListOptions, ListResult, FileMetadata, SignedUrlResult,
StorageStats,
} from 'hydrousdb';
Security
- Never commit API keys. Use environment variables (process.env.…).
- Never expose keys in the browser. For client-side apps, proxy all SDK calls through your own backend.
- Keys travel in headers only — the SDK enforces this. They never appear in URLs, query strings, access logs, or browser history.
- Files are private by default. isPublic defaults to false. Use getSignedUrl() for temporary external sharing.
- Use scoped storage (storage.scope('prefix/')) to isolate files per user and prevent path-traversal bugs.
Framework Examples
Next.js (App Router)
// lib/db.ts
import { createClient } from 'hydrousdb';
export const db = createClient({
authKey: process.env.HYDROUS_AUTH_KEY!,
bucketSecurityKey: process.env.HYDROUS_BUCKET_KEY!,
storageKeys: { main: process.env.HYDROUS_STORAGE_KEY! },
});
// app/api/posts/route.ts
import { db } from '@/lib/db';
export async function GET() {
const { records } = await db.records('posts').query({
filters: [{ field: 'status', op: '==', value: 'published' }],
orderBy: 'createdAt',
order: 'desc',
limit: 10,
});
return Response.json(records);
}
React (client-side via your own API)
// Always go through your backend — never put keys in React components
const res = await fetch('/api/posts');
const data = await res.json();
Vue / Nuxt
// composables/useDb.ts — server context only: VITE_-prefixed env vars are bundled into the client, so keep real keys out of browser builds
import { createClient } from 'hydrousdb';
export const db = createClient({
authKey: import.meta.env.VITE_HYDROUS_AUTH_KEY,
bucketSecurityKey: import.meta.env.VITE_HYDROUS_BUCKET_KEY,
storageKeys: { main: import.meta.env.VITE_HYDROUS_STORAGE_KEY },
});
React Native
import { createClient } from 'hydrousdb';
// Works out of the box — uses the global fetch available in React Native
const db = createClient({
authKey: HYDROUS_AUTH_KEY,
bucketSecurityKey: HYDROUS_BUCKET_KEY,
storageKeys: { main: HYDROUS_STORAGE_KEY },
});
API Reference
createClient(config)
| Field | Type | Required | Description |
|---|---|---|---|
| authKey | string | ✓ | hk_auth_… — for auth routes |
| bucketSecurityKey | string | ✓ | hk_bucket_… — for records & analytics |
| storageKeys | { [name]: string } | ✓ | One or more ssk_… keys for file storage |
| baseUrl | string | — | Override the API endpoint (no trailing slash) |
db.records<T>(bucket) — all methods
| Method | Returns | Description |
|---|---|---|
| create(data, opts?) | T & RecordResult | Create a record. opts.queryableFields enables filtering. opts.customRecordId enables upsert. |
| get(id) | T & RecordResult | Fetch a record by ID. |
| patch(id, data, opts?) | { id, updatedAt? } | Partial update. opts.merge for deep merge. opts.trackHistory to save a version. |
| delete(id) | void | Permanently delete a record. |
| exists(id) | boolean | Lightweight existence check (HEAD request). |
| query(opts?) | QueryResult<T> | Filter, sort, paginate. Supports dateRange. |
| getAll(opts?) | (T & RecordResult)[] | Fetch all records (no filter support — use query for filters). |
| batchCreate(items, opts?) | { results, errors, successful, failed } | Up to 500 records at once. |
| batchUpdate(updates, userEmail?) | { successful, failed } | Up to 500 records at once. |
| batchDelete(ids, userEmail?) | { successful, failed } | Up to 500 records at once. |
| getHistory(id) | RecordHistoryEntry[] | List all saved versions. |
| getVersion(id, generation) | T & RecordResult | Fetch a specific past version. |
QueryOptions fields:
| Field | Type | Description |
|---|---|---|
| filters | QueryFilter[] | Array of { field, op, value } |
| fields | string | Comma-separated list of fields to return |
| orderBy | string | Field to sort by |
| order | 'asc' \| 'desc' | Sort direction |
| limit | number | Max records to return |
| offset | number | Skip N records |
| startAfter | string | Cursor from nextCursor for next-page pagination |
| startAt | string | Cursor — include the record at this cursor |
| endAt | string | Cursor — stop at this cursor |
| dateRange | DateRange | { start?, end? } in Unix ms |
| timeScope | string | Prefix-based time filter: _day_YYMMDD, _month_YYMM, or _year_YY |
db.auth() — all methods
| Method | Description |
|---|---|
| signup(opts) | Register + create session. Extra fields on opts are stored on the user. |
| login(opts) | Authenticate + create session. |
| logout({ sessionId, allDevices? }) | Revoke one session or all sessions. |
| validateSession(sessionId) | Check if a session is active; returns current user. |
| refreshSession(refreshToken) | Get a new session from a refresh token. |
| getUser(userId) | Fetch a user by ID. |
| updateUser(opts) | Update profile fields. |
| changePassword(opts) | Authenticated password change (requires current password). |
| requestPasswordReset(email) | Send reset email (always succeeds to prevent enumeration). |
| confirmPasswordReset(token, newPw) | Apply new password from reset token. |
| requestEmailVerification(userId) | Send verification email. |
| confirmEmailVerification(token) | Mark email verified from token. |
| listUsers(opts) | Paginated user list. Admin only. |
| lockAccount(opts) | Lock a user account. Admin only. |
| unlockAccount(sessionId, userId) | Unlock a user account. Admin only. |
| hardDeleteUser(sessionId, userId) | Permanent delete. Admin only. |
| bulkDeleteUsers(opts) | Delete many users at once (soft or hard). Admin only. |
db.storage(keyName) — all methods
| Method | Description |
|---|---|
| upload(data, path, opts?) | Server-buffered upload (up to 500 MB). |
| uploadRaw(data, path, opts?) | Upload a JS object or string as a file. |
| getUploadUrl(opts) | Step 1 of signed-URL upload — get a GCS PUT URL. |
| uploadToSignedUrl(url, data, mime, onProgress?) | Step 2 — upload directly to GCS (supports progress in browsers). |
| confirmUpload(opts) | Step 3 — register metadata server-side. |
| getBatchUploadUrls(files) | Get signed URLs for up to 50 files at once. |
| batchConfirmUploads(items) | Confirm multiple direct uploads at once. |
| download(path) | Download a private file as ArrayBuffer. |
| batchDownload(paths, concurrency?) | Download up to 20 files at once (base64 content). |
| list(opts?) | List files and folders. Supports prefix, limit, cursor, recursive. |
| getMetadata(path) | File size, MIME type, visibility, URLs. |
| getSignedUrl(path, expiresIn?) | Time-limited share link (default 3600 s). |
| setVisibility(path, isPublic) | Toggle a file between public and private. |
| createFolder(path) | Create an explicit folder marker. |
| move(from, to) | Move (rename) a file. |
| copy(from, to) | Copy a file to a new path. |
| deleteFile(path) | Permanently delete a file. |
| deleteFolder(path) | Recursively delete a folder and all its contents. |
| getStats() | Upload/download/delete counts and total bytes. |
| info() | Server info — no auth required. |
| scope(prefix) | Get a ScopedStorage that auto-prefixes all paths. |
ScopedStorage exposes every method above (except info) plus scope(subPrefix) for nesting deeper.
db.analytics(bucket) — all methods
| Method | Returns | Description |
|---|---|---|
| count(opts?) | { count } | Total record count, optionally within a dateRange. |
| distribution(opts) | DistributionRow[] | Per-value counts for a field. |
| sum(opts) | SumRow[] | Sum a numeric field, optional groupBy. |
| timeSeries(opts?) | TimeSeriesRow[] | Record counts bucketed by granularity. |
| fieldTimeSeries(opts) | FieldTimeSeriesRow[] | Numeric field aggregated over time. |
| topN(opts) | TopNRow[] | Most frequent field values. |
| stats(opts) | FieldStats | min / max / avg / sum / count / stddev. |
| records(opts?) | (T & RecordResult)[] | Filtered records via BigQuery (good for large sets). |
| multiMetric(opts) | MultiMetricResult | Multiple aggregations in one request. |
| storageStats(opts?) | StorageStatsResult | Record count and byte stats for the bucket. |
| crossBucket(opts) | CrossBucketRow[] | Compare a metric across multiple buckets. |
| query(query) | AnalyticsResult<T> | Raw escape hatch for any AnalyticsQuery. |
Contributing
git clone https://github.com/hydrousdb/hydrousdb-js.git
cd hydrousdb-js
npm install
npm test # run tests
npm run build # compile
License
MIT — LICENSE
