# @native-stack/stream-sql (v1.0.0)
Zero-dependency Writable stream that pipes data directly into SQLite.
Built on Node.js 24+ native APIs — no ORMs, no bundlers, no polyfills.
## Why StreamSQL?
Loading data into SQLite from Node.js typically means choosing between ORMs with dozens of transitive dependencies, or hand-rolling transaction logic. StreamSQL gives you a native Writable stream that handles batching, transactions, and backpressure — all in a single file with zero dependencies.
| Feature | StreamSQL | Typical Alternative |
|---|---|---|
| Dependencies | 0 | 10–200+ |
| Database | Node.js built-in node:sqlite | External install required |
| Batching | Atomic BEGIN/COMMIT blocks | Manual transaction management |
| Backpressure | Native Writable stream | DIY or library-specific |
| Schema | Auto-inferred from first object | Manual DDL required |
| Performance | 100k rows in < 2s | Varies |
One import. One pipe. Maximum velocity.
## Install

```sh
npm install @native-stack/stream-sql
```

Requires Node.js 24 or higher. The `node:sqlite` module is a built-in available from Node.js 24+.
## Quick Start

```js
import { StreamSQL } from '@native-stack/stream-sql';
import { Readable } from 'node:stream';
import { pipeline } from 'node:stream/promises';

// Your data — an array, a generator, a file stream, anything.
const users = Array.from({ length: 50_000 }, (_, i) => ({
  id: i + 1,
  name: `user_${i + 1}`,
  score: Math.random() * 100,
}));

// Create the stream — table is auto-created from the first object's keys.
const stream = new StreamSQL({
  dbPath: 'analytics.db',
  tableName: 'users',
  batchSize: 5000,
});

// Track progress via the 'batch' event.
stream.on('batch', (total) => console.log(`${total} rows committed`));

// Pipe and wait.
await pipeline(Readable.from(users), stream);
// → 5000 rows committed
// → 10000 rows committed
// → ...
// → 50000 rows committed
```

## Integration with @native-stack/nano
The 'batch' event makes it trivial to track pipeline progress with a persistent state machine:
```js
import { StreamSQL } from '@native-stack/stream-sql';
import { Nano } from '@native-stack/nano';
import { Readable } from 'node:stream';
import { pipeline } from 'node:stream/promises';

const tracker = new Nano({ name: 'etl-pipeline' });

const stream = new StreamSQL({
  dbPath: 'warehouse.db',
  tableName: 'events',
  batchSize: 1000,
});

stream.on('batch', (total) => {
  tracker.transition('ingesting', { rowsCommitted: total });
});

await pipeline(Readable.from(data), stream);
tracker.transition('complete', { totalRows: stream.totalInserted });
```

## API Reference
### `new StreamSQL(options)`
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| dbPath | string | required | Path to the SQLite database file |
| tableName | string | required | Target table name |
| columns | ColumnDefinition[] | auto-inferred | Explicit column definitions for the table |
| batchSize | number | 1000 | Rows to buffer before flushing in a transaction |
### ColumnDefinition

```ts
{ name: string; type: 'TEXT' | 'INTEGER' | 'REAL' | 'BLOB' | 'ANY' }
```

If `columns` is omitted, the schema is auto-inferred from the first object:

- `number` (integer) → `INTEGER`
- `number` (float) → `REAL`
- everything else → `TEXT`
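The rule above can be sketched as a small helper (illustrative only; `inferSqliteType` is not part of the package's public API):

```javascript
// Map a JavaScript value to the SQLite column type described above.
function inferSqliteType(value) {
  if (typeof value === 'number') {
    return Number.isInteger(value) ? 'INTEGER' : 'REAL';
  }
  return 'TEXT'; // everything else
}

console.log(inferSqliteType(42));      // → INTEGER
console.log(inferSqliteType(3.14));    // → REAL
console.log(inferSqliteType('hello')); // → TEXT
```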
### Instance Members
| Member | Type | Description |
|--------|------|-------------|
| .totalInserted | number | Cumulative count of successfully committed rows |
| .close() | void | Closes the database connection (idempotent) |
### Events
| Event | Payload | Description |
|-------|---------|-------------|
| 'batch' | number | Emitted after each committed transaction with the cumulative total |
| 'error' | Error | Emitted when a batch insert fails (transaction is rolled back) |
| 'finish' | — | Standard Writable event — all data has been flushed |
## Performance

The included benchmark pipes 100,000 objects through StreamSQL into a temp database:

```sh
node --test
```

Typical results on an Apple M-series chip:

```
→ 100k rows inserted in ~800ms (~125,000 rows/sec)
```

The speed comes from three things:
- Synchronous native SQLite — no FFI, no WASM, no IPC overhead
- Prepared statements — the INSERT is compiled once and reused
- Atomic batching — thousands of inserts execute inside a single transaction
## Requirements

- Node.js ≥ 24.0.0 — uses the native `node:sqlite` and `node:stream` modules
- Zero runtime dependencies — nothing to install, audit, or update
## License
MIT — © 2026 Native Stack
