@synode/adapter-bigquery
v5.0.24
BigQuery adapter for Synode. Import datasets from BigQuery tables and export generated events with automatic batching.
Install
npm install @synode/adapter-bigquery @google-cloud/bigquery

Requires @synode/core and @google-cloud/bigquery as peer dependencies.
Export (Writing Events)
import { generate } from '@synode/core';
import { BigQueryAdapter } from '@synode/adapter-bigquery';
const adapter = new BigQueryAdapter({
projectId: 'my-project',
datasetId: 'analytics',
tableId: 'events',
batchSize: 200,
});
await generate(journey, { users: 100, adapter });
await adapter.close();

Export Options
| Option | Type | Default | Description |
| ----------------- | ------------------------------------------------------------- | ------- | -------------------------------------- |
| projectId | string | -- | GCP project ID |
| datasetId | string | -- | BigQuery dataset ID |
| tableId | string | -- | BigQuery table ID |
| batchSize | number | 100 | Events to buffer before inserting |
| flushInterval | number | 5000 | Interval (ms) to flush partial batches |
| autoCreateTable | boolean | false | Create the table if it doesn't exist |
| transform | (event: Record<string, unknown>) => Record<string, unknown> | -- | Transform each event row before insert |
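The batchSize and flushInterval semantics above can be pictured as a small buffered writer: rows accumulate until the batch is full, a timer flushes partial batches, and closing flushes whatever remains. This is an illustrative sketch only, not the adapter's actual implementation; the BatchBuffer class and insert callback are hypothetical.

```typescript
// Illustrative sketch of batchSize / flushInterval behavior (hypothetical class,
// not part of @synode/adapter-bigquery).
type Row = Record<string, unknown>;

class BatchBuffer {
  private buffer: Row[] = [];
  private timer?: ReturnType<typeof setInterval>;

  constructor(
    private batchSize: number,
    private insert: (rows: Row[]) => void, // e.g. a BigQuery insertAll call
    flushInterval?: number,
  ) {
    // Periodically flush partial batches so rows don't sit in memory forever.
    if (flushInterval) {
      this.timer = setInterval(() => this.flush(), flushInterval);
    }
  }

  add(row: Row): void {
    this.buffer.push(row);
    // Flush as soon as the buffer reaches batchSize.
    if (this.buffer.length >= this.batchSize) this.flush();
  }

  flush(): void {
    if (this.buffer.length === 0) return;
    this.insert(this.buffer);
    this.buffer = [];
  }

  close(): void {
    if (this.timer) clearInterval(this.timer);
    this.flush(); // final partial batch, mirroring adapter.close()
  }
}
```

With batchSize 2, adding five rows produces two full batches of two and, on close, one final batch of one.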
Events are serialized as flat rows with id, user_id, session_id, name, timestamp (ISO 8601), and payload (JSON string). Use transform to reshape rows for your schema.
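For example, a transform might lift fields out of the payload JSON string into top-level columns and drop the raw string. The sketch below assumes the flat row shape described above; the product_id and price columns are hypothetical and would depend on your own table schema.

```typescript
// Hypothetical transform: parse the payload JSON and promote two of its
// fields to top-level columns. Column names are illustrative only.
const transform = (event: Record<string, unknown>): Record<string, unknown> => {
  const payload = JSON.parse(String(event.payload ?? '{}')) as Record<string, unknown>;
  const { payload: _raw, ...rest } = event; // drop the raw JSON column
  return { ...rest, product_id: payload.product_id ?? null, price: payload.price ?? null };
};

// A row shaped like the serialized events described above:
const row = transform({
  id: 'e1',
  user_id: 'u1',
  session_id: 's1',
  name: 'purchase',
  timestamp: '2026-01-01T00:00:00.000Z',
  payload: JSON.stringify({ product_id: 'p42', price: 19.99 }),
});
```

Pass a function like this as the transform option if your table schema differs from the default flat layout.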
Import (Loading Datasets)
import { generate } from '@synode/core';
import { BigQueryAdapter, importFromBigQuery } from '@synode/adapter-bigquery';
const products = await importFromBigQuery({
projectId: 'my-project',
datasetId: 'ecommerce',
tableId: 'products',
id: 'products',
name: 'Products',
where: 'active = true',
limit: 1000,
});
await generate(journey, {
users: 100,
preloadedDatasets: [products],
adapter: new BigQueryAdapter({ ... }),
});

Import Options
| Option | Type | Default | Description |
| ----------- | -------- | --------- | ------------------------------------------ |
| projectId | string | -- | GCP project ID |
| datasetId | string | -- | BigQuery dataset ID |
| tableId | string | -- | BigQuery table ID to read from |
| where | string | -- | SQL WHERE clause to filter rows |
| limit | number | unlimited | Maximum number of rows to import |
| id | string | -- | Synode dataset ID for the imported dataset |
| name | string | -- | Synode dataset name |
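Conceptually, the where and limit options compose into a filtered SELECT over the fully qualified table. The sketch below is a hypothetical illustration of that composition, not the SQL the adapter actually issues.

```typescript
// Illustrative only: how projectId/datasetId/tableId, where, and limit
// might combine into a query. The adapter's real SQL may differ.
interface QueryOpts {
  projectId: string;
  datasetId: string;
  tableId: string;
  where?: string;
  limit?: number;
}

function buildQuery(opts: QueryOpts): string {
  // BigQuery fully qualified table names are backtick-quoted.
  let sql = `SELECT * FROM \`${opts.projectId}.${opts.datasetId}.${opts.tableId}\``;
  if (opts.where) sql += ` WHERE ${opts.where}`;
  if (opts.limit !== undefined) sql += ` LIMIT ${opts.limit}`;
  return sql;
}
```

For the products example above, this would yield a query filtered to active rows and capped at 1000 results.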
License
Proprietary -- see LICENSE for details.
Copyright © 2026 Digitl Cloud GmbH
