node-es-transformer
Stream-based library for ingesting and transforming large data files (NDJSON/CSV/Parquet/Arrow IPC) into Elasticsearch indices.
Quick Start
npm install node-es-transformer

const transformer = require('node-es-transformer');
// Ingest a large JSON file
await transformer({
fileName: 'data.json',
targetIndexName: 'my-index',
mappings: {
properties: {
'@timestamp': { type: 'date' },
'message': { type: 'text' }
}
}
});
See Usage for more examples.
Why Use This?
If you need to ingest large NDJSON/CSV/Parquet/Arrow IPC files (gigabytes in size) into Elasticsearch without running out of memory, this is the tool for you. Other solutions often exhaust the JS heap, hammer ES with too many requests, time out, or try to do everything in a single bulk request.
When to use this:
- Large file ingestion (20-30 GB tested)
- Custom JavaScript transformations
- Cross-version migration (ES 8.x → 9.x)
- Developer-friendly Node.js workflow
When to use alternatives:
- Logstash - Enterprise ingestion pipelines
- Filebeat - Log file shipping
- Elastic Agent - Modern unified agent
- Elasticsearch Transforms - Built-in data transformation
Table of Contents
- Features
- Quick Start
- Installation
- Version Compatibility
- Usage
- API Reference
- Documentation
- Contributing
- License
Features
- Streaming and buffering: Files are read using streams and Elasticsearch ingestion uses buffered bulk indexing. Handles very large files (20-30 GB tested) without running out of memory.
- High throughput: Up to 20k documents/second on a single machine (2.9 GHz Intel Core i7, 16GB RAM, SSD), depending on document size. See PERFORMANCE.md for benchmarks and tuning guidance.
- Wildcard support: Ingest multiple files matching a pattern (e.g., logs/*.json).
- Flexible sources: Read from files, Elasticsearch indices, or Node.js streams.
- Reindexing with transforms: Fetch documents from existing indices and transform them using JavaScript.
- Document splitting: Transform one source document into multiple target documents (e.g., tweets → hashtags).
- Cross-version support: Seamlessly reindex between Elasticsearch 8.x and 9.x.
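The document-splitting feature works through the transform callback: returning an array makes the library index each element as its own target document. A minimal sketch, assuming hypothetical tweet documents with id and text fields:

```javascript
// Hypothetical transform callback: split one tweet document into one
// target document per hashtag. Returning an array from transform()
// indexes each element as a separate document.
function splitByHashtag(doc) {
  const hashtags = (doc.text.match(/#\w+/g) || []).map(tag => tag.slice(1));
  return hashtags.map(hashtag => ({ tweet_id: doc.id, hashtag }));
}
```

Pass it as the transform option; a tweet with no hashtags yields an empty array and contributes no target documents.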
Version Compatibility
| node-es-transformer | Elasticsearch Client | Elasticsearch Server | Node.js |
| ----------------------- | -------------------- | -------------------- | ------- |
| 1.0.0+ | 8.x and 9.x | 8.x and 9.x | 22+ |
| 1.0.0-beta7 and earlier | 8.x | 8.x | 18-20 |
Multi-Version Support: Starting with v1.0.0, the library supports both Elasticsearch 8.x and 9.x through automatic version detection and client aliasing. This enables seamless reindexing between major versions (e.g., migrating from ES 8.x to 9.x). All functionality is tested in CI against multiple ES versions including cross-version reindexing scenarios.
Upgrading? See MIGRATION.md for upgrade guidance from beta versions to v1.0.0.
Installation
npm install node-es-transformer
# or
yarn add node-es-transformer
Usage
Read NDJSON from a file
const transformer = require('node-es-transformer');
transformer({
fileName: 'filename.json',
targetIndexName: 'my-index',
mappings: {
properties: {
'@timestamp': {
type: 'date'
},
'first_name': {
type: 'keyword'
},
'last_name': {
type: 'keyword'
},
'full_name': {
type: 'keyword'
}
}
},
transform(line) {
return {
...line,
full_name: `${line.first_name} ${line.last_name}`
}
}
});
Read CSV from a file
const transformer = require('node-es-transformer');
transformer({
fileName: 'users.csv',
sourceFormat: 'csv',
targetIndexName: 'users-index',
mappings: {
properties: {
id: { type: 'integer' },
first_name: { type: 'keyword' },
last_name: { type: 'keyword' },
full_name: { type: 'keyword' },
},
},
transform(row) {
return {
...row,
id: Number(row.id),
full_name: `${row.first_name} ${row.last_name}`,
};
},
});
Read Parquet from a file
const transformer = require('node-es-transformer');
transformer({
fileName: 'users.parquet',
sourceFormat: 'parquet',
targetIndexName: 'users-index',
mappings: {
properties: {
id: { type: 'integer' },
first_name: { type: 'keyword' },
last_name: { type: 'keyword' },
full_name: { type: 'keyword' },
},
},
transform(row) {
return {
...row,
id: Number(row.id),
full_name: `${row.first_name} ${row.last_name}`,
};
},
});
Read Arrow IPC from a file
const transformer = require('node-es-transformer');
transformer({
fileName: 'users.arrow',
sourceFormat: 'arrow',
targetIndexName: 'users-index',
mappings: {
properties: {
id: { type: 'integer' },
first_name: { type: 'keyword' },
last_name: { type: 'keyword' },
},
},
transform(row) {
return {
...row,
id: Number(row.id),
};
},
});
Infer mappings from CSV sample
const transformer = require('node-es-transformer');
transformer({
fileName: 'users.csv',
sourceFormat: 'csv',
targetIndexName: 'users-index',
inferMappings: true,
inferMappingsOptions: {
sampleBytes: 200000,
lines_to_sample: 2000,
},
});
Read from another index
const transformer = require('node-es-transformer');
transformer({
sourceIndexName: 'my-source-index',
targetIndexName: 'my-target-index',
// optional, if you skip mappings, they will be fetched from the source index.
mappings: {
properties: {
'@timestamp': {
type: 'date'
},
'first_name': {
type: 'keyword'
},
'last_name': {
type: 'keyword'
},
'full_name': {
type: 'keyword'
}
}
},
transform(doc) {
return {
...doc,
full_name: `${doc.first_name} ${doc.last_name}`
}
}
});
Reindex from Elasticsearch 8.x to 9.x
The library automatically detects the Elasticsearch version and uses the appropriate client. This enables seamless reindexing between major versions:
const transformer = require('node-es-transformer');
// Auto-detection (recommended)
transformer({
sourceClientConfig: {
node: 'https://es8-cluster.example.com:9200',
auth: { apiKey: 'your-es8-api-key' },
},
targetClientConfig: {
node: 'https://es9-cluster.example.com:9200',
auth: { apiKey: 'your-es9-api-key' },
},
sourceIndexName: 'my-source-index',
targetIndexName: 'my-target-index',
transform(doc) {
// Optional transformation during reindexing
return doc;
},
});
// Explicit version specification (if auto-detection fails)
transformer({
sourceClientConfig: {
/* ... */
},
targetClientConfig: {
/* ... */
},
sourceClientVersion: 8, // Force ES 8.x client
targetClientVersion: 9, // Force ES 9.x client
sourceIndexName: 'my-source-index',
targetIndexName: 'my-target-index',
});
// Using pre-instantiated clients (advanced)
const { Client: Client8 } = require('es8');
const { Client: Client9 } = require('es9');
const sourceClient = new Client8({
node: 'https://es8-cluster.example.com:9200',
});
const targetClient = new Client9({
node: 'https://es9-cluster.example.com:9200',
});
transformer({
sourceClient,
targetClient,
sourceIndexName: 'my-source-index',
targetIndexName: 'my-target-index',
});
Note: To use pre-instantiated clients with different ES versions, install both client versions:
npm install es9@npm:@elastic/elasticsearch@^9.2.0
npm install es8@npm:@elastic/elasticsearch@^8.17.0
API Reference
Configuration Options
All options are passed to the main transformer() function.
Required Options
- targetIndexName (string): The target Elasticsearch index where documents will be indexed.
Source Options
Choose one of these sources:
- fileName (string): Source filename to ingest. Supports wildcards (e.g., logs/*.json, data/*.csv, data/*.parquet, data/*.arrow).
- sourceIndexName (string): Source Elasticsearch index to reindex from.
- stream (Readable): Node.js readable stream to ingest from.
- sourceFormat ('ndjson' | 'csv' | 'parquet' | 'arrow'): Format for file/stream sources. Default: 'ndjson'.
  - arrow expects Arrow IPC file/stream payloads.
  - parquet stream sources are currently buffered in memory before row iteration (file sources remain streaming by row cursor).
  - parquet supports ZSTD-compressed files when running on Node.js 22+ (uses the built-in zlib zstd implementation).
  - parquet INT64 values are normalized for JSON: safe-range values become numbers, larger values become strings.
- csvOptions (object): CSV parser options (delimiter, quote, columns, etc.) used when sourceFormat: 'csv'.
Client Configuration
- sourceClient (Client): Pre-instantiated Elasticsearch client for source operations. If provided, sourceClientConfig is ignored.
- targetClient (Client): Pre-instantiated Elasticsearch client for target operations. If not provided, uses sourceClient or creates from config.
- sourceClientConfig (object): Elasticsearch client configuration for source. Default: { node: 'http://localhost:9200' }. Ignored if sourceClient is provided.
- targetClientConfig (object): Elasticsearch client configuration for target. If not provided, uses sourceClientConfig. Ignored if targetClient is provided.
- sourceClientVersion (8 | 9): Force specific ES client version for source. Auto-detected if not specified.
- targetClientVersion (8 | 9): Force specific ES client version for target. Auto-detected if not specified.
Index Configuration
- mappings (object): Elasticsearch document mappings for target index. If reindexing and not provided, mappings are copied from the source index.
- mappingsOverride (boolean): When reindexing, apply mappings on top of source index mappings. Default: false.
- inferMappings (boolean): Infer mappings for fileName sources via /_text_structure/find_structure. Supported for sourceFormat: 'ndjson' and sourceFormat: 'csv' only. Ignored when mappings is provided. If inference returns ingest_pipeline, it is created as <targetIndexName>-inferred-pipeline and applied as the index default pipeline (unless pipeline is explicitly set). Default: false.
- inferMappingsOptions (object): Options for /_text_structure/find_structure (for example sampleBytes, lines_to_sample, delimiter, quote, has_header_row, timeout).
- deleteIndex (boolean): Delete target index if it exists before starting. Default: false.
- indexMappingTotalFieldsLimit (number): Field limit for target index (index.mapping.total_fields.limit setting).
- pipeline (string): Elasticsearch ingest pipeline name to use during indexing.
When inferMappings is enabled, the target cluster must allow /_text_structure/find_structure (cluster privilege: monitor_text_structure). If inferred ingest pipelines are used, the target cluster must also allow creating ingest pipelines (_ingest/pipeline).
Performance Options
- bufferSize (number): Buffer size threshold in KBytes for bulk indexing. Default: 5120 (5 MB).
- searchSize (number): Number of documents to fetch per search request when reindexing. Default: 100.
- populatedFields (boolean): Detect which fields are actually populated in documents. Useful for optimizing indices with many mapped but unused fields. Default: false.
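To illustrate how these knobs combine, a hedged tuning sketch for reindexing many small documents. The index names and numbers are illustrative starting points to measure against your own cluster, not recommended values:

```javascript
// Illustrative performance tuning for a reindex of many small documents.
// Index names and values are hypothetical starting points, not defaults.
const tuning = {
  sourceIndexName: 'logs-small-docs',
  targetIndexName: 'logs-small-docs-v2',
  bufferSize: 10240,     // bulk buffer threshold in KB (library default: 5120)
  searchSize: 1000,      // docs fetched per search request (library default: 100)
  populatedFields: true, // detect which mapped fields actually hold data
};
```

Larger searchSize and bufferSize values generally mean fewer, bigger requests; whether that helps depends on document size and cluster capacity (see PERFORMANCE.md).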
Processing Options
- transform (function): Callback to transform documents. Signature: (doc, context?) => doc | doc[] | null | undefined.
  - Return a transformed document
  - Return an array of documents to split one source into multiple targets
  - Return null/undefined to skip the document
- query (object): Elasticsearch DSL query to filter source documents.
- splitRegex (RegExp): Line split regex for file/stream sources when sourceFormat is 'ndjson'. Default: /\n/.
- skipHeader (boolean): Header skipping for file/stream sources.
  - NDJSON: skips the first non-empty line
  - CSV: skips the first data line only when csvOptions.columns does not consume headers
  - Parquet/Arrow: ignored
  - Default: false
  - Applies only to fileName/stream sources
- verbose (boolean): Enable verbose logging and progress bars when using the built-in logger. Default: true.
- logger (object): Optional custom Pino-compatible logger. If omitted, the library creates an internal Pino logger (name: node-es-transformer) and uses LOG_LEVEL (if set) or info/error based on verbose.
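The skip behaviour of the transform callback can be sketched as a small filtering transform; the field names here are assumptions:

```javascript
// Hypothetical filtering transform: drop documents missing @timestamp by
// returning null (which skips them), and normalize the level field on the rest.
function keepTimestamped(doc) {
  if (!doc['@timestamp']) return null; // null/undefined skips the document
  return { ...doc, level: (doc.level || 'info').toLowerCase() };
}
```

Passed as the transform option, this drops untimestamped documents before they ever reach the bulk buffer, unlike a post-ingest cleanup.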
Return Value
The transformer() function returns a Promise that resolves to an object with:
- events (EventEmitter): Event emitter for monitoring progress. Events:
  - 'queued': Document added to queue
  - 'indexed': Document successfully indexed
  - 'complete': All documents processed
  - 'error': Error occurred
const pino = require('pino');
const logger = pino({ name: 'my-app', level: process.env.LOG_LEVEL || 'info' });
const result = await transformer({
/* options */
});
result.events.on('complete', () => {
logger.info('Ingestion complete');
});
result.events.on('error', err => {
logger.error({ err }, 'Ingestion failed');
});
TypeScript Support
Full TypeScript definitions are included. Import types for type-safe configuration:
import transformer, { TransformerOptions } from 'node-es-transformer';
const options: TransformerOptions = {
fileName: 'data.json',
targetIndexName: 'my-index',
};
See examples/typescript-example.ts for more examples.
Documentation
- README.md - Getting started and API reference (you are here)
- examples/ - Practical code samples for common use cases
- VERSIONING.md - API stability guarantees and versioning policy
- PERFORMANCE.md - Benchmarks, tuning, and optimization guide
- TESTING.md - Test coverage, approach, and how to run tests
- DEPENDENCIES.md - Dependency audit and update tracking
- MIGRATION.md - Upgrading from beta to v1.0.0
- CONTRIBUTING.md - How to contribute (open an issue first!)
- DEVELOPMENT.md - Development setup and testing
- RELEASE.md - Complete release process and troubleshooting
- SECURITY.md - Security policy and vulnerability reporting
Error Handling
Always handle errors when using the library:
const pino = require('pino');
const logger = pino({ name: 'my-app', level: process.env.LOG_LEVEL || 'info' });
transformer({
/* options */
})
.then(() => logger.info('Success'))
.catch(err => logger.error({ err }, 'Transformer failed'));
// Or with async/await
try {
await transformer({
/* options */
});
logger.info('Success');
} catch (err) {
logger.error({ err }, 'Transformer failed');
}
More Examples
See the examples/ directory for practical code samples covering:
- Basic file ingestion
- Reindexing with transformations
- Cross-version migration (ES 8.x → 9.x)
- Document splitting
- Wildcard file processing
- Stream-based ingestion
Contributing
Contributions are welcome! Before starting work on a PR, please open an issue to discuss your proposed changes.
- CONTRIBUTING.md - Contribution guidelines and PR process
- DEVELOPMENT.md - Development setup, testing, and release process
- SECURITY.md - Security policy and vulnerability reporting
Support
This is a single-person best-effort project. While I aim to address issues and maintain the library, response times may vary. See VERSIONING.md for details on API stability and support expectations.
Getting help:
- Check the documentation first
- Review examples/ for practical code samples
- Search existing issues
- Open a new issue with details (version, steps to reproduce, expected vs actual behavior)
