@tunnelhub/sdk v3.3.0

TunnelHub SDK

TunnelHub SDK is a TypeScript library for building automated integrations on the TunnelHub platform, with built-in logging and tracing capabilities.

Overview

The SDK provides a foundation for building reliable data integrations with features including:

  • Delta detection and synchronization
  • Batch processing capabilities
  • Comprehensive logging and monitoring
  • Error handling and retry mechanisms
  • AWS infrastructure integration

Installation

npm install @tunnelhub/sdk
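
The flow classes introduced below are presumably exported from the package root; the import shown here is an assumption, so verify the names against the package's published type definitions:

// Assumed export names; verify against the package's published typings
import {
  DeltaIntegrationFlow,
  BatchDeltaIntegrationFlow,
  NoDeltaIntegrationFlow,
} from '@tunnelhub/sdk';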

Core Concepts

Integration Flows

The SDK provides three main types of integration flows:

  1. Delta Integration Flow (DeltaIntegrationFlow)

    • Tracks changes between source and target systems
    • Handles insert, update, and delete operations
    • Maintains state between executions
    • Best for synchronization scenarios
  2. Batch Delta Integration Flow (BatchDeltaIntegrationFlow)

    • Extends Delta Integration Flow
    • Processes items in configurable batch sizes
    • Optimized for large datasets
    • Supports bulk operations
  3. No Delta Integration Flow (NoDeltaIntegrationFlow)

    • Simple one-way data transfer
    • No state tracking between executions
    • Available in single and batch variants
    • Ideal for one-time or streaming data transfers (see the sketch below)
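
As a rough sketch of the simplest variant (the sendData method name and the { message } return shape are hypothetical placeholders, not confirmed SDK members; check the package's typings for the actual abstract methods a subclass must implement):

// Hypothetical sketch of a one-way transfer with no state tracking.
// loadSourceSystemData mirrors the Delta example later in this README;
// sendData is a placeholder name and { message } an assumed return shape.
interface UserRow {
  id: string;
  name: string;
}

class MyTransfer extends NoDeltaIntegrationFlow<UserRow> {
  protected async loadSourceSystemData(): Promise<UserRow[]> {
    // Inline data stands in for a real source-system call
    return [{ id: '1', name: 'example' }];
  }

  protected async sendData(items: UserRow[]): Promise<IntegrationMessageReturn> {
    console.log(`Transferring ${items.length} items`); // stand-in for the target call
    return { message: 'Transfer complete' };
  }
}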

Key Components

Automation Logs

The SDK automatically handles logging through the AutomationLog class, capturing:

  • Operation type (INSERT, UPDATE, DELETE, NODELTA, TRANSFER)
  • Status (SUCCESS, FAIL, NEUTRAL)
  • Timestamps
  • Detailed error messages
  • Operation payloads

Delta Tracking

AutomationDelta manages state between executions:

  • Stores previous execution state
  • Enables change detection
  • Persists in both DynamoDB and S3
  • Handles large datasets efficiently

Parameters & Configuration

  • Environment-specific configurations
  • Custom parameter management
  • Secure credential storage
  • System connection details

Logging Strategy Optimization

The SDK v3.0 introduces an intelligent logging strategy that automatically chooses between real-time (DynamoDB) and batch (Firehose) logging based on volume and integration characteristics.

How It Works

The SDK analyzes each integration execution and selects the optimal logging strategy:

  • Real-time (DynamoDB): For small volumes or fast-processing integrations
  • Batch (Firehose): For large volumes or slower integrations

Key Benefits

  • Cost Optimization: Significant infrastructure cost reduction
  • Performance: Eliminates 70s overhead for fast integrations
  • Reliability: 99.9% log durability
  • Intelligent: Adapts to integration characteristics

Configuration Options

class MyIntegration extends DeltaIntegrationFlow<MyType> {
  // Customize thresholds
  protected realtimeLoggingThreshold: number = 100; // Base threshold
  protected maxRealtimeItems: number = 1000; // Safety limit
  protected highNoDeltaRatioThreshold: number = 0.8; // Fast detection threshold

  // Override for known fast integrations
  protected isKnownFastIntegration(): boolean {
    // `?? false` keeps the return type boolean when metadata is undefined
    return this.executionEvent.metadata?.some(m => m.key === 'processing_speed' && m.value === 'fast') ?? false;
  }
}

Decision Logic

  1. ≤100 items → Always real-time
  2. >1000 items → Always batch (DynamoDB protection)
  3. 100-500 items with ≥80% noDelta ratio → Real-time (fast integration detected)
  4. Other cases → Batch (safety and cost optimization)
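
For illustration only, the decision rules above can be paraphrased as a small pure function. This mirrors the documented behavior and the configurable thresholds shown earlier; it is not the SDK's internal code:

// Illustrative paraphrase of the documented decision rules, not SDK internals
type LogStrategy = 'realtime' | 'batch';

function chooseLogStrategy(itemCount: number, noDeltaRatio: number): LogStrategy {
  if (itemCount <= 100) return 'realtime'; // rule 1: small volumes
  if (itemCount > 1000) return 'batch'; // rule 2: DynamoDB protection
  if (itemCount <= 500 && noDeltaRatio >= 0.8) return 'realtime'; // rule 3: fast integration detected
  return 'batch'; // rule 4: safety and cost default
}

chooseLogStrategy(450, 0.9); // 'realtime', matches the first monitoring example below
chooseLogStrategy(1500, 0.5); // 'batch', over the 1000-item limit
chooseLogStrategy(600, 0.15); // 'batch', low noDelta ratio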

Monitoring

The SDK logs strategy decisions for monitoring:

[LogStrategy] Realtime for fast integration: 450 items (noDelta: 90%)
[LogStrategy] Batch mode: 1500 items > max 1000
[LogStrategy] Batch mode: 600 items (noDelta: 15%)

Performance Impact

  • Optimized resource usage: Intelligent strategy selection
  • Performance benefit: Eliminates 70s overhead for fast high-volume integrations
  • Significant time savings: Reduced execution time for applicable integrations

For detailed examples and advanced customization, see SMART_LOGGING_STRATEGY_EXAMPLES.md.

Usage Examples

Creating a Delta Integration

class MyDeltaIntegration extends DeltaIntegrationFlow<MyDataType> {
  protected async loadSourceSystemData(): Promise<MyDataType[]> {
    // Implement source system data loading
  }

  protected async loadTargetSystemData(): Promise<MyDataType[]> {
    // Implement target system data loading
  }

  protected async insertAction(item: MyDataType): Promise<IntegrationMessageReturn> {
    // Implement insert logic
  }

  protected async updateAction(oldItem: MyDataType, newItem: MyDataType): Promise<IntegrationMessageReturn> {
    // Implement update logic
  }

  protected async deleteAction(item: MyDataType): Promise<IntegrationMessageReturn> {
    // Implement delete logic
  }

  protected defineMetadata(): Array<Metadata> {
    return [
      {
        fieldName: 'id',
        fieldLabel: 'ID',
        fieldType: 'TEXT',
      },
      // Add more metadata fields
    ];
  }
}

Creating a Batch Integration

class MyBatchIntegration extends BatchDeltaIntegrationFlow<MyDataType> {
  constructor(event: ProcessorPayload, context?: LambdaContext) {
    super(event, ['id'], ['name', 'value'], context);
    this.packageSize = 100; // Set batch size
  }

  protected async batchInsertAction(items: MyDataType[]): Promise<IntegrationMessageReturnBatch[]> {
    // Implement batch insert logic
  }

  protected async batchUpdateAction(
    oldItems: MyDataType[],
    newItems: MyDataType[],
  ): Promise<IntegrationMessageReturnBatch[]> {
    // Implement batch update logic
  }

  protected async batchDeleteAction(items: MyDataType[]): Promise<IntegrationMessageReturnBatch[]> {
    // Implement batch delete logic
  }
}
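
A flow like this is typically run from a Lambda handler. The constructor call matches the example above, but execute() is a hypothetical entry-point name; check the SDK's typings for the actual method that starts a run:

// Sketch of wiring the flow into a Lambda handler.
// execute() is a placeholder name for the SDK's run method and may differ.
export const handler = async (event: ProcessorPayload, context: LambdaContext) => {
  const integration = new MyBatchIntegration(event, context);
  return integration.execute();
};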

Additional Features

Data Store

  • Conversion table management
  • Sequence generation
  • System configuration storage

API Integration

  • Built-in middleware for API Gateway
  • Request/response logging
  • Error handling

AWS Integration

  • DynamoDB integration
  • S3 storage
  • Lambda context handling
  • ECS task tracking

Best Practices

  1. Error Handling (see the sketch after this list)

    • Implement proper try-catch blocks
    • Use appropriate error statuses
    • Provide meaningful error messages
  2. Performance

    • Use batch operations for large datasets
    • Implement proper indexing in database queries
    • Monitor memory usage
  3. Logging

    • Include relevant context in logs
    • Use appropriate log levels
    • Monitor execution statistics
  4. Delta Management

    • Choose appropriate key fields
    • Implement proper change detection
    • Handle data consistency
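
As a sketch of the error-handling guidance in item 1 (the targetClient member and the { message } return shape are illustrative assumptions, not confirmed SDK contracts):

class MyDeltaIntegrationWithHandling extends DeltaIntegrationFlow<MyDataType> {
  // Hypothetical target-system client, shown only to make the sketch concrete
  private targetClient = { create: async (_item: MyDataType) => {} };

  protected async insertAction(item: MyDataType): Promise<IntegrationMessageReturn> {
    try {
      await this.targetClient.create(item);
      return { message: `Inserted ${item.id}` }; // assumed return shape
    } catch (err) {
      // Rethrow with context so the automation log carries a meaningful message
      throw new Error(`Insert failed for id=${item.id}: ${(err as Error).message}`);
    }
  }

  // ...remaining required methods as in the Usage Examples above
}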

Environment Variables

The following environment variables are required only when using the API integration features:

  • TH_TENANT_ID: Tenant identifier
  • TH_ENVIRONMENT_ID: Environment identifier
  • TH_EXPIRATION_PERIOD: Log retention period in days
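
If you use those features, a fail-fast check at startup (plain Node, nothing SDK-specific) makes a missing variable obvious:

// Fail fast when a required variable is missing; only relevant when the
// API integration features described above are in use
for (const name of ['TH_TENANT_ID', 'TH_ENVIRONMENT_ID', 'TH_EXPIRATION_PERIOD']) {
  if (!process.env[name]) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
}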

Contributing

Please read our Contributing Guidelines for details on submitting pull requests.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Support

For detailed documentation, visit https://docs.tunnelhub.io

For support queries, contact [email protected]

Developing with an AI Assistant (Skills)

This SDK includes a specialized skill for Claude AI assistants to help you build integrations more efficiently.

What is a Skill?

A skill is a specialized guide that provides AI assistants with detailed knowledge about working with TunnelHub SDK. It includes:

  • Complete integration flow patterns (Delta, Batch Delta, No Delta, No Delta Batch)
  • Parameter management strategies
  • Data operations (DataStore, Sequences, System)
  • Utility functions and best practices
  • Testing patterns and examples

Using the Skill

Skill Location: The skill is located at llms/skills/tunnelhub/ in this repository.

Installing with Claude AI:

  1. Claude AI Desktop:

    # Copy skill to Claude configuration directory
    cp -r llms/skills/tunnelhub ~/.config/opencode/skills/
  2. Claude AI CLI: Ensure the llms/skills/tunnelhub/ directory is accessible to your AI assistant.

Automatic Activation: When working on integrations using TunnelHub SDK, your AI assistant will automatically load this skill when it detects relevant context.

What the Skill Helps With

  • ✅ Choosing the right integration flow for your use case
  • ✅ Implementing integration methods correctly
  • ✅ Managing parameters (static and dynamic)
  • ✅ Working with data stores, conversion tables, and sequences
  • ✅ Configuring logging strategies (realtime vs batch)
  • ✅ Testing integrations with proper patterns
  • ✅ Debugging common issues

Example Interactions

Once the skill is configured, you can ask your AI assistant:

  • "Create a Delta integration flow to sync users from Salesforce to an ERP"
  • "Help me add batch processing to my existing integration"
  • "How do I implement parameter persistence in my integration?"
  • "Write tests for my integration following SDK patterns"

Skill Structure

llms/skills/tunnelhub/
├── SKILL.md                        # Main guide
└── references/
    ├── integration-flows.md        # All flow types with examples
    ├── logging-strategy.md         # Smart logging configuration
    ├── parameters-management.md    # Parameter usage patterns
    ├── data-operations.md          # DataStore, Sequences, System
    ├── utilities.md                # Promise utilities, validations
    ├── http-interceptor.md         # HTTP logging setup
    ├── system-configuration.md     # External system types
    └── testing-patterns.md         # Testing patterns and examples

Manual Skill Loading

If the skill doesn't load automatically, you can explicitly reference it:

When using Claude AI, mention: "Use the tunnelhub skill to help me with this integration"

Updating the Skill

When updating to a new version of the SDK, also update your local skill directory:

# From SDK repository
cp -r llms/skills/tunnelhub ~/.config/opencode/skills/