TunnelHub SDK
TunnelHub SDK is a robust TypeScript library for implementing automated integrations with logging and tracing capabilities on the TunnelHub platform.
Overview
The SDK provides a foundation for building reliable data integrations with features including:
- Delta detection and synchronization
- Batch processing capabilities
- Comprehensive logging and monitoring
- Error handling and retry mechanisms
- AWS infrastructure integration
Installation
```bash
npm install @tunnelhub/sdk
```
Core Concepts
Integration Flows
The SDK provides three main types of integration flows:
Delta Integration Flow (DeltaIntegrationFlow)
- Tracks changes between source and target systems
- Handles insert, update, and delete operations
- Maintains state between executions
- Best for synchronization scenarios
Batch Delta Integration Flow (BatchDeltaIntegrationFlow)
- Extends Delta Integration Flow
- Processes items in configurable batch sizes
- Optimized for large datasets
- Supports bulk operations
No Delta Integration Flow (NoDeltaIntegrationFlow)
- Simple one-way data transfer
- No state tracking between executions
- Available in single and batch variants
- Ideal for one-time or streaming data transfers
Key Components
Automation Logs
The SDK automatically handles logging through the AutomationLog class, capturing:
- Operation type (INSERT, UPDATE, DELETE, NODELTA, TRANSFER)
- Status (SUCCESS, FAIL, NEUTRAL)
- Timestamps
- Detailed error messages
- Operation payloads
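For illustration only, a captured log entry can be thought of as carrying roughly the fields below. This is a conceptual shape based on the list above, not the SDK's actual AutomationLog type:
```typescript
// Conceptual shape of an automation log entry (illustrative only; the SDK's
// actual AutomationLog class may differ).
type OperationType = 'INSERT' | 'UPDATE' | 'DELETE' | 'NODELTA' | 'TRANSFER';
type OperationStatus = 'SUCCESS' | 'FAIL' | 'NEUTRAL';

interface AutomationLogEntry {
  operation: OperationType; // what was attempted
  status: OperationStatus;  // how it ended
  timestamp: string;        // when the operation ran
  message?: string;         // detailed error message on failure
  payload?: unknown;        // operation payload for troubleshooting
}
```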
Delta Tracking
AutomationDelta manages state between executions:
- Stores previous execution state
- Enables change detection
- Persists in both DynamoDB and S3
- Handles large datasets efficiently
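Conceptually, change detection compares the previous snapshot with the current source data by a key field and classifies each record. The sketch below illustrates the idea only; it is not the SDK's internal implementation, and the `id` key is an assumption:
```typescript
// Conceptual illustration of delta detection (not the SDK's internal code):
// compare the previous snapshot with the current data by a key field and
// classify each record as insert, update, or delete.
type Keyed = { id: string };

function detectChanges<T extends Keyed>(previous: T[], current: T[]) {
  const prevById = new Map(previous.map(item => [item.id, item]));
  const currById = new Map(current.map(item => [item.id, item]));

  const inserts = current.filter(item => !prevById.has(item.id));
  const deletes = previous.filter(item => !currById.has(item.id));
  const updates = current.filter(item => {
    const old = prevById.get(item.id);
    return old !== undefined && JSON.stringify(old) !== JSON.stringify(item);
  });

  return { inserts, updates, deletes };
}
```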
Parameters & Configuration
- Environment-specific configurations
- Custom parameter management
- Secure credential storage
- System connection details
Logging Strategy Optimization
The SDK v3.0 introduces an intelligent logging strategy that automatically chooses between real-time (DynamoDB) and batch (Firehose) logging based on volume and integration characteristics.
How It Works
The SDK analyzes each integration execution and selects the optimal logging strategy:
- Real-time (DynamoDB): For small volumes or fast-processing integrations
- Batch (Firehose): For large volumes or slower integrations
Key Benefits
- Cost Optimization: Significant infrastructure cost reduction
- Performance: Eliminates 70s overhead for fast integrations
- Reliability: 99.9% log durability
- Intelligent: Adapts to integration characteristics
Configuration Options
```typescript
class MyIntegration extends DeltaIntegrationFlow<MyType> {
  // Customize thresholds
  protected realtimeLoggingThreshold: number = 100; // Base threshold
  protected maxRealtimeItems: number = 1000; // Safety limit
  protected highNoDeltaRatioThreshold: number = 0.8; // Fast detection threshold

  // Override for known fast integrations
  protected isKnownFastIntegration(): boolean {
    return this.executionEvent.metadata?.some(m => m.key === 'processing_speed' && m.value === 'fast') ?? false;
  }
}
```
Decision Logic
- ≤100 items → Always real-time
- >1000 items → Always batch (DynamoDB protection)
- 100-500 items with ≥80% noDelta ratio → Real-time (fast integration detected)
- Other cases → Batch (safety and cost optimization)
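Expressed as plain code, the rules above look roughly like this. The sketch is illustrative only (not the SDK's internal implementation); the default values mirror the configuration example, and the 500-item fast-path bound comes from the documented 100–500 range:
```typescript
// Illustrative sketch of the documented decision rules (not the SDK's internal code).
function chooseLoggingStrategy(
  totalItems: number,
  noDeltaItems: number,
  realtimeThreshold = 100, // realtimeLoggingThreshold
  maxRealtime = 1000,      // maxRealtimeItems
  highNoDeltaRatio = 0.8,  // highNoDeltaRatioThreshold
): 'realtime' | 'batch' {
  if (totalItems <= realtimeThreshold) return 'realtime'; // always real-time
  if (totalItems > maxRealtime) return 'batch';           // DynamoDB protection
  const noDeltaRatio = totalItems > 0 ? noDeltaItems / totalItems : 0;
  if (totalItems <= 500 && noDeltaRatio >= highNoDeltaRatio) {
    return 'realtime'; // fast integration detected
  }
  return 'batch'; // safety and cost optimization
}
```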
Monitoring
The SDK logs strategy decisions for monitoring:
```
[LogStrategy] Realtime for fast integration: 450 items (noDelta: 90%)
[LogStrategy] Batch mode: 1500 items > max 1000
[LogStrategy] Batch mode: 600 items (noDelta: 15%)
```
Performance Impact
- Optimized resource usage: Intelligent strategy selection
- Performance benefit: Eliminates 70s overhead for fast high-volume integrations
- Significant time savings: Reduced execution time for applicable integrations
For detailed examples and advanced customization, see SMART_LOGGING_STRATEGY_EXAMPLES.md.
Usage Examples
Creating a Delta Integration
```typescript
class MyDeltaIntegration extends DeltaIntegrationFlow<MyDataType> {
  protected async loadSourceSystemData(): Promise<MyDataType[]> {
    // Implement source system data loading
  }

  protected async loadTargetSystemData(): Promise<MyDataType[]> {
    // Implement target system data loading
  }

  protected async insertAction(item: MyDataType): Promise<IntegrationMessageReturn> {
    // Implement insert logic
  }

  protected async updateAction(oldItem: MyDataType, newItem: MyDataType): Promise<IntegrationMessageReturn> {
    // Implement update logic
  }

  protected async deleteAction(item: MyDataType): Promise<IntegrationMessageReturn> {
    // Implement delete logic
  }

  protected defineMetadata(): Array<Metadata> {
    return [
      {
        fieldName: 'id',
        fieldLabel: 'ID',
        fieldType: 'TEXT',
      },
      // Add more metadata fields
    ];
  }
}
```
Creating a Batch Integration
```typescript
class MyBatchIntegration extends BatchDeltaIntegrationFlow<MyDataType> {
  constructor(event: ProcessorPayload, context?: LambdaContext) {
    super(event, ['id'], ['name', 'value'], context);
    this.packageSize = 100; // Set batch size
  }

  protected async batchInsertAction(items: MyDataType[]): Promise<IntegrationMessageReturnBatch[]> {
    // Implement batch insert logic
  }

  protected async batchUpdateAction(
    oldItems: MyDataType[],
    newItems: MyDataType[],
  ): Promise<IntegrationMessageReturnBatch[]> {
    // Implement batch update logic
  }

  protected async batchDeleteAction(items: MyDataType[]): Promise<IntegrationMessageReturnBatch[]> {
    // Implement batch delete logic
  }
}
```
Additional Features
Data Store
- Conversion table management
- Sequence generation
- System configuration storage
API Integration
- Built-in middleware for API Gateway
- Request/response logging
- Error handling
AWS Integration
- DynamoDB integration
- S3 storage
- Lambda context handling
- ECS task tracking
Best Practices
Error Handling
- Implement proper try-catch blocks
- Use appropriate error statuses
- Provide meaningful error messages
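For example, external calls inside an action can be wrapped so failures surface with context. The helper below is a generic sketch, not part of the SDK, and the `erpClient` call in the usage comment is hypothetical:
```typescript
// Illustrative helper (not part of the SDK): wrap an external call so failures
// are rethrown with a meaningful, contextual message.
async function withErrorContext<T>(label: string, fn: () => Promise<T>): Promise<T> {
  try {
    return await fn();
  } catch (error) {
    const reason = error instanceof Error ? error.message : String(error);
    throw new Error(`${label} failed: ${reason}`);
  }
}

// Usage inside an action, e.g.:
// await withErrorContext('Create customer in ERP', () => erpClient.createCustomer(item));
```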
Performance
- Use batch operations for large datasets
- Implement proper indexing in database queries
- Monitor memory usage
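When implementing batch actions, splitting work into fixed-size chunks keeps memory use predictable. This is a generic sketch, not an SDK utility; the size of 100 simply mirrors the packageSize used in the batch example above:
```typescript
// Generic chunking helper (illustrative, not part of the SDK): split a large
// dataset into fixed-size batches before processing.
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// e.g. process 100 items at a time:
// for (const batch of chunk(allItems, 100)) { await processBatch(batch); }
```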
Logging
- Include relevant context in logs
- Use appropriate log levels
- Monitor execution statistics
Delta Management
- Choose appropriate key fields
- Implement proper change detection
- Handle data consistency
Environment Variables
The following environment variables are required only when using the API integration features:
- TH_TENANT_ID: Tenant identifier
- TH_ENVIRONMENT_ID: Environment identifier
- TH_EXPIRATION_PERIOD: Log retention period in days
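A small sanity check at startup can catch missing values early. This is an illustrative sketch using the variable names above, not an SDK feature:
```typescript
// Illustrative startup check for the API integration environment variables.
const requiredEnvVars = ['TH_TENANT_ID', 'TH_ENVIRONMENT_ID', 'TH_EXPIRATION_PERIOD'] as const;

for (const name of requiredEnvVars) {
  if (!process.env[name]) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
}
```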
Contributing
Please read our Contributing Guidelines for details on submitting pull requests.
License
This project is licensed under the MIT License - see the LICENSE file for details.
Support
For detailed documentation, visit https://docs.tunnelhub.io
For support queries, contact [email protected]
Developing with AI Assistant (Skills)
This SDK includes a specialized skill for Claude AI assistants to help you build integrations more efficiently.
What is a Skill?
A skill is a specialized guide that provides AI assistants with detailed knowledge about working with TunnelHub SDK. It includes:
- Complete integration flow patterns (Delta, Batch Delta, No Delta, No Delta Batch)
- Parameter management strategies
- Data operations (DataStore, Sequences, System)
- Utility functions and best practices
- Testing patterns and examples
Using the Skill
Skill Location: The skill is located at llms/skills/tunnelhub/ in this repository.
Installing with Claude AI:
Claude AI Desktop:
```bash
# Copy skill to Claude configuration directory
cp -r llms/skills/tunnelhub ~/.config/opencode/skills/
```
Claude AI CLI: Ensure the llms/skills/tunnelhub/ directory is accessible to your AI assistant.
Automatic Activation: When working on integrations using TunnelHub SDK, your AI assistant will automatically load this skill when it detects relevant context.
What the Skill Helps With
- ✅ Choosing the right integration flow for your use case
- ✅ Implementing integration methods correctly
- ✅ Managing parameters (static and dynamic)
- ✅ Working with data stores, conversion tables, and sequences
- ✅ Configuring logging strategies (realtime vs batch)
- ✅ Testing integrations with proper patterns
- ✅ Debugging common issues
Example Interactions
Once the skill is configured, you can ask your AI assistant:
- "Create a Delta integration flow to sync users from Salesforce to an ERP"
- "Help me add batch processing to my existing integration"
- "How do I implement parameter persistence in my integration?"
- "Write tests for my integration following SDK patterns"
Skill Structure
```
llms/skills/tunnelhub/
├── SKILL.md                       # Main guide
└── references/
    ├── integration-flows.md       # All flow types with examples
    ├── logging-strategy.md        # Smart logging configuration
    ├── parameters-management.md   # Parameter usage patterns
    ├── data-operations.md         # DataStore, Sequences, System
    ├── utilities.md               # Promise utilities, validations
    ├── http-interceptor.md        # HTTP logging setup
    ├── system-configuration.md    # External system types
    └── testing-patterns.md        # Testing patterns and examples
```
Manual Skill Loading
If the skill doesn't load automatically, you can explicitly reference it:
When using Claude AI, mention: "Use the tunnelhub skill to help me with this integration"
Updating the Skill
When updating to a new version of the SDK, also update your local skill directory:
```bash
# From SDK repository
cp -r llms/skills/tunnelhub ~/.config/opencode/skills/
```