flowise-to-langchain v1.0.4
A TypeScript tool to convert Flowise flows to LangChain code
Flowise to LangChain Converter
A comprehensive, production-ready TypeScript system that converts Flowise visual workflows and AgentFlow multiagent teams into executable LangChain code with full observability, monitoring, and deployment support.
🚀 Features
Core Conversion
- Complete JSON to TypeScript/Python Conversion: Transform Flowise JSON exports into production-ready LangChain code
- 100+ Node Types Supported: Comprehensive coverage of LLMs, Agents, Tools, Vector Stores, Embeddings, Document Loaders, Text Splitters, Streaming, RAG Chains, Function Calling, Multiagent Workflows, ConversationalRetrievalQAChain, and DocumentStoreVS
- AgentFlow Support: Full support for Flowise AgentFlow v1.0 and v2.0 multiagent team JSON structures
- Type Safety: Generate fully typed TypeScript with ES2022 and ESM modules
- Python Code Generation: Support for Python LangChain output with async/await patterns 🆕
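At its core, the conversion walks the exported flow graph and emits code for each node after its dependencies. The sketch below illustrates that ordering step with Kahn's topological sort over a Flowise-style `{ nodes, edges }` export; it models only the fields needed here and is not the package's actual implementation.

```typescript
// Order nodes so every node appears after the nodes it depends on.
// The { nodes, edges } shape mirrors a Flowise JSON export, simplified.
interface FlowNode { id: string; type?: string; }
interface FlowEdge { source: string; target: string; }
interface FlowExport { nodes: FlowNode[]; edges: FlowEdge[]; }

export function emissionOrder(flow: FlowExport): string[] {
  const indegree = new Map<string, number>(flow.nodes.map(n => [n.id, 0]));
  const children = new Map<string, string[]>();
  for (const e of flow.edges) {
    indegree.set(e.target, (indegree.get(e.target) ?? 0) + 1);
    children.set(e.source, [...(children.get(e.source) ?? []), e.target]);
  }
  // start from the roots (no incoming edges), then release children
  const queue = flow.nodes.filter(n => indegree.get(n.id) === 0).map(n => n.id);
  const order: string[] = [];
  while (queue.length > 0) {
    const id = queue.shift()!;
    order.push(id);
    for (const child of children.get(id) ?? []) {
      const d = indegree.get(child)! - 1;
      indegree.set(child, d);
      if (d === 0) queue.push(child);
    }
  }
  if (order.length !== flow.nodes.length) throw new Error('flow contains a cycle');
  return order;
}
```

With this ordering, a prompt node's code is always emitted before the chain that consumes it.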
Production Features
- Production Deployment: Complete Docker, Kubernetes, and cloud deployment support 🆕
- Error Handling & Logging: Comprehensive error handling with structured logging and recovery strategies 🆕
- Performance Monitoring: Real-time performance tracking with bottleneck analysis and optimization suggestions 🆕
- Observability: Langfuse integration for prompt versioning, execution tracing, and evaluation metrics 🆕
- Integration Tests: Comprehensive test suites for all components and workflows 🆕
Enhanced Interface
- Interactive Web Interface: Modern Next.js 14 frontend with multiagent visualization
- Real-time Monitoring: WebSocket-based execution monitoring with performance dashboards 🆕
- Enhanced CLI: Full-featured CLI with convert, validate, test, watch, batch, and run commands
- Package Distribution: Complete release packaging with validation and security checksums 🆕
📦 Installation
Quick Start with Release Package 🆕
# Download and extract the latest release
curl -L https://github.com/yourusername/flowise-to-langchain/releases/latest/download/flowise-converter-release.zip -o release.zip
unzip release.zip
cd flowise-converter-release
# Run setup script
./setup.sh
# Or use Node.js setup
npm run setup
Development Installation
# Clone and install locally
git clone https://github.com/yourusername/flowise-to-langchain.git
cd flowise-to-langchain
npm install
npm run build
# Use the CLI
npm run start -- --help
Production Deployment 🆕
# Docker deployment (recommended)
./scripts/deploy-production.sh production docker
# PM2 deployment
./scripts/deploy-production.sh production pm2
# Cloud deployment
./scripts/deploy-production.sh production aws
🎯 Quick Start
Basic Usage
# Convert Flowise export to LangChain TypeScript
npm run start -- convert my-flow.json output
# Convert to Python
npm run start -- convert my-flow.json output --target python
# Convert with Langfuse observability
npm run start -- convert flow.json output --with-langfuse
# Convert with performance monitoring
npm run start -- convert flow.json output --with-monitoring
# Validate a Flowise file
npm run start -- validate my-flow.json
# Test converted code
npm run start -- test flow.json --out ./output
# Watch for changes and auto-convert
npm run start -- watch ./flows --output ./output --recursive
# Batch convert multiple files
npm run start -- batch ./flows --output ./output --parallel 4
# Convert and run a workflow
npm run start -- run my-flow.json "What is the weather today?"
Standalone Converters 🆕
For cases where the TypeScript build has issues, use the standalone converter scripts:
# Convert traditional chatflows
node convert-all-chatflows.cjs
# Convert multi-agent workflows (AgentFlows)
node convert-all-agentflows.cjs
# These scripts will:
# - Automatically detect all JSON files in the chatflows directory
# - Convert them to TypeScript files
# - Handle unsupported node types gracefully
# - Generate compilable TypeScript code
Production Usage 🆕
# Deploy to production with monitoring
./scripts/deploy-production.sh production docker
# Monitor system health
curl http://localhost:8080/health
# View performance metrics
curl http://localhost:8080/metrics
# Access monitoring dashboard
open http://localhost:3001/monitoring
🔍 Observability & Monitoring 🆕
Langfuse Integration
The system includes comprehensive Langfuse integration for prompt versioning, execution tracing, and evaluation:
# Environment setup
export LANGFUSE_PUBLIC_KEY=your_public_key
export LANGFUSE_SECRET_KEY=your_secret_key
# Convert with Langfuse tracking
npm run start -- convert flow.json output --with-langfuse
# View traces in the web interface
open http://localhost:3000/langfuse
Langfuse Features:
- Prompt Versioning: Track and compare different prompt versions
- Execution Tracing: Detailed traces of LLM calls and agent execution
- Evaluation Metrics: Automated evaluation of accuracy, performance, and completeness
- Cost Tracking: Monitor token usage and API costs
- A/B Testing: Compare different workflow versions
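To make the cost-tracking idea concrete, the sketch below aggregates per-call token usage into an estimated spend. The model name and the per-1K-token rates are placeholders, not real provider pricing; substitute current rates for the models you use.

```typescript
// Estimate spend from recorded token usage. PRICE_PER_1K holds
// hypothetical rates (USD per 1,000 tokens) -- replace with real pricing.
interface TokenUsage { model: string; promptTokens: number; completionTokens: number; }

const PRICE_PER_1K: Record<string, { prompt: number; completion: number }> = {
  'example-model': { prompt: 0.001, completion: 0.002 }, // hypothetical rates
};

export function estimateCost(calls: TokenUsage[]): number {
  let total = 0;
  for (const c of calls) {
    const rate = PRICE_PER_1K[c.model];
    if (!rate) continue; // unknown models contribute nothing to the estimate
    total += (c.promptTokens / 1000) * rate.prompt
           + (c.completionTokens / 1000) * rate.completion;
  }
  return total;
}
```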
Performance Monitoring
Real-time performance monitoring with automatic optimization:
// Performance tracking in generated code
import { performanceMonitor } from './monitoring/performance-monitor';

export async function runFlow(input: string): Promise<string> {
  const tracker = performanceMonitor.track('workflow.execution');
  try {
    const result = await agent.call({ input });
    tracker.measure('execution_time');
    return result;
  } finally {
    const snapshot = tracker.end();
    performanceMonitor.recordSnapshot(snapshot);
  }
}
Monitoring Features:
- Real-time Metrics: Track execution time, memory usage, and token consumption
- Bottleneck Analysis: Automatically identify performance bottlenecks
- Optimization Suggestions: AI-powered optimization recommendations
- Resource Monitoring: CPU, memory, and network usage tracking
- Alert System: Configurable alerts for performance issues
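A minimal stand-in for the `track` / `measure` / `end` API used in the generated code above can clarify how these pieces fit together. This is an illustrative sketch, not the package's actual monitoring module; the `bottlenecks` helper shows the bottleneck-analysis idea in its simplest form.

```typescript
// Simplified performance monitor: trackers record named marks and a total
// duration; snapshots over a threshold are flagged as bottlenecks.
interface Snapshot { label: string; marks: Record<string, number>; totalMs: number; }

export class PerformanceMonitor {
  private snapshots: Snapshot[] = [];

  track(label: string) {
    const start = Date.now();
    const marks: Record<string, number> = {};
    return {
      // record elapsed time so far under a named mark
      measure: (name: string) => { marks[name] = Date.now() - start; },
      // finalize this tracked span and return its snapshot
      end: (): Snapshot => ({ label, marks, totalMs: Date.now() - start }),
    };
  }

  recordSnapshot(s: Snapshot) { this.snapshots.push(s); }

  // bottleneck analysis, simplified: everything at or over the threshold
  bottlenecks(thresholdMs: number): Snapshot[] {
    return this.snapshots.filter(s => s.totalMs >= thresholdMs);
  }
}
```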
Error Handling & Recovery 🆕
Production-ready error handling with automatic recovery:
// Enhanced error handling in generated code
import { withErrorHandling, ApplicationError } from './utils/error-handler';

export async function runFlow(input: string): Promise<string> {
  return withErrorHandling(async () => {
    const result = await agent.call({ input });
    return result;
  }, 'workflow-execution');
}
Error Handling Features:
- Structured Errors: Categorized error types with recovery strategies
- Automatic Retry: Configurable retry logic with exponential backoff
- Circuit Breaker: Prevents cascade failures in production
- Error Aggregation: Centralized error reporting and analysis
- Recovery Strategies: Fallback mechanisms for different error types
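The retry-with-exponential-backoff strategy listed above can be sketched in a few lines. This is a generic illustration, not the package's `error-handler` module: delays double with each attempt (`base * 2^n`), and in production you would add jitter and retry only errors known to be transient.

```typescript
// Retry an async operation, doubling the delay between attempts.
export async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // exponential backoff: base, 2*base, 4*base, ...
        await new Promise(res => setTimeout(res, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError; // all attempts exhausted
}
```

A circuit breaker builds on the same shape by tracking consecutive failures and short-circuiting calls once a threshold is crossed.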
🌐 Web Interface
Enhanced Frontend Features 🆕
The Next.js 14 frontend includes comprehensive new features:
# Start the enhanced frontend
cd tester-bot-frontend
npm run dev
# Access different interfaces
open http://localhost:3000 # Main interface
open http://localhost:3000/monitoring # Performance dashboard
open http://localhost:3000/langfuse # Observability dashboard
open http://localhost:3000/simple # Simple assistant UI
New Frontend Features:
- Performance Dashboard: Real-time metrics and optimization insights
- Langfuse Integration: Comprehensive observability interface
- Error Management: Error tracking and recovery interface
- Simple Assistant UI: Lightweight interface using assistant-ui.com
- Real-time Updates: WebSocket-based live updates
- Enhanced Testing: Comprehensive test execution interface
Frontend Configuration 🆕
# Environment variables for enhanced features
NEXT_PUBLIC_LANGFUSE_PUBLIC_KEY=your_public_key
NEXT_PUBLIC_MONITORING_ENABLED=true
NEXT_PUBLIC_WEBSOCKET_URL=ws://localhost:8081
NEXT_PUBLIC_PERFORMANCE_TRACKING=true
🤖 AgentFlow Multiagent Support
Real-world Multiagent Examples 🆕
The system now includes 7 complete multiagent workflow examples:
# Customer Support Team
npm run start -- convert examples/multiagent/customer-support/flowise/customer-support-agentflow.json output
# Content Creation Team
npm run start -- convert examples/multiagent/content-creation/flowise/content-creation-agentflow.json output
# Financial Analysis Team
npm run start -- convert examples/multiagent/financial-analysis/flowise/financial-analysis-agentflow.json output
# Software Development Team
npm run start -- convert examples/multiagent/software-development/flowise/software-development-agentflow.json output
# Healthcare Team
npm run start -- convert examples/multiagent/healthcare/flowise/healthcare-agentflow.json output
# E-commerce Team
npm run start -- convert examples/multiagent/e-commerce/flowise/e-commerce-agentflow.json output
# Legal Team
npm run start -- convert examples/multiagent/legal/flowise/legal-agentflow.json output
Each example includes:
- Complete AgentFlow JSON: Ready-to-use multiagent configurations
- Generated TypeScript: Production-ready LangChain code
- Generated Python: Async Python implementations
- Documentation: Usage instructions and customization guides
- Test Cases: Comprehensive test suites
Advanced Multiagent Features 🆕
// Generated multiagent code with monitoring
import { SupervisorAgent, WorkerAgent } from './agents';
import { performanceMonitor } from './monitoring';

export class CustomerSupportTeam {
  private supervisor: SupervisorAgent;
  private workers: WorkerAgent[];

  async initialize() {
    const tracker = performanceMonitor.track('team.initialization');
    this.supervisor = new SupervisorAgent({
      name: 'Support Manager',
      workers: ['classifier', 'responder', 'escalator'],
      monitoring: true
    });
    this.workers = [
      new WorkerAgent({ name: 'classifier', specialization: 'inquiry_classification' }),
      new WorkerAgent({ name: 'responder', specialization: 'response_generation' }),
      new WorkerAgent({ name: 'escalator', specialization: 'issue_escalation' })
    ];
    tracker.end();
  }

  async processInquiry(inquiry: string): Promise<string> {
    return this.supervisor.delegate(inquiry);
  }
}
🛠️ Node Support
Language Models (8+ Converters)
- ✅ OpenAI: OpenAI GPT models with monitoring
- ✅ ChatOpenAI: OpenAI Chat models (GPT-3.5, GPT-4)
- ✅ Anthropic: Claude models with performance tracking
- ✅ Azure OpenAI: Azure-hosted OpenAI models
- ✅ Cohere: Cohere Command models
- ✅ Hugging Face: Hugging Face Hub models
- ✅ Ollama: Local Ollama models
- ✅ Replicate: Replicate-hosted models
Chains & Retrieval (10+ Converters)
- ✅ ConversationChain: Basic conversation with memory
- ✅ ConversationalRetrievalQAChain: RAG with conversation memory 🆕
- ✅ RetrievalQAChain: Question answering over documents
- ✅ APIChain: API interaction chains
- ✅ SQLDatabaseChain: SQL database queries
- ✅ VectorDBQAChain: Vector database QA
- ✅ RefineDocumentsChain: Iterative document refinement
- ✅ MapReduceDocumentsChain: Parallel document processing
- ✅ StuffDocumentsChain: Simple document concatenation
- ✅ LLMChain: Basic LLM chain with prompts
Vector Stores (15+ Converters)
- ✅ Pinecone: Pinecone vector database
- ✅ Weaviate: Weaviate vector store
- ✅ Qdrant: Qdrant vector database
- ✅ Milvus: Milvus vector store
- ✅ Chroma: Chroma embedding database
- ✅ Faiss: Facebook AI Similarity Search
- ✅ SupabaseVectorStore: Supabase pgvector
- ✅ Redis: Redis vector search
- ✅ Elasticsearch: Elasticsearch dense vectors
- ✅ PGVector: PostgreSQL pgvector
- ✅ DocumentStoreVS: Flowise Document Store 🆕
- ✅ MemoryVectorStore: In-memory vector store
- ✅ Vectara: Vectara managed service
- ✅ Zep: Zep long-term memory
- ✅ SingleStore: SingleStore vector functions
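All of these stores answer the same query: which stored vectors are closest to an embedded query? The sketch below shows that search step with cosine similarity, the way an in-memory store does it; real stores add embedding models, metadata filtering, and approximate-nearest-neighbor indexes.

```typescript
// Minimal in-memory similarity search over pre-computed vectors.
interface Doc { text: string; vector: number[]; }

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the k documents most similar to the query vector.
export function similaritySearch(docs: Doc[], query: number[], k: number): Doc[] {
  return [...docs]
    .sort((x, y) => cosine(y.vector, query) - cosine(x.vector, query))
    .slice(0, k);
}
```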
Enhanced Multiagent Workflows (15+ Converters) 🆕
- ✅ SupervisorAgent: Hierarchical team supervision with monitoring
- ✅ WorkerAgent: Specialized workers with performance tracking
- ✅ TeamCoordinator: Advanced coordination patterns
- ✅ MultiAgentExecutor: Parallel/sequential execution
- ✅ StateNode: Shared state management
- ✅ LoopNode: Iterative workflows
- ✅ ConditionNode: AI-powered conditional routing
- ✅ ExecuteFlowNode: Sub-workflow execution
- ✅ HumanInLoopNode: Human intervention points
- ✅ CustomerSupportTeam: Complete customer support workflow
- ✅ ContentCreationTeam: Content creation pipeline
- ✅ FinancialAnalysisTeam: Financial analysis workflow
- ✅ SoftwareDevTeam: Software development pipeline
- ✅ HealthcareTeam: Healthcare workflow
- ✅ EcommerceTeam: E-commerce operations
Python Code Generation 🆕
All converters now support Python output:
# Generate Python code
npm run start -- convert flow.json output --target python
# Python with async monitoring
npm run start -- convert flow.json output --target python --with-monitoring
Python Features:
- Async/Await Patterns: Modern Python async programming
- Type Hints: Full type annotation support
- Error Handling: Exception handling with recovery
- Performance Monitoring: Python-specific performance tracking
- Langfuse Integration: Python SDK integration
🏗️ Production Deployment 🆕
Docker Deployment
# Production deployment with Docker Compose
docker-compose -f docker-compose.prod.yml up -d
# Services included:
# - API service (flowise-converter-api)
# - Frontend service (flowise-converter-frontend)
# - Monitoring service (flowise-converter-monitoring)
# - Redis cache
# - Nginx reverse proxy
# - Prometheus metrics
# - Grafana dashboards
Kubernetes Deployment
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flowise-converter
spec:
  replicas: 3
  selector:
    matchLabels:
      app: flowise-converter
  template:
    metadata:
      labels:
        app: flowise-converter
    spec:
      containers:
        - name: api
          image: flowise-converter:latest
          ports:
            - containerPort: 8080
          env:
            - name: NODE_ENV
              value: "production"
            - name: LANGFUSE_PUBLIC_KEY
              valueFrom:
                secretKeyRef:
                  name: langfuse-secret
                  key: public-key
Health Checks and Monitoring
# Health check endpoints
curl http://localhost:8080/health # API health
curl http://localhost:3000/api/health # Frontend health
curl http://localhost:8081/health # Monitoring health
# Metrics endpoints
curl http://localhost:8080/metrics # Prometheus metrics
curl http://localhost:3001/grafana # Grafana dashboard
📊 Performance & Testing 🆕
Integration Testing
Comprehensive test suites for all components:
# Run all integration tests
npm run test:integration
# Specific test suites
npm run test:integration:conversion # Flow conversion tests
npm run test:integration:multiagent # Multiagent system tests
npm run test:integration:langfuse # Langfuse integration tests
npm run test:integration:errors # Error handling tests
Performance Benchmarking
# Run performance benchmarks
npm run benchmark
# Load testing
npm run test:load
# Memory profiling
npm run profile:memory
Code Quality
# Type checking
npm run type-check
# Linting
npm run lint
# Security scanning
npm run security:scan
# Dependency audit
npm run audit
🔧 Advanced Configuration 🆕
Environment Variables
# Core configuration
NODE_ENV=production
PORT=8080
FRONTEND_PORT=3000
# Observability
LANGFUSE_PUBLIC_KEY=your_public_key
LANGFUSE_SECRET_KEY=your_secret_key
LANGFUSE_HOST=https://cloud.langfuse.com
# Performance monitoring
PERFORMANCE_MONITORING=true
METRICS_INTERVAL=5000
BOTTLENECK_THRESHOLD=2000
# Error handling
ERROR_REPORTING=true
RETRY_ATTEMPTS=3
CIRCUIT_BREAKER_THRESHOLD=5
# Redis caching
REDIS_URL=redis://localhost:6379
CACHE_TTL=3600
# Security
JWT_SECRET=your_jwt_secret
API_KEY=your_api_key
RATE_LIMIT=100
Custom Converter Development 🆕
// Enhanced converter with monitoring
import { BaseConverter } from './registry/base-converter';
import { performanceMonitor } from './monitoring/performance-monitor';

export class CustomNodeConverter extends BaseConverter {
  readonly flowiseType = 'customNode';
  readonly category = 'custom';

  convert(node: IRNode, context: GenerationContext): CodeFragment[] {
    const tracker = performanceMonitor.track('converter.custom_node');
    try {
      const fragments = this.generateCode(node, context);
      tracker.measure('code_generation');
      return fragments;
    } finally {
      const snapshot = tracker.end();
      performanceMonitor.recordSnapshot(snapshot);
    }
  }

  getDependencies(): string[] {
    return ['@langchain/core', './monitoring/performance-monitor'];
  }
}
🚀 Latest Features Summary 🆕
Production Ready
- ✅ Production Deployment: Complete Docker, Kubernetes, and cloud deployment support
- ✅ Error Handling: Comprehensive error handling with recovery strategies
- ✅ Performance Monitoring: Real-time monitoring with optimization suggestions
- ✅ Integration Tests: Comprehensive test suites for all components
- ✅ Security: Production-ready security configuration
Enhanced Observability
- ✅ Langfuse Integration: Complete observability with prompt versioning and evaluation
- ✅ Performance Dashboard: Real-time metrics and bottleneck analysis
- ✅ Error Tracking: Structured error reporting and recovery
- ✅ Resource Monitoring: CPU, memory, and network usage tracking
Expanded Language Support
- ✅ Python Code Generation: Complete Python LangChain code generation
- ✅ TypeScript Enhancement: Enhanced TypeScript with monitoring integration
- ✅ Async Patterns: Modern async/await patterns in both languages
Advanced Multiagent Support
- ✅ 7 Real-world Examples: Complete multiagent workflow examples
- ✅ Enhanced Coordination: Advanced team coordination patterns
- ✅ Performance Optimization: Optimized multiagent execution
Development Experience
- ✅ Enhanced CLI: Improved CLI with new commands and options
- ✅ Web Interface: Enhanced frontend with new dashboards
- ✅ Package Distribution: Complete release packaging system
- ✅ Documentation: Comprehensive deployment and usage documentation
🛣️ Roadmap
Completed ✅
- ✅ Python Code Generation: Complete Python LangChain support
- ✅ Production Deployment: Full deployment infrastructure
- ✅ Performance Monitoring: Real-time monitoring and optimization
- ✅ Langfuse Integration: Complete observability platform
- ✅ Error Handling: Production-ready error management
- ✅ Integration Tests: Comprehensive test coverage
- ✅ Multiagent Examples: 7 real-world workflow examples
In Progress 🔄
- 🔄 API Rate Limiting: Advanced rate limiting and authentication
- 🔄 Caching Layer: Redis-based performance caching
- 🔄 Load Testing: Production load testing suite
Planned 📋
- 📋 Video Tutorials: AgentFlow usage tutorials
- 📋 Template Library: Pre-built AgentFlow templates
- 📋 VS Code Extension: Development environment integration
- 📋 Performance Benchmarks: Large team performance testing
🤝 Contributing
Priority areas for contributions:
- Additional Node Converters: Implement more specialized Flowise node types
- Enhanced Monitoring: Advanced monitoring and alerting features
- Performance Optimization: Code optimization and best practices
- Documentation: More examples and tutorials
- Testing: Expand test coverage and scenarios
📝 License
MIT License - see LICENSE for details.
🙏 Acknowledgments
- Original Creator: Gregg Coppen [email protected]
- Claude Flow Development Team: Enhanced multiagent and production features
- Contributors: All community contributors and testers
- Special Thanks: Claude Code for development assistance
📞 Support
- Issues: GitHub Issues
- Documentation: /docs directory for comprehensive guides
- Community: Join discussions in the Issues section
- Enterprise Support: Contact for production deployment assistance
Status: Production Ready ✅ | Version: 2.0.0 | Node Coverage: 98+ Types | Languages: TypeScript + Python | Multiagent: 7 Real-world Examples | Observability: Langfuse Integration | Deployment: Docker + Kubernetes Ready
This comprehensive, production-ready toolkit successfully converts Flowise visual workflows and AgentFlow multiagent teams into executable LangChain code with full observability, monitoring, error handling, and deployment support for both TypeScript and Python environments.
