LiteLLM Plugin for Activepieces
A comprehensive Activepieces plugin that provides a unified interface to 100+ LLMs through LiteLLM. This plugin allows you to use OpenAI, Anthropic Claude, Google Gemini, Azure OpenAI, and many other LLM providers with a single, consistent API.
Features
This plugin includes all the essential AI actions you need:
🤖 Core Actions
- Ask LiteLLM - Send prompts to any LLM with conversation memory support
- Classify Text - Classify text into predefined categories with confidence scores
- Extract Structured Data - Extract structured data from unstructured text
- Vision Prompt - Analyze images with vision-capable models
- Generate Image - Create images using DALL-E or compatible models
- Text to Speech - Convert text to speech audio
- Transcribe Audio - Convert audio to text using Whisper or compatible models
- Translate Audio - Translate audio to English
- Custom API Call - Make custom calls to any LiteLLM endpoint
Supported Models
LiteLLM supports 100+ models including:
- OpenAI: gpt-4, gpt-4-turbo, gpt-3.5-turbo, dall-e-3, whisper-1, tts-1
- Anthropic: claude-3-opus-20240229, claude-3-sonnet-20240229, claude-3-haiku-20240307
- Google: gemini-pro, gemini-pro-vision, gemini-1.5-pro
- Azure OpenAI: azure/gpt-4, azure/gpt-35-turbo
- Cohere: command-r, command-r-plus
- Mistral AI: mistral-large, mistral-medium, mistral-small
- Groq: llama3-70b, mixtral-8x7b
- And many more!
Quick Testing with Docker
Test the plugin in isolation using Docker:
```bash
# Build and run tests
make build
make test

# Or using docker-compose directly (inside tests/)
cd tests && docker-compose up --abort-on-container-exit
```

See tests/DOCKER_TESTING.md for detailed Docker testing instructions.
Installation
Prerequisites
LiteLLM Server: You need a running LiteLLM instance. You can:
- Run it locally: `pip install litellm && litellm --port 4000`
- Deploy it on your infrastructure
- Use a hosted LiteLLM service
API Keys: Get API keys for the LLM providers you want to use
Setting up LiteLLM

- Install LiteLLM:

```bash
pip install litellm
```

- Create a `litellm_config.yaml` file:

```yaml
model_list:
  - model_name: gpt-4
    litellm_params:
      model: gpt-4
      api_key: os.environ/OPENAI_API_KEY
  - model_name: claude-3-opus
    litellm_params:
      model: claude-3-opus-20240229
      api_key: os.environ/ANTHROPIC_API_KEY
  - model_name: gemini-pro
    litellm_params:
      model: gemini/gemini-pro
      api_key: os.environ/GEMINI_API_KEY
```

- Start LiteLLM:

```bash
litellm --config litellm_config.yaml --port 4000
```

Installing the Plugin in Activepieces
- Copy this plugin to your Activepieces pieces directory:

```bash
cp -r litellm-plugin packages/pieces/community/litellm
```

- Install dependencies:

```bash
cd packages/pieces/community/litellm
npm install
```

- Build the plugin:

```bash
npm run build
```

- Restart your Activepieces instance
Configuration
When using the plugin in Activepieces, you'll need to configure:
- API Key: Your LiteLLM API key or the API key for your chosen provider
- Base URL: Your LiteLLM proxy URL (default: `http://localhost:4000`)
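The plugin talks to LiteLLM's OpenAI-compatible REST API, so these two settings are all it needs. As a rough illustration of what that means (this is a sketch, not plugin code: the `askLiteLLM` helper, the environment-variable fallbacks, and the default values are assumptions), a chat request against the configured Base URL and API Key looks like this in TypeScript:

```typescript
// Minimal sketch: how the configured Base URL and API Key map to a request
// against the LiteLLM proxy's OpenAI-compatible /chat/completions endpoint.
// The helper name and defaults here are illustrative, not plugin code.
const BASE_URL = process.env.LITELLM_BASE_URL ?? 'http://localhost:4000';
const API_KEY = process.env.LITELLM_API_KEY ?? '';

async function askLiteLLM(model: string, prompt: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${API_KEY}`,
    },
    body: JSON.stringify({
      model, // e.g. "gpt-4" or "claude-3-opus" from litellm_config.yaml
      messages: [{ role: 'user', content: prompt }],
      temperature: 0.7,
      max_tokens: 500,
    }),
  });
  if (!res.ok) throw new Error(`LiteLLM request failed: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content; // same response shape for every provider
}

// The same call works for any provider configured in litellm_config.yaml:
// await askLiteLLM('gpt-4', 'Explain quantum computing in simple terms');
// await askLiteLLM('claude-3-opus', 'Explain quantum computing in simple terms');
```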
Usage Examples
1. Ask LiteLLM (Chat Completion)
Send prompts to any LLM with optional conversation memory:
```json
{
  "model": "gpt-4",
  "prompt": "Explain quantum computing in simple terms",
  "temperature": 0.7,
  "maxTokens": 500,
  "memoryKey": "conversation_1" // Optional: maintains conversation history
}
```

2. Classify Text
Classify text into predefined categories:
```json
{
  "model": "gpt-3.5-turbo",
  "text": "I love this product! It works great and exceeded my expectations.",
  "categories": [
    { "name": "positive", "description": "Positive sentiment" },
    { "name": "negative", "description": "Negative sentiment" },
    { "name": "neutral", "description": "Neutral sentiment" }
  ],
  "includeConfidence": true
}
```

3. Extract Structured Data
Extract structured information from unstructured text:
```json
{
  "model": "gpt-4",
  "text": "John Doe, age 30, lives in New York and works as a software engineer.",
  "params": [
    { "propName": "name", "propDataType": "string", "propIsRequired": true },
    { "propName": "age", "propDataType": "number", "propIsRequired": true },
    { "propName": "city", "propDataType": "string", "propIsRequired": true },
    { "propName": "occupation", "propDataType": "string", "propIsRequired": true }
  ]
}
```

4. Vision Prompt
Analyze images with vision-capable models:
```json
{
  "model": "gpt-4-vision-preview",
  "prompt": "What's in this image?",
  "images": [
    { "url": "https://example.com/image.jpg" }
  ]
}
```

5. Generate Image
Create images using DALL-E or compatible models:
```json
{
  "model": "dall-e-3",
  "prompt": "A serene landscape with mountains and a lake at sunset",
  "size": "1024x1024",
  "quality": "hd"
}
```
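Behind the scenes, these actions map onto the corresponding OpenAI-compatible endpoints exposed by the LiteLLM proxy. As an illustrative sketch only (the `generateImage` helper and its error handling are assumptions, not the plugin's actual implementation), the image-generation example above corresponds to a request like:

```typescript
// Illustrative sketch: the "Generate Image" inputs sent to the LiteLLM proxy's
// OpenAI-compatible /images/generations endpoint. Helper name and error
// handling are assumptions for this example.
async function generateImage(baseUrl: string, apiKey: string): Promise<string> {
  const res = await fetch(`${baseUrl}/images/generations`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: 'dall-e-3',
      prompt: 'A serene landscape with mountains and a lake at sunset',
      size: '1024x1024',
      quality: 'hd',
      n: 1,
    }),
  });
  if (!res.ok) throw new Error(`Image generation failed: ${res.status}`);
  const data = await res.json();
  return data.data[0].url; // URL of the generated image
}
```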
Environment Variables

You can configure the default LiteLLM base URL using an environment variable:

```
LITELLM_BASE_URL=http://your-litellm-instance:4000
```

Development
Project Structure
```
litellm-plugin/
├── src/
│   ├── index.ts                           # Main plugin definition
│   └── lib/
│       ├── common/
│       │   └── common.ts                  # Shared utilities
│       └── actions/
│           ├── send-prompt.ts             # Ask LiteLLM action
│           ├── classify-text.ts           # Classify text action
│           ├── extract-structured-data.ts # Extract data action
│           ├── vision-prompt.ts           # Vision analysis action
│           ├── generate-image.ts          # Image generation action
│           ├── text-to-speech.ts          # TTS action
│           ├── transcription.ts           # Audio transcription action
│           └── translation.ts             # Audio translation action
├── package.json
└── README.md
```
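Each file under `actions/` defines one Activepieces action. As a very rough sketch of that shape, assuming the standard `createAction` and `Property` helpers from `@activepieces/pieces-framework` (the property names and the stubbed `run` body below are illustrative, not the contents of `send-prompt.ts`):

```typescript
import { createAction, Property } from '@activepieces/pieces-framework';

// Rough sketch of an action definition; property names and the stubbed run()
// body are illustrative, not the plugin's actual send-prompt.ts.
export const askLiteLLM = createAction({
  name: 'ask_litellm',
  displayName: 'Ask LiteLLM',
  description: 'Send a prompt to any LLM through the LiteLLM proxy',
  props: {
    model: Property.ShortText({ displayName: 'Model', required: true }),
    prompt: Property.LongText({ displayName: 'Prompt', required: true }),
    temperature: Property.Number({ displayName: 'Temperature', required: false }),
  },
  async run(context) {
    const { model, prompt, temperature } = context.propsValue;
    // The real action forwards these inputs to the LiteLLM proxy (see the
    // chat-completions sketch in the Configuration section) and returns the
    // completion; here we just return the inputs as a placeholder.
    return { model, prompt, temperature };
  },
});
```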
Building from Source

```bash
# Install dependencies
npm install

# Build the plugin
npm run build

# Run tests (if available)
npm test
```

Troubleshooting
Connection Issues
If you're having trouble connecting to LiteLLM:
- Verify LiteLLM is running: `curl http://localhost:4000/health`
- Check your API key is correct
- Ensure the base URL is accessible from your Activepieces instance
- Check firewall rules if using a remote LiteLLM instance
Model Not Found
If you get "model not found" errors:
- Verify the model is configured in your `litellm_config.yaml`
- Check that you have the correct API key for that provider
- Ensure the model name matches exactly (case-sensitive)
Rate Limiting
If you're hitting rate limits:
- Implement retry logic in your flows (see the sketch after this list)
- Use the delay action between requests
- Consider using different models or providers
- Check your provider's rate limits
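For calls made outside your Activepieces flows (for example via the Custom API Call action or your own scripts), a simple exponential-backoff retry is usually enough. A minimal sketch, with illustrative function name, attempt count, and delays:

```typescript
// Minimal retry-with-exponential-backoff sketch for rate-limited (HTTP 429)
// LiteLLM requests. Function name, attempt count, and delays are illustrative.
async function fetchWithRetry(
  url: string,
  init: RequestInit,
  maxAttempts = 5,
): Promise<Response> {
  let delayMs = 1000;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const res = await fetch(url, init);
    // Retry only on rate limiting; other responses go back to the caller.
    if (res.status !== 429 || attempt === maxAttempts) return res;
    // Honor Retry-After if the provider sends it, otherwise back off exponentially.
    const retryAfter = Number(res.headers.get('retry-after'));
    const waitMs = retryAfter > 0 ? retryAfter * 1000 : delayMs;
    await new Promise((resolve) => setTimeout(resolve, waitMs));
    delayMs *= 2;
  }
  throw new Error('unreachable');
}
```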
Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
License
MIT License - see LICENSE file for details
Resources
Support
For issues and questions:
- LiteLLM issues: LiteLLM GitHub Issues
- Activepieces issues: Activepieces GitHub Issues
- Plugin issues: Create an issue in this repository
