@thejr551/piece-litellm

v0.1.1

Activepieces plugin for LiteLLM - unified interface for 100+ LLMs

LiteLLM Plugin for Activepieces

A comprehensive Activepieces plugin that provides a unified interface to 100+ LLMs through LiteLLM. This plugin allows you to use OpenAI, Anthropic Claude, Google Gemini, Azure OpenAI, and many other LLM providers with a single, consistent API.

Features

This plugin includes all the essential AI actions you need:

🤖 Core Actions

  • Ask LiteLLM - Send prompts to any LLM with conversation memory support
  • Classify Text - Classify text into predefined categories with confidence scores
  • Extract Structured Data - Extract structured data from unstructured text
  • Vision Prompt - Analyze images with vision-capable models
  • Generate Image - Create images using DALL-E or compatible models
  • Text to Speech - Convert text to speech audio
  • Transcribe Audio - Convert audio to text using Whisper or compatible models
  • Translate Audio - Translate audio to English
  • Custom API Call - Make custom calls to any LiteLLM endpoint

Supported Models

LiteLLM supports 100+ models including:

  • OpenAI: gpt-4, gpt-4-turbo, gpt-3.5-turbo, dall-e-3, whisper-1, tts-1
  • Anthropic: claude-3-opus-20240229, claude-3-sonnet-20240229, claude-3-haiku-20240307
  • Google: gemini-pro, gemini-pro-vision, gemini-1.5-pro
  • Azure OpenAI: azure/gpt-4, azure/gpt-35-turbo
  • Cohere: command-r, command-r-plus
  • Mistral AI: mistral-large, mistral-medium, mistral-small
  • Groq: llama3-70b, mixtral-8x7b
  • And many more!

Quick Testing with Docker

Test the plugin in isolation using Docker:

# Build and run tests
make build
make test

# Or using docker-compose directly (inside tests/)
cd tests && docker-compose up --abort-on-container-exit

See tests/DOCKER_TESTING.md for detailed Docker testing instructions.

Installation

Prerequisites

  1. LiteLLM Server: You need a running LiteLLM instance. You can:

    • Run it locally: pip install litellm && litellm --port 4000
    • Deploy it on your infrastructure
    • Use a hosted LiteLLM service
  2. API Keys: Get API keys for the LLM providers you want to use

Setting up LiteLLM

  1. Install LiteLLM:
pip install litellm
  2. Create a litellm_config.yaml file:
model_list:
  - model_name: gpt-4
    litellm_params:
      model: gpt-4
      api_key: os.environ/OPENAI_API_KEY

  - model_name: claude-3-opus
    litellm_params:
      model: claude-3-opus-20240229
      api_key: os.environ/ANTHROPIC_API_KEY

  - model_name: gemini-pro
    litellm_params:
      model: gemini/gemini-pro
      api_key: os.environ/GEMINI_API_KEY
  3. Start LiteLLM:
litellm --config litellm_config.yaml --port 4000
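
Once the proxy is up, you can verify that the models from your config are registered. The proxy exposes OpenAI-compatible endpoints, so a plain HTTP call to /v1/models works; a minimal TypeScript sketch (the key value is a placeholder, substitute your own):

// List the models the LiteLLM proxy knows about (OpenAI-compatible /v1/models).
// Assumes the proxy from the steps above is running on localhost:4000.
const response = await fetch('http://localhost:4000/v1/models', {
  headers: { Authorization: 'Bearer sk-your-litellm-key' }, // hypothetical key
});
const { data } = await response.json();
console.log(data.map((m: { id: string }) => m.id)); // e.g. ['gpt-4', 'claude-3-opus', 'gemini-pro']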

Installing the Plugin in Activepieces

  1. Copy this plugin to your Activepieces pieces directory:
cp -r litellm-plugin packages/pieces/community/litellm
  2. Install dependencies:
cd packages/pieces/community/litellm
npm install
  3. Build the plugin:
npm run build
  4. Restart your Activepieces instance

Configuration

When using the plugin in Activepieces, you'll need to configure:

  1. API Key: Your LiteLLM API key or the API key for your chosen provider
  2. Base URL: Your LiteLLM proxy URL (default: http://localhost:4000)
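
These two settings presumably end up on every request the plugin sends. A sketch of that shape, assuming the standard OpenAI-compatible chat completions route the LiteLLM proxy exposes (illustrative only, not the plugin's actual code):

// Illustrative: how Base URL + API Key likely combine into a request.
// Switching providers is just a matter of changing the model string.
const BASE_URL = 'http://localhost:4000'; // your LiteLLM proxy URL
const API_KEY = 'sk-your-litellm-key';    // hypothetical key

async function chat(model: string, prompt: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/v1/chat/completions`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${API_KEY}`,
    },
    body: JSON.stringify({ model, messages: [{ role: 'user', content: prompt }] }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

// Same call shape for any configured provider:
await chat('gpt-4', 'Hello!');
await chat('claude-3-opus', 'Hello!');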

Usage Examples

1. Ask LiteLLM (Chat Completion)

Send prompts to any LLM with optional conversation memory:

{
  "model": "gpt-4",
  "prompt": "Explain quantum computing in simple terms",
  "temperature": 0.7,
  "maxTokens": 500,
  "memoryKey": "conversation_1" // Optional: maintains conversation history
}
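
The plugin's internals aren't shown here, but memory-key conversation support is typically implemented by replaying the stored messages for that key on each call. A sketch of the general pattern (the store below is hypothetical, not the plugin's actual storage):

// Sketch of memoryKey-style conversation memory: messages accumulated per key.
type Message = { role: 'user' | 'assistant' | 'system'; content: string };
const memory = new Map<string, Message[]>(); // hypothetical in-memory store

function buildMessages(memoryKey: string | undefined, prompt: string): Message[] {
  const history = memoryKey ? memory.get(memoryKey) ?? [] : [];
  return [...history, { role: 'user', content: prompt }];
}

function remember(memoryKey: string | undefined, prompt: string, reply: string) {
  if (!memoryKey) return; // no key: stateless, nothing retained
  const history = memory.get(memoryKey) ?? [];
  memory.set(memoryKey, [
    ...history,
    { role: 'user', content: prompt },
    { role: 'assistant', content: reply },
  ]);
}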

2. Classify Text

Classify text into predefined categories:

{
  "model": "gpt-3.5-turbo",
  "text": "I love this product! It works great and exceeded my expectations.",
  "categories": [
    { "name": "positive", "description": "Positive sentiment" },
    { "name": "negative", "description": "Negative sentiment" },
    { "name": "neutral", "description": "Neutral sentiment" }
  ],
  "includeConfidence": true
}
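
Classification with a chat model usually comes down to a constrained prompt built from the category list; roughly like this (a sketch of the general technique, not necessarily the plugin's exact prompt):

// Build a single-choice classification prompt from the categories above.
type Category = { name: string; description: string };

function classificationPrompt(text: string, categories: Category[]): string {
  const options = categories
    .map((c) => `- ${c.name}: ${c.description}`)
    .join('\n');
  return `Classify the following text into exactly one category.\n` +
    `Categories:\n${options}\n\nText: ${text}\n\n` +
    `Answer with only the category name.`;
}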

3. Extract Structured Data

Extract structured information from unstructured text:

{
  "model": "gpt-4",
  "text": "John Doe, age 30, lives in New York and works as a software engineer.",
  "params": [
    { "propName": "name", "propDataType": "string", "propIsRequired": true },
    { "propName": "age", "propDataType": "number", "propIsRequired": true },
    { "propName": "city", "propDataType": "string", "propIsRequired": true },
    { "propName": "occupation", "propDataType": "string", "propIsRequired": true }
  ]
}
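
Extraction like this is commonly driven by tool/function calling: each property definition becomes an entry in a JSON Schema the model must satisfy. A sketch of that mapping (an assumed technique; the plugin may implement it differently):

// Map the params list above to a function-calling style JSON Schema.
type Param = { propName: string; propDataType: string; propIsRequired: boolean };

function toJsonSchema(params: Param[]) {
  return {
    type: 'object',
    properties: Object.fromEntries(
      params.map((p) => [p.propName, { type: p.propDataType }])
    ),
    required: params.filter((p) => p.propIsRequired).map((p) => p.propName),
  };
}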

4. Vision Prompt

Analyze images with vision-capable models:

{
  "model": "gpt-4-vision-preview",
  "prompt": "What's in this image?",
  "images": [
    { "url": "https://example.com/image.jpg" }
  ]
}
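
On the wire, vision requests use the OpenAI-style multi-part message content, with each image passed by URL; roughly (a sketch of the standard format, assuming the plugin forwards it as-is):

// OpenAI-compatible multi-part message: text plus image_url parts.
const visionMessages = [
  {
    role: 'user',
    content: [
      { type: 'text', text: "What's in this image?" },
      { type: 'image_url', image_url: { url: 'https://example.com/image.jpg' } },
    ],
  },
];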

5. Generate Image

Create images using DALL-E or compatible models:

{
  "model": "dall-e-3",
  "prompt": "A serene landscape with mountains and a lake at sunset",
  "size": "1024x1024",
  "quality": "hd"
}

Environment Variables

You can configure the default LiteLLM base URL using an environment variable:

LITELLM_BASE_URL=http://your-litellm-instance:4000
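
In code this would typically be read with a localhost fallback, along these lines (illustrative):

// Fall back to a local proxy when LITELLM_BASE_URL is not set.
const baseUrl = process.env.LITELLM_BASE_URL ?? 'http://localhost:4000';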

Development

Project Structure

litellm-plugin/
├── src/
│   ├── index.ts                          # Main plugin definition
│   └── lib/
│       ├── common/
│       │   └── common.ts                 # Shared utilities
│       └── actions/
│           ├── send-prompt.ts            # Ask LiteLLM action
│           ├── classify-text.ts          # Classify text action
│           ├── extract-structured-data.ts # Extract data action
│           ├── vision-prompt.ts          # Vision analysis action
│           ├── generate-image.ts         # Image generation action
│           ├── text-to-speech.ts         # TTS action
│           ├── transcription.ts          # Audio transcription action
│           └── translation.ts            # Audio translation action
├── package.json
└── README.md

Building from Source

# Install dependencies
npm install

# Build the plugin
npm run build

# Run tests (if available)
npm test

Troubleshooting

Connection Issues

If you're having trouble connecting to LiteLLM:

  1. Verify LiteLLM is running: curl http://localhost:4000/health
  2. Check your API key is correct
  3. Ensure the base URL is accessible from your Activepieces instance
  4. Check firewall rules if using a remote LiteLLM instance

Model Not Found

If you get "model not found" errors:

  1. Verify the model is configured in your litellm_config.yaml
  2. Check that you have the correct API key for that provider
  3. Ensure the model name matches exactly (case-sensitive)

Rate Limiting

If you're hitting rate limits:

  1. Implement retry logic in your flows (see the sketch below)
  2. Use the delay action between requests
  3. Consider using different models or providers
  4. Check your provider's rate limits
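
A common shape for that retry logic is exponential backoff on HTTP 429 responses; a minimal sketch (a generic pattern, not plugin code):

// Retry a request with exponential backoff when the provider returns 429.
// Usage: const res = await withRetry(() => fetch(url, init));
async function withRetry(fn: () => Promise<Response>, maxRetries = 3): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const res = await fn();
    if (res.status !== 429 || attempt >= maxRetries) return res;
    const delayMs = 1000 * 2 ** attempt; // 1s, 2s, 4s, ...
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}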

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

MIT License - see LICENSE file for details

Support

For issues and questions: