
@capgo/capacitor-llm

Adds support for locally run LLMs to Capacitor.

It uses Apple Intelligence (default) or custom MediaPipe models on iOS, and MediaPipe's tasks-genai on Android.

Documentation

The most complete documentation is available here: https://capgo.app/docs/plugins/llm/

Installation

npm install @capgo/capacitor-llm
npx cap sync

iOS Additional Setup for Custom Models

If you want to use custom models on iOS (not just Apple Intelligence), you need to install MediaPipe dependencies.

Using CocoaPods: The MediaPipe dependencies are already configured in the podspec. Make sure to run pod install after adding the plugin.
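
In a typical Capacitor project layout, that means running it from the ios/App directory:

cd ios/App
pod install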

Note about Static Framework Warning: When running pod install, you may see a warning about transitive dependencies with statically linked binaries. To fix this, update your Podfile:

# Change this:
use_frameworks!

# To this:
use_frameworks! :linkage => :static

# And add this to your post_install hook:
post_install do |installer|
  assertDeploymentTarget(installer)
  
  # Fix for static framework dependencies
  installer.pods_project.targets.each do |target|
    target.build_configurations.each do |config|
      config.build_settings['BUILD_LIBRARY_FOR_DISTRIBUTION'] = 'YES'
    end
    
    # Specifically for MediaPipe pods
    if target.name.include?('MediaPipeTasksGenAI')
      target.build_configurations.each do |config|
        config.build_settings['ENABLE_BITCODE'] = 'NO'
      end
    end
  end
end

Using Swift Package Manager:

MediaPipe GenAI does not officially support SPM yet (see MediaPipe issue #5464). However, you can use the community package SwiftTasksGenAI by adding it manually in Xcode:

To add MediaPipe GenAI via SPM:

  1. Open your app project in Xcode
  2. Go to File > Add Package Dependencies...
  3. Enter the URL: https://github.com/paescebu/SwiftTasksGenAI.git
  4. Select your desired version (e.g., 0.10.24)
  5. Add the SwiftTasksGenAI product to your app target

Note: SwiftTasksGenAI uses unsafe build flags, which means it cannot be added directly in Package.swift files, but works fine when added through Xcode's UI. This is the same approach used by SwiftTasksVision.

Alternatively: Use CocoaPods for a more traditional setup, or use Apple Intelligence (iOS 26.0+), which works with both SPM and CocoaPods.

Adding a Model to Your App

iOS

On iOS, you have two options:

  1. Apple Intelligence (Default): Uses the built-in system model (requires iOS 26.0+) - Recommended
  2. Custom Models: Use your own models via MediaPipe (requires CocoaPods) - Note: Some model formats may have compatibility issues

Using Apple Intelligence (Default)

No additional setup needed. The plugin will automatically use Apple Intelligence on supported devices (iOS 26.0+).
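
Because Apple Intelligence is the default, a chat can be used immediately; a minimal sketch using only the APIs documented below:

import { CapgoLLM } from '@capgo/capacitor-llm';

// No setModel call needed - Apple Intelligence is used by default on iOS 26.0+
const { id: chatId } = await CapgoLLM.createChat();
await CapgoLLM.addListener('textFromAi', (event) => console.log('AI:', event.text));
await CapgoLLM.sendMessage({ chatId, message: 'Hello!' });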

Using Custom Models on iOS via MediaPipe

⚠️ IMPORTANT: While the MediaPipe documentation states that Gemma-2 2B works on all platforms, the iOS implementation has compatibility issues with .task format models, often resulting in (prefill_input_names.size() % 2)==(0) errors.

Model Compatibility Guide

Android (Working ✅)

  • Model: Gemma-3 models (270M, 1B, etc.)
  • Format: .task + .litertlm files
  • Where to download:
    • Kaggle Gemma models - "LiteRT (formerly TFLite)" tab
    • Example: gemma-3-270m-it-int8.task + gemma-3-270m-it-int8.litertlm

iOS (Limited Support ⚠️)

  • Recommended: Use Apple Intelligence (built-in, no download needed)
  • Alternative (Experimental):
    • Model: Gemma-2 2B (documented as compatible but may still fail)
    • Format: .task files (.bin format would be preferred but is not available)
    • Where to download: Hugging Face MediaPipe collection (see the download instructions below)
    • Note: Even with Gemma-2 2B, you may still encounter errors. Apple Intelligence is more reliable.

Web

  • Model: Gemma-3 models
  • Format: .task files
  • Where to download: Same as Android

Download Instructions

  1. For Android:

    • Go to Kaggle Gemma models
    • Click "LiteRT (formerly TFLite)" tab
    • Download both .task and .litertlm files
    • Place in android/app/src/main/assets/
  2. For iOS (if not using Apple Intelligence):

    • Visit Hugging Face MediaPipe collection
    • Find Gemma-2 2B models
    • Download .task files (and .litertlm if available)
    • Add to Xcode project in "Copy Bundle Resources"
    • Note: Success is not guaranteed due to format compatibility issues
  3. Set the model path:

import { CapgoLLM } from '@capgo/capacitor-llm';

// iOS - Use Apple Intelligence (default)
await CapgoLLM.setModel({ path: 'Apple Intelligence' });

// iOS - Use MediaPipe model (experimental)
await CapgoLLM.setModel({ 
  path: 'Gemma2-2B-IT_multi-prefill-seq_q8_ekv1280',
  modelType: 'task',
  maxTokens: 1280
});

// Now you can create chats and send messages
const chat = await CapgoLLM.createChat();

Android

For Android, you need to include a compatible LLM model in your app. The plugin uses MediaPipe's tasks-genai, which supports various model formats.

Once Gemini Nano is out of closed alpha, we may add it as the Android default, just as Apple Intelligence is the iOS default: https://developer.android.com/ai/gemini-nano/experimental

Important: Your app's minSdkVersion must be set to 24 or higher. Update your android/variables.gradle file:

ext {
    minSdkVersion = 24
    // ... other settings
}

  1. Download a compatible model:

    For Android: Gemma 3 Models (Working ✅)

    • Recommended: Gemma 3 270M - Smallest, most efficient (~240-400MB)
    • Text-only model optimized for on-device inference
    • Download from Kaggle → "LiteRT (formerly TFLite)" tab
    • Get both .task and .litertlm files

    For iOS: Limited Options (⚠️)

    • Option 1 (Recommended): Use Apple Intelligence - no download needed
    • Option 2 (Experimental): Try Gemma-2 2B from Hugging Face
      • Download Gemma2-2B-IT_*.task files
      • May still encounter compatibility errors

    Available Model Sizes (from Google AI):

    • gemma-3-270m - Most portable, text-only (~240-400MB) - Android only
    • gemma-3-1b - Larger text-only (~892MB-1.5GB) - Android only
    • gemma-2-2b - Cross-platform compatible (~1-2GB) - iOS experimental
    • Larger models available but not recommended for mobile

    For automated download, see the script in the example app section below.

  2. Add the model to your app:

    Option A - Bundle in APK (for smaller models):

    • Place the model files in your app's android/app/src/main/assets/ directory
    • The /android_asset/ prefix is used to reference files from the assets folder
    • Note: This approach may increase APK size significantly

    Option B - Download at runtime (recommended for production):

    • Host the model files on a server
    • Download them to your app's files directory at runtime
    • This keeps your APK size small
  3. Set the model path:

import { CapgoLLM } from '@capgo/capacitor-llm';

// If the model is bundled in the assets folder - point to the .task file
await CapgoLLM.setModelPath({ path: '/android_asset/gemma-3-270m-it-int8.task' });

// If model is in app's files directory
await CapgoLLM.setModelPath({ path: '/data/data/com.yourapp/files/gemma-3-270m-it-int8.task' });

// Example: Download model at runtime (recommended for production)
async function downloadModel() {
  // Add progress listener
  await CapgoLLM.addListener('downloadProgress', (event) => {
    console.log(`Download progress: ${event.progress}%`);
  });

  // Download the model
  const result = await CapgoLLM.downloadModel({
    url: 'https://your-server.com/models/gemma-3-270m-it-int8.task',
    companionUrl: 'https://your-server.com/models/gemma-3-270m-it-int8.litertlm', // Android only
    filename: 'gemma-3-270m-it-int8.task' // Optional, defaults to filename from URL
  });

  console.log('Model downloaded to:', result.path);
  if (result.companionPath) {
    console.log('Companion file downloaded to:', result.companionPath);
  }

  // Now set the model path
  await CapgoLLM.setModelPath({ path: result.path });
}

// Now you can create chats and send messages
const chat = await CapgoLLM.createChat();

Usage Example

import { Capacitor } from '@capacitor/core';
import { CapgoLLM } from '@capgo/capacitor-llm';

// Check if the LLM is ready
const { readiness } = await CapgoLLM.getReadiness();
console.log('LLM readiness:', readiness);

// Set a custom model (optional - uses the system default if not called)
// iOS: "Apple Intelligence" or a MediaPipe model bundled with the app
// Android: a MediaPipe model from assets or the files directory
await CapgoLLM.setModelPath({
  path: Capacitor.getPlatform() === 'ios'
    ? 'Gemma2-2B-IT_multi-prefill-seq_q8_ekv1280.task'
    : '/android_asset/gemma-3-270m-it-int8.task'
});

// Create a chat session
const { id: chatId } = await CapgoLLM.createChat();

// Listen for AI responses
CapgoLLM.addListener('textFromAi', (event) => {
  console.log('AI:', event.text);
});

// Listen for completion
CapgoLLM.addListener('aiFinished', (event) => {
  console.log('AI finished responding to chat:', event.chatId);
});

// Send a message
await CapgoLLM.sendMessage({
  chatId,
  message: 'Hello! How are you today?'
});
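
Because textFromAi delivers incremental chunks (see TextFromAiEvent below), a common pattern is to accumulate chunks per chat until aiFinished fires. A minimal sketch using only the documented events:

// Accumulate streamed chunks into a full reply, keyed by chat session
const replies = new Map<string, string>();

const textSub = await CapgoLLM.addListener('textFromAi', (event) => {
  replies.set(event.chatId, (replies.get(event.chatId) ?? '') + event.text);
});

const doneSub = await CapgoLLM.addListener('aiFinished', async (event) => {
  console.log('Full reply:', replies.get(event.chatId));
  replies.delete(event.chatId);
  // Remove the listeners once they are no longer needed
  await textSub.remove();
  await doneSub.remove();
});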

API

LLM Plugin interface for interacting with on-device language models

createChat()

createChat() => Promise<{ id: string; instructions?: string; }>

Creates a new chat session

Returns: Promise<{ id: string; instructions?: string; }>


sendMessage(...)

sendMessage(options: { chatId: string; message: string; }) => Promise<void>

Sends a message to the AI in a specific chat session

| Param   | Type                                 | Description                       |
| ------- | ------------------------------------ | --------------------------------- |
| options | { chatId: string; message: string; } | - The chat id and message to send |


getReadiness()

getReadiness() => Promise<{ readiness: string; }>

Gets the readiness status of the LLM

Returns: Promise<{ readiness: string; }>
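
The readiness string can be combined with the readinessChange event (documented below) to wait for the model to load. A hypothetical helper; the concrete readiness values are not documented here, so 'ready' is an assumed example:

async function waitForReadiness(target = 'ready'): Promise<void> {
  // 'ready' is an assumed status value, not confirmed by the plugin docs
  const { readiness } = await CapgoLLM.getReadiness();
  if (readiness === target) return;

  let resolveReady!: () => void;
  const ready = new Promise<void>((resolve) => { resolveReady = resolve; });
  const sub = await CapgoLLM.addListener('readinessChange', (event) => {
    if (event.readiness === target) resolveReady();
  });
  await ready;
  await sub.remove();
}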


setModel(...)

setModel(options: ModelOptions) => Promise<void>

Sets the model configuration

  • iOS: Use "Apple Intelligence" as path for system model, or provide path to MediaPipe model
  • Android: Path to a MediaPipe model file (in assets or files directory)

| Param   | Type         | Description               |
| ------- | ------------ | ------------------------- |
| options | ModelOptions | - The model configuration |


downloadModel(...)

downloadModel(options: DownloadModelOptions) => Promise<DownloadModelResult>

Downloads a model from a URL and saves it to the appropriate location

  • iOS: Downloads to the app's documents directory
  • Android: Downloads to the app's files directory

| Param   | Type                 | Description                  |
| ------- | -------------------- | ---------------------------- |
| options | DownloadModelOptions | - The download configuration |

Returns: Promise<DownloadModelResult>


addListener('textFromAi', ...)

addListener(eventName: 'textFromAi', listenerFunc: (event: TextFromAiEvent) => void) => Promise<{ remove: () => Promise<void>; }>

Adds a listener for text received from AI

| Param        | Type                             | Description                         |
| ------------ | -------------------------------- | ----------------------------------- |
| eventName    | 'textFromAi'                     | - Event name 'textFromAi'           |
| listenerFunc | (event: TextFromAiEvent) => void | - Callback function for text events |

Returns: Promise<{ remove: () => Promise<void>; }>


addListener('aiFinished', ...)

addListener(eventName: 'aiFinished', listenerFunc: (event: AiFinishedEvent) => void) => Promise<{ remove: () => Promise<void>; }>

Adds a listener for AI completion events

| Param        | Type                             | Description                           |
| ------------ | -------------------------------- | ------------------------------------- |
| eventName    | 'aiFinished'                     | - Event name 'aiFinished'             |
| listenerFunc | (event: AiFinishedEvent) => void | - Callback function for finish events |

Returns: Promise<{ remove: () => Promise<void>; }>


addListener('downloadProgress', ...)

addListener(eventName: 'downloadProgress', listenerFunc: (event: DownloadProgressEvent) => void) => Promise<{ remove: () => Promise<void>; }>

Adds a listener for model download progress events

| Param        | Type                                   | Description                              |
| ------------ | -------------------------------------- | ---------------------------------------- |
| eventName    | 'downloadProgress'                     | - Event name 'downloadProgress'          |
| listenerFunc | (event: DownloadProgressEvent) => void | - Callback function for progress events  |

Returns: Promise<{ remove: () => Promise<void>; }>


addListener('readinessChange', ...)

addListener(eventName: 'readinessChange', listenerFunc: (event: ReadinessChangeEvent) => void) => Promise<{ remove: () => Promise<void>; }>

Adds a listener for readiness status changes

| Param        | Type                                  | Description                               |
| ------------ | ------------------------------------- | ----------------------------------------- |
| eventName    | 'readinessChange'                     | - Event name 'readinessChange'            |
| listenerFunc | (event: ReadinessChangeEvent) => void | - Callback function for readiness events  |

Returns: Promise<{ remove: () => Promise<void>; }>


getPluginVersion()

getPluginVersion() => Promise<{ version: string; }>

Get the native Capacitor plugin version.

Returns: Promise<{ version: string; }>

Since: 1.0.0
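
For example, to log the installed native version:

const { version } = await CapgoLLM.getPluginVersion();
console.log('Native plugin version:', version);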


Interfaces

ModelOptions

Model configuration options

| Prop        | Type   | Description                                                                                                   |
| ----------- | ------ | ------------------------------------------------------------------------------------------------------------- |
| path        | string | Model path or "Apple Intelligence" for the iOS system model                                                    |
| modelType   | string | Model file type/extension (e.g., "task", "bin", "litertlm"). If not provided, it is extracted from the path.   |
| maxTokens   | number | Maximum number of tokens the model handles                                                                     |
| topk        | number | Number of tokens the model considers at each step                                                              |
| temperature | number | Amount of randomness in generation (0.0-1.0)                                                                   |
| randomSeed  | number | Random seed for generation                                                                                     |

DownloadModelResult

Result of model download

| Prop          | Type   | Description                                              |
| ------------- | ------ | -------------------------------------------------------- |
| path          | string | Path where the model was saved                           |
| companionPath | string | Path where the companion file was saved (if applicable)  |

DownloadModelOptions

Options for downloading a model

| Prop         | Type   | Description                                                   |
| ------------ | ------ | ------------------------------------------------------------- |
| url          | string | URL of the model file to download                             |
| companionUrl | string | Optional: URL of companion file (e.g., .litertlm for Android) |
| filename     | string | Optional: Custom filename (defaults to filename from URL)     |

TextFromAiEvent

Event data for text received from AI

| Prop    | Type    | Description                                                                 |
| ------- | ------- | --------------------------------------------------------------------------- |
| text    | string  | The text content from AI - this is an incremental chunk, not the full text  |
| chatId  | string  | The chat session ID                                                          |
| isChunk | boolean | Whether this is a complete chunk (true) or partial streaming data (false)    |

AiFinishedEvent

Event data for AI completion

| Prop   | Type   | Description                       |
| ------ | ------ | --------------------------------- |
| chatId | string | The chat session ID that finished |

DownloadProgressEvent

Event data for download progress

| Prop            | Type   | Description                               |
| --------------- | ------ | ----------------------------------------- |
| progress        | number | Percentage of download completed (0-100)  |
| totalBytes      | number | Total bytes to download                   |
| downloadedBytes | number | Bytes downloaded so far                   |
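
The byte counts make it straightforward to render a human-readable progress indicator; a small sketch:

await CapgoLLM.addListener('downloadProgress', (event) => {
  // Convert raw byte counts to megabytes for display
  const toMb = (bytes: number) => (bytes / 1024 / 1024).toFixed(1);
  console.log(`${event.progress}% (${toMb(event.downloadedBytes)} / ${toMb(event.totalBytes)} MB)`);
});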

ReadinessChangeEvent

Event data for readiness status changes

| Prop      | Type   | Description          |
| --------- | ------ | -------------------- |
| readiness | string | The readiness status |

Example App Model Setup

The example app demonstrates how to use custom models with the Capacitor LLM plugin.

Downloading Models

Since AI models require license acceptance, you need to download them manually:

Model Setup by Platform

Android Setup (Gemma 3 270M) ✅

  1. Create a Kaggle account and accept the Gemma license (on the Kaggle Gemma models page).

  2. Download the model:

    • Click "LiteRT (formerly TFLite)" tab
    • Download gemma-3-270m-it-int8 (get BOTH files):
      • gemma-3-270m-it-int8.task
      • gemma-3-270m-it-int8.litertlm
  3. Place in Android app:

    • Copy BOTH files to: example-app/android/app/src/main/assets/
    • In code, reference with /android_asset/ prefix

iOS Setup ⚠️

Option 1 (Recommended): Use Apple Intelligence

  • No model download needed
  • Works out of the box on iOS 26.0+

Option 2 (Experimental): Try Gemma-2 2B

  1. Visit Hugging Face MediaPipe models
  2. Download Gemma-2 2B files (e.g., Gemma2-2B-IT_multi-prefill-seq_q8_ekv1280.task)
  3. Add to Xcode project in "Copy Bundle Resources"
  4. Note: You may still encounter errors; Apple Intelligence is more reliable

Update your code to use the model:

// Android - Gemma 3 270M
await CapgoLLM.setModel({ 
  path: '/android_asset/gemma-3-270m-it-int8.task',
  maxTokens: 2048,
  topk: 40,
  temperature: 0.8
});

// iOS Option 1 - Apple Intelligence (Recommended)
await CapgoLLM.setModel({ 
  path: 'Apple Intelligence' 
});

// iOS Option 2 - Gemma-2 2B (Experimental)
await CapgoLLM.setModel({ 
  path: 'Gemma2-2B-IT_multi-prefill-seq_q8_ekv1280',
  modelType: 'task',
  maxTokens: 1280,
  topk: 40,
  temperature: 0.8
});

Model Selection Guide

For Android: Gemma 3 270M is recommended because:

  • Smallest size (~240-400MB)
  • Text-only (perfect for chat)
  • Optimized for mobile devices
  • Works reliably with MediaPipe

For iOS: Apple Intelligence is recommended because:

  • No download required
  • Native iOS integration
  • Better performance
  • No compatibility issues

Known Issues

  • iOS MediaPipe Compatibility: The MediaPipe iOS SDK has issues with .task format models

    • Symptom: (prefill_input_names.size() % 2)==(0) errors
    • Solution: Use Apple Intelligence (built-in) instead of custom models
    • Alternative: Try Gemma-2 2B models (experimental, may still fail)
  • Platform Requirements:

    • iOS: Apple Intelligence requires iOS 26.0+
    • Android: Minimum SDK 24
  • Performance Considerations:

    • Model files are large (300MB-2GB)
    • Initial download takes time
    • Some devices may have memory limitations
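
Given those file sizes, it is worth downloading a model once and reusing it on later launches. A hypothetical sketch: the storage key and URL are placeholders, and localStorage stands in for a real persistence layer such as @capacitor/preferences:

async function ensureModel(url: string): Promise<string> {
  // 'llmModelPath' is a hypothetical storage key, not part of the plugin
  const cached = localStorage.getItem('llmModelPath');
  if (cached) return cached;

  // On Android you may also pass companionUrl for the .litertlm file
  const { path } = await CapgoLLM.downloadModel({ url });
  localStorage.setItem('llmModelPath', path);
  return path;
}

// Usage: download (or reuse) the model, then point the plugin at it
const modelPath = await ensureModel('https://your-server.com/models/gemma-3-270m-it-int8.task');
await CapgoLLM.setModelPath({ path: modelPath });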