@subhajit-gorai/react-native-mediapipe-llm
v1.0.3
React Native binding for Google AI Edge Gallery's MediaPipe on-device LLM inference engine
React Native MediaPipe LLM Demo
This is a demonstration app showcasing the integration of Google's Gemma 3N model with React Native using the MediaPipe LLM framework.
Features
- 📱 Cross-platform chat interface (iOS/Android)
- 🤖 Gemma 3N integration with streaming responses
- 📂 File picker for model selection
- 💬 Real-time chat with AI assistant
- 🎨 Modern, responsive UI design
Getting Started
Prerequisites
Download the Gemma 3N Model
- Visit: https://huggingface.co/google/gemma-3n-E2B-it-litert-preview
- Download: gemma-3n-E2B-it-int4.task
- Save it to your device (Documents, Downloads, etc.)
Development Environment
- Node.js 16+
- React Native development environment
- iOS Simulator or Android Emulator
- Expo CLI (if using Expo Go)
Installation
Install dependencies
npm install
iOS Setup (if running on iOS)
cd ios && pod install && cd ..
Running the Demo
Using Expo Go
npm start
Then scan the QR code with the Expo Go app.
iOS Simulator
npm run ios
Android Emulator
npm run android
How to Use
Launch the App
- The app will start with the Welcome screen
Select Model File
- Tap "Select Model File"
- Navigate to where you saved the gemma-3n-E2B-it-int4.task file
- Select the file
Initialize Model
- Tap "Initialize Model"
- Wait for initialization to complete (may take a few moments)
Start Chatting
- Tap "Start Chatting →"
- Begin conversing with Gemma 3N!
App Structure
src/
├── hooks/
│ └── useLlmInference.ts # LLM integration hook
├── screens/
│ ├── WelcomeScreen.tsx # Model setup and initialization
│ └── ChatScreen.tsx # Chat interface
├── types/
│ └── index.ts # TypeScript definitions
└── components/ # Reusable UI components
Key Components
WelcomeScreen
- Model file selection via document picker
- Model initialization with configuration
- Status tracking and user guidance
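As a rough illustration of the file-selection step, a small guard can reject picks that don't look like a MediaPipe .task bundle before initialization is attempted. The helper name below is hypothetical, not part of the demo or the library:

```typescript
// Hypothetical validation helper for the picked model file; the demo's
// actual WelcomeScreen logic may differ.
function looksLikeTaskFile(uri: string): boolean {
  // MediaPipe LLM model bundles ship as .task files.
  return uri.toLowerCase().endsWith('.task');
}

console.log(looksLikeTaskFile('file:///Documents/gemma-3n-E2B-it-int4.task')); // true
console.log(looksLikeTaskFile('file:///Downloads/model.bin')); // false
```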
ChatScreen
- Real-time chat interface
- Streaming response display
- Message history management
- Keyboard handling
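One way to sketch the streaming-display logic in plain TypeScript: each partial chunk from the model is folded into the in-progress assistant message at the end of the history. The Message shape and helper below are illustrative, not the demo's actual code:

```typescript
// Illustrative chat-history update for streamed responses.
type Message = { role: 'user' | 'assistant'; text: string };

// Append a streamed chunk to the in-progress assistant message,
// creating that message on the first chunk.
function appendPartial(history: Message[], partial: string): Message[] {
  const last = history[history.length - 1];
  if (last && last.role === 'assistant') {
    return [...history.slice(0, -1), { ...last, text: last.text + partial }];
  }
  return [...history, { role: 'assistant', text: partial }];
}

let history: Message[] = [{ role: 'user', text: 'Hello!' }];
history = appendPartial(history, 'Hi');
history = appendPartial(history, ' there');
console.log(history[1].text); // "Hi there"
```

Returning a new array each time (rather than mutating) keeps the update compatible with React state setters.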
useLlmInference Hook
- Wraps MediaPipe LLM functionality
- Handles model initialization
- Manages response generation
- Provides loading states
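The hook's surface might look roughly like the interface below. This is an assumed shape for illustration only; the real API exported by @subhajit-gorai/react-native-mediapipe-llm may differ:

```typescript
// Assumed return shape of useLlmInference; all names are illustrative.
interface LlmInference {
  isLoaded: boolean;     // model initialization finished
  isGenerating: boolean; // a response is currently streaming
  initialize: (modelPath: string) => Promise<void>;
  generateResponse: (
    prompt: string,
    onPartial?: (chunk: string) => void // called per streamed chunk
  ) => Promise<string>;  // resolves with the full response
}
```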
Configuration
The demo uses these default LLM parameters:
- Max Tokens: 512
- Temperature: 0.8
- Top-K: 40
- Top-P: 0.9
These can be modified in WelcomeScreen.tsx.
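For reference, those defaults as a plain options object. The key names here are an assumption for illustration; check WelcomeScreen.tsx for the exact option names the hook accepts:

```typescript
// Default generation parameters used by the demo; key names are
// illustrative and may not match the hook's actual options.
const llmConfig = {
  maxTokens: 512,   // maximum tokens generated per response
  temperature: 0.8, // higher values give more varied sampling
  topK: 40,         // sample only from the 40 most likely tokens
  topP: 0.9,        // nucleus-sampling probability cutoff
};
```

Lowering temperature and topP makes output more deterministic; raising them makes it more varied.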
Troubleshooting
Model Not Loading
- Ensure you downloaded the correct .task file
- Check file permissions
- Verify sufficient device storage
App Crashes
- Restart the development server
- Clear React Native cache:
npx react-native start --reset-cache
- Reinstall dependencies
Performance Issues
- Close other apps to free memory
- Use a physical device for better performance
- Ensure the model file isn't corrupted
Technical Notes
- Model Size: The Gemma 3N model is approximately 2GB
- Memory Usage: Requires ~4GB RAM for optimal performance
- Inference Speed: Varies by device hardware capabilities
- Storage: Ensure 3GB+ free space for model and cache
Next Steps
This demo provides a foundation for:
- Production chat applications
- Custom model integrations
- Advanced LLM features
- Performance optimizations
Support
For issues specific to this demo, please check the main project README or create an issue in the repository.
Note: This is a demonstration app. For production use, implement proper error handling, security measures, and performance optimizations.
