# react-native-amaryllis

*Amaryllis (Hippeastrum): symbolizes hope and emergence, blooming even in tough conditions.*
A modern AI module for native mobile apps in React Native, supporting multimodal inference and streaming results.
## 🚀 Installation
```sh
npm install react-native-amaryllis
# or
yarn add react-native-amaryllis
# or
pnpm add react-native-amaryllis
```

## 📦 Features
- Native LLM engine for Android & iOS
- Multimodal support (text + images)
- Streaming inference with hooks & observables
- Easy integration with React Native context/provider
- LoRA customization (GPU only)
## 🛠️ Usage
### Provider Setup
Wrap your application with `LLMProvider` and provide the necessary model paths. The models must already be downloaded to the device.
```jsx
import { LLMProvider } from 'react-native-amaryllis';

<LLMProvider
  config={{
    modelPath: 'gemma3-1b-it-int4.task',
    visionEncoderPath: 'mobilenet_v3_small.tflite',
    visionAdapterPath: 'mobilenet_v3_small.tflite',
    maxTopK: 32,
    maxNumImages: 2,
    maxTokens: 512,
  }}
>
  {/* Your app components */}
</LLMProvider>
```

You can access the LLM controller with the `useLLMContext` hook. See Core API for details on the controller API.
```js
const {
  config,     // original config param
  controller, // native controller
  error,      // any error
  isReady,    // is the controller initialized
} = useLLMContext();
```

### Inference Hook
Use the `useInference` hook to access the LLM's capabilities.
```jsx
import { useInference } from 'react-native-amaryllis';
import { useCallback, useMemo, useState } from 'react';
import { View, TextInput, Button, Text } from 'react-native';

const LLMPrompt = () => {
  const [prompt, setPrompt] = useState('');
  const [results, setResults] = useState([]);
  const [images, setImages] = useState([]);
  const [error, setError] = useState(undefined);
  const [isBusy, setIsBusy] = useState(false);

  const props = useMemo(() => ({
    onGenerate: () => {
      setError(undefined);
      setIsBusy(true);
    },
    onResult: (result, isFinal) => {
      setResults((prev) => [...prev, result]);
      if (isFinal) {
        setIsBusy(false);
      }
    },
    onError: (err) => setError(err),
  }), [setError, setIsBusy, setResults]);

  const generate = useInference(props);

  const infer = useCallback(async () => {
    await generate({ prompt, images });
  }, [prompt, generate, images]);

  return (
    <View>
      <TextInput
        value={prompt}
        onChangeText={setPrompt}
        placeholder="Enter prompt..."
      />
      <Button title="Generate" onPress={infer} disabled={isBusy} />
      <Text>
        {error ? error.message : results.join('\n')}
      </Text>
      {/* image controls */}
    </View>
  );
};
```

Substitute the `useInferenceAsync` hook to stream the results.
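The callback contract used above can be exercised outside React. The sketch below is plain TypeScript; the `InferenceCallbacks` type and the fake `runInference` driver are illustrative stand-ins, not part of the library. It shows how partial results accumulate until `isFinal` fires:

```typescript
// Hypothetical shape of the callback props passed to the inference hooks;
// names mirror the component example above.
type InferenceCallbacks = {
  onGenerate: () => void;
  onResult: (result: string, isFinal: boolean) => void;
  onError: (err: Error) => void;
};

// Fake driver that emits chunks the way a streaming engine might.
function runInference(chunks: string[], cb: InferenceCallbacks): void {
  cb.onGenerate();
  chunks.forEach((chunk, i) => cb.onResult(chunk, i === chunks.length - 1));
}

const results: string[] = [];
let busy = false;

runInference(['Once', ' upon', ' a time'], {
  onGenerate: () => { busy = true; },
  onResult: (result, isFinal) => {
    results.push(result);
    if (isFinal) busy = false;
  },
  onError: (err) => console.error(err),
});

console.log(results.join('')); // "Once upon a time"
console.log(busy); // false
```

The component simply mirrors this loop into React state: `onGenerate` raises the busy flag, each `onResult` appends a chunk, and the final chunk clears the flag.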
## Core API
For more advanced use cases, you can use the core Amaryllis API directly. This is the same controller passed from `useLLMContext`.
### Initialization
```js
import { Amaryllis } from 'react-native-amaryllis';

const amaryllis = new Amaryllis();

await amaryllis.init({
  modelPath: '/path/to/your/model.task',
  visionEncoderPath: '/path/to/vision/encoder.tflite',
  visionAdapterPath: '/path/to/vision/adapter.tflite',
});
```

A session is required for working with images.
```js
await amaryllis.newSession({
  topK: 40,                   // sample from the top K candidate tokens
  topP: 0.95,                 // nucleus sampling: top cumulative-probability mass
  temperature: 0.8,
  randomSeed: 0,              // fixed seed for reproducible output
  loraPath: '',               // LoRA customization (GPU only)
  enableVisionModality: true, // required for image input
});
```

### Generate Response
```js
const result = await amaryllis.generate({
  prompt: 'Your prompt here',
  images: ['file:///path/to/image.png'],
});
```

### Streaming Response
```js
amaryllis.generateAsync(
  {
    prompt: 'Your prompt here',
    images: ['file:///path/to/image.png'],
  },
  {
    onEvent: (event) => {
      if (event.type === 'partial') {
        console.log('Partial result:', event.text);
        return;
      }
      if (event.type === 'final') {
        console.log('Final result:', event.text);
        return;
      }
      console.error('Error:', event.error);
    },
  }
);
```

> **Note:** `onPartialResult`, `onFinalResult`, and `onError` are deprecated and will be removed in a future release. Use `onEvent` instead.
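Migrating off the deprecated callbacks is mechanical. The sketch below is plain TypeScript; the `LegacyCallbacks` name and the `toOnEvent` adapter are illustrative, not library APIs. It folds the three old handlers into a single `onEvent` using the event union this README describes:

```typescript
// Event union as described in this README.
type LlmEvent =
  | { type: 'partial'; text: string }
  | { type: 'final'; text: string }
  | { type: 'error'; error: Error };

// Hypothetical shape of the deprecated callback triple.
type LegacyCallbacks = {
  onPartialResult: (text: string) => void;
  onFinalResult: (text: string) => void;
  onError: (error: Error) => void;
};

// Wrap the old callbacks in a single onEvent handler.
function toOnEvent(legacy: LegacyCallbacks): (event: LlmEvent) => void {
  return (event) => {
    switch (event.type) {
      case 'partial': return legacy.onPartialResult(event.text);
      case 'final': return legacy.onFinalResult(event.text);
      case 'error': return legacy.onError(event.error);
    }
  };
}

const parts: string[] = [];
let final = '';
const onEvent = toOnEvent({
  onPartialResult: (t) => parts.push(t),
  onFinalResult: (t) => { final = t; },
  onError: (e) => console.error(e),
});

onEvent({ type: 'partial', text: 'Hel' });
onEvent({ type: 'partial', text: 'lo' });
onEvent({ type: 'final', text: 'Hello' });
console.log(parts.join(''), final); // "Hello Hello"
```

Because the union is discriminated on `type`, the `switch` is exhaustive and the compiler narrows `event` in each branch.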
`onEvent` receives a discriminated union:
```ts
type LlmEvent =
  | { type: 'partial'; text: string }
  | { type: 'final'; text: string }
  | { type: 'error'; error: Error };
```

You can cancel an async generate if needed.

```js
amaryllis.cancelAsync();
```

## 🧠 Context Engine
The Context Engine is an interface-first layer for memory and retrieval. You bring your own `ContextStore` (SQLite, files, or a custom DB) while the engine handles validation, policy bounds, and optional scoring.

Context APIs are also available via the `react-native-amaryllis/context` subpath.
```js
import { ContextEngine } from 'react-native-amaryllis/context';

const engine = new ContextEngine({
  store: myStore,
  policy: { maxItems: 1000, defaultTtlSeconds: 60 * 60 * 24 },
});

await engine.add([{ id: 'mem-1', text: 'Quest started', createdAt: Date.now() }]);
const results = await engine.search({ text: 'quest', limit: 5 });
```

See docs/context-engine.md for details.
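Since the store is pluggable, the simplest possible backend is an in-memory map. The sketch below is plain TypeScript; the `ContextItem` and `ContextStore` shapes are inferred from the example above, not the library's actual types, which may differ. It implements naive substring search:

```typescript
// Inferred item and store shapes; the real interfaces live in
// react-native-amaryllis/context and may differ.
type ContextItem = { id: string; text: string; createdAt: number };

interface ContextStore {
  add(items: ContextItem[]): Promise<void>;
  search(query: { text: string; limit?: number }): Promise<ContextItem[]>;
}

class InMemoryStore implements ContextStore {
  private items = new Map<string, ContextItem>();

  async add(items: ContextItem[]): Promise<void> {
    // Upsert by id.
    for (const item of items) this.items.set(item.id, item);
  }

  async search({ text, limit = 10 }: { text: string; limit?: number }): Promise<ContextItem[]> {
    // Case-insensitive substring match, capped at `limit` results.
    const needle = text.toLowerCase();
    return [...this.items.values()]
      .filter((i) => i.text.toLowerCase().includes(needle))
      .slice(0, limit);
  }
}

const store = new InMemoryStore();

(async () => {
  await store.add([{ id: 'mem-1', text: 'Quest started', createdAt: Date.now() }]);
  const hits = await store.search({ text: 'quest' });
  console.log(hits.length); // 1
})();
```

A production store would swap the map for SQLite or files and the substring filter for embedding or keyword scoring, keeping the same interface.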
## 📚 Documentation

## 🤝 Contributing

We welcome contributions! Please see CONTRIBUTING.md for guidelines.

## 📄 License

This project is MIT licensed.
