# @nexaleaf/react-ai-hooks

React hooks for seamless LLM integration with streaming, conversation management, and multi-provider support.
## Features

- ✅ **Provider Support**: OpenAI and Anthropic behind a unified API
- ✅ **Streaming built-in**: Real-time token-by-token updates for ChatGPT-like experiences
- ✅ **TypeScript-first**: Comprehensive type definitions out of the box
- ✅ **SSR & Edge-ready**: Works with Next.js, Remix, and other modern frameworks
- ✅ **Lightweight**: 10.8 kB package (52.8 kB unpacked)
- ✅ **Enterprise-ready**: Built-in retry logic, circuit breakers, rate limiting, and error handling
- ✅ **Load Balancing**: Fallback providers and load-balancing support
## Installation

```bash
npm install @nexaleaf/react-ai-hooks
# or
yarn add @nexaleaf/react-ai-hooks
# or
pnpm add @nexaleaf/react-ai-hooks
```

## Quick Start
```jsx
import { useState } from 'react';
import { useLLM, useChatCompletion, useStreamingResponse } from '@nexaleaf/react-ai-hooks';

// Simple LLM generation
function TextGenerator() {
  const { generate, loading, result, error } = useLLM({
    provider: 'openai',
    apiKey: process.env.REACT_APP_OPENAI_API_KEY,
  });

  return (
    <div>
      <button onClick={() => generate('Write a haiku about coding')}>
        Generate
      </button>
      {loading && <p>Loading...</p>}
      {result && <p>{result}</p>}
      {error && <p>Error: {error.message}</p>}
    </div>
  );
}

// Chat conversation
function ChatInterface() {
  const { messages, sendMessage, loading } = useChatCompletion({
    provider: 'openai',
    apiKey: process.env.REACT_APP_OPENAI_API_KEY,
  });
  const [input, setInput] = useState('');

  const handleSend = () => {
    sendMessage(input);
    setInput('');
  };

  return (
    <div>
      <div>
        {messages.map((msg, i) => (
          <div key={i}>
            <strong>{msg.role}:</strong> {msg.content}
          </div>
        ))}
      </div>
      <input
        value={input}
        onChange={(e) => setInput(e.target.value)}
        onKeyDown={(e) => e.key === 'Enter' && handleSend()}
      />
      <button onClick={handleSend} disabled={loading}>
        Send
      </button>
    </div>
  );
}

// Streaming response
function StreamingChat() {
  const { streamText, currentText, isStreaming } = useStreamingResponse({
    provider: 'openai',
    apiKey: process.env.REACT_APP_OPENAI_API_KEY,
  });

  return (
    <div>
      <button onClick={() => streamText('Tell me a story')}>
        Start Stream
      </button>
      <div>
        {currentText}
        {isStreaming && <span className="cursor">|</span>}
      </div>
    </div>
  );
}
```

## Available Hooks
### useLLM

General-purpose hook for single prompt-response interactions.

```jsx
const { generate, loading, result, error } = useLLM({
  provider: 'openai',
  apiKey: 'your-api-key',
  model: 'gpt-4',
  temperature: 0.7,
});
```

### useChatCompletion
Handles chat conversations with message history.

```jsx
const { messages, sendMessage, loading, clearMessages } = useChatCompletion({
  provider: 'anthropic',
  apiKey: 'your-api-key',
  model: 'claude-3-sonnet-20240229',
});
```
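`clearMessages` resets the stored history. A minimal sketch wiring it to a "New chat" button, assuming `clearMessages` simply empties the `messages` array:

```jsx
function ChatControls() {
  const { messages, clearMessages } = useChatCompletion({
    provider: 'anthropic',
    apiKey: process.env.REACT_APP_ANTHROPIC_API_KEY,
  });

  return (
    <div>
      <p>{messages.length} messages in this conversation</p>
      {/* clearMessages() empties the history so the next sendMessage starts fresh */}
      <button onClick={() => clearMessages()}>New chat</button>
    </div>
  );
}
```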
### useStreamingResponse

Real-time streaming for token-by-token updates.

```jsx
const { streamText, currentText, isStreaming, stop } = useStreamingResponse({
  provider: 'openai',
  apiKey: 'your-api-key',
});
```
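`stop` cancels an in-flight stream. A minimal sketch wiring it to a button, assuming `stop()` aborts the request and leaves `currentText` at whatever has arrived so far:

```jsx
function StoryStreamer() {
  const { streamText, currentText, isStreaming, stop } = useStreamingResponse({
    provider: 'openai',
    apiKey: process.env.REACT_APP_OPENAI_API_KEY,
  });

  return (
    <div>
      <button onClick={() => streamText('Tell me a story')} disabled={isStreaming}>
        Start
      </button>
      {/* Aborts the in-flight stream; the partial text already received is kept */}
      <button onClick={() => stop()} disabled={!isStreaming}>
        Stop
      </button>
      <p>{currentText}</p>
    </div>
  );
}
```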
### useEmbeddings

Generate text embeddings for search and RAG applications.

```jsx
const { embed, vector, loading } = useEmbeddings({
  provider: 'openai',
  apiKey: 'your-api-key',
});
```
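For semantic search, embeddings are typically compared with cosine similarity. A sketch of that step; the `cosineSimilarity` helper is plain math rather than part of this package, and `queryVector` and `documents` are hypothetical values (a query embedding and previously embedded documents, assuming each vector is a numeric array):

```jsx
// Plain cosine similarity between two embedding vectors (not part of the package)
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank stored document vectors against a query vector, highest similarity first
const ranked = documents
  .map((doc) => ({ ...doc, score: cosineSimilarity(queryVector, doc.vector) }))
  .sort((a, b) => b.score - a.score);
```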
## Currently Supported Providers

- **OpenAI** ✅ - GPT-4o, GPT-4o-mini, GPT-4, GPT-3.5-turbo, embeddings
- **Anthropic** ✅ - Claude 3.5 Sonnet, Claude 3 (Opus, Sonnet, Haiku)

### Coming Soon

- **Google** 🚧 - Gemini Pro, Gemini Pro Vision
- **Ollama** 🚧 - Local models (Llama, Mistral, etc.)
- **Custom** 🚧 - Bring your own API endpoint
## Configuration

### Environment Variables

Create a `.env.local` file:

```bash
# OpenAI
REACT_APP_OPENAI_API_KEY=your_openai_api_key

# Anthropic
REACT_APP_ANTHROPIC_API_KEY=your_anthropic_api_key

# Google (reserved for the upcoming Gemini provider)
REACT_APP_GOOGLE_API_KEY=your_google_api_key
```

### Provider Configuration
```jsx
const config = {
  provider: 'openai',
  apiKey: process.env.REACT_APP_OPENAI_API_KEY,
  model: 'gpt-4',
  temperature: 0.7,
  maxTokens: 1000,
  baseURL: 'https://api.openai.com/v1', // Custom endpoint
  organization: 'your-org-id', // OpenAI-specific
};
```

## Advanced Features
### Multi-Provider Support & Load Balancing

```jsx
import {
  useMultiProvider,
  createLoadBalancedProvider,
} from '@nexaleaf/react-ai-hooks';

const { getProvider } = useMultiProvider();

// Get different providers as needed
const openaiProvider = getProvider({
  provider: 'openai',
  apiKey: process.env.REACT_APP_OPENAI_API_KEY,
});

const anthropicProvider = getProvider({
  provider: 'anthropic',
  apiKey: process.env.REACT_APP_ANTHROPIC_API_KEY,
});

// Or create a load-balanced setup with fallbacks
const provider = createLoadBalancedProvider(
  { provider: 'openai', apiKey: process.env.OPENAI_KEY },
  [{ provider: 'anthropic', apiKey: process.env.ANTHROPIC_KEY }]
);
```

### Error Handling
```jsx
const { generate, error } = useLLM({
  provider: 'openai',
  apiKey: 'your-api-key',
  onError: (error) => {
    console.error('LLM Error:', error);
    // Custom error handling
  },
});
```

### Retry Configuration
```jsx
const config = {
  provider: 'openai',
  apiKey: 'your-api-key',
  retryAttempts: 3,            // Number of retry attempts for failed requests
  retryDelay: 1000,            // Delay between retries (ms)
  circuitBreakerThreshold: 5,  // Failures before the circuit breaker opens
};
```

## TypeScript Support
Full TypeScript support with comprehensive type definitions:

```ts
import type {
  LLMResponse,
  StreamingChunk,
  BaseMessage,
  OpenAIConfig,
} from '@nexaleaf/react-ai-hooks';

interface CustomResponse extends LLMResponse {
  customField: string;
}
```

## Contributing
We welcome contributions! Please see our Contributing Guide for details.

## License

MIT © NexaLeaf

## Support

## Current Integrations

This library is actively used in production by:

- @nexaleaf/pr-reviewer - AI-powered code review bot for GitHub Actions
## Roadmap

- [ ] Google Gemini provider implementation
- [ ] Ollama provider for local models
- [ ] Custom provider support
- [ ] Function calling / tools support
- [ ] Built-in RAG utilities
- [ ] React Native support
- [ ] More provider integrations (Cohere, Together AI)
