💬 @enkaliprime/nextjs-chat-sdk
Official Next.js SDK for EnkaliPrime Chat API
Powered by EnkaliBridge for secure, unified API access with custom UI/UX design
✨ Features
- ✅ RAG-Enabled AI - Retrieval-Augmented Generation with Knowledge Base support
- ✅ Real-time Messaging - Instant message delivery with AI responses
- ✅ Streaming Support - Optional real-time streaming responses
- ✅ Conversation History - Automatic context management (last 10 messages)
- ✅ Session Management - Persistent chat sessions
- ✅ TypeScript Support - Full type safety
- ✅ Next.js Ready - Works with App Router and Pages Router
- ✅ Server & Client Side - Works in both API routes and client components
📦 Installation
```bash
npm install @enkaliprime/nextjs-chat-sdk
```

No additional dependencies required! The SDK uses the standard `fetch` API, which is available in Next.js out of the box.
🚀 Quick Start
1. Environment Variables
Create a .env.local file in your Next.js project root:
```env
NEXT_PUBLIC_ENKALI_BRIDGE_API_KEY=ek_bridge_1763490675941_km8imacsz5
NEXT_PUBLIC_ENKALI_BRIDGE_BASE_URL=https://sdk.enkaliprime.com
```

2. Basic Usage (Client Component)

Note: When using `useEnkaliChat` in the Next.js App Router, add `'use client'` at the top of your component file.
```tsx
'use client';

import { useEffect, useState } from 'react';
import { useEnkaliChat } from '@enkaliprime/nextjs-chat-sdk';

export default function ChatPage() {
  const [inputText, setInputText] = useState('');
  const {
    messages,
    isLoading,
    error,
    isTyping,
    sendMessage,
    createSession,
    clearError,
  } = useEnkaliChat(
    {
      unifiedApiKey: process.env.NEXT_PUBLIC_ENKALI_BRIDGE_API_KEY!,
      baseUrl: process.env.NEXT_PUBLIC_ENKALI_BRIDGE_BASE_URL!,
      userId: 'user_123',
    },
    {
      agentName: 'Sarah',
      enableStreaming: false,
      onError: (error) => console.error('Chat error:', error),
    }
  );

  // Create the chat session once on mount.
  useEffect(() => {
    createSession();
  }, []);

  const handleSend = async () => {
    if (inputText.trim() && !isLoading) {
      await sendMessage(inputText.trim());
      setInputText('');
    }
  };

  return (
    <div className="flex flex-col h-screen max-w-4xl mx-auto p-4">
      <div className="flex-1 overflow-y-auto mb-4 space-y-4">
        {messages.map((message) => (
          <div
            key={message.id}
            className={`flex ${message.isUser ? 'justify-end' : 'justify-start'}`}
          >
            <div
              className={`max-w-xs lg:max-w-md px-4 py-2 rounded-lg ${
                message.isUser
                  ? 'bg-blue-500 text-white'
                  : 'bg-gray-200 text-gray-800'
              }`}
            >
              <p>{message.text}</p>
            </div>
          </div>
        ))}
        {isTyping && (
          <div className="flex justify-start">
            <div className="bg-gray-200 text-gray-800 px-4 py-2 rounded-lg">
              <p>Agent is typing...</p>
            </div>
          </div>
        )}
      </div>
      <div className="flex gap-2">
        <input
          type="text"
          value={inputText}
          onChange={(e) => setInputText(e.target.value)}
          onKeyDown={(e) => e.key === 'Enter' && handleSend()}
          placeholder="Type your message..."
          className="flex-1 px-4 py-2 border border-gray-300 rounded-lg"
          disabled={isLoading}
        />
        <button
          onClick={handleSend}
          disabled={!inputText.trim() || isLoading}
          className="px-6 py-2 bg-blue-500 text-white rounded-lg hover:bg-blue-600 disabled:bg-gray-300"
        >
          Send
        </button>
      </div>
    </div>
  );
}
```

3. Server-Side Usage (API Route)
```ts
import { NextRequest, NextResponse } from 'next/server';
import { EnkaliPrimeClient } from '@enkaliprime/nextjs-chat-sdk';

export async function POST(request: NextRequest) {
  try {
    const { message, sessionId, context } = await request.json();

    if (!message || !sessionId) {
      return NextResponse.json(
        { error: 'Message and sessionId are required' },
        { status: 400 }
      );
    }

    const client = new EnkaliPrimeClient({
      unifiedApiKey: process.env.ENKALI_BRIDGE_API_KEY!,
      baseUrl: process.env.ENKALI_BRIDGE_BASE_URL!,
    });

    const response = await client.sendMessage(
      message,
      sessionId,
      context || [],
      false
    );

    return NextResponse.json({ message: response });
  } catch (error) {
    console.error('Chat API error:', error);
    return NextResponse.json(
      {
        error: error instanceof Error ? error.message : 'Failed to send message',
      },
      { status: 500 }
    );
  }
}
```

📚 API Reference
useEnkaliChat Hook
React hook for managing chat state in client components.
```ts
const {
  messages,       // Array of chat messages
  isLoading,      // Loading state
  error,          // Error message
  isTyping,       // Typing indicator state
  session,        // Current chat session
  sendMessage,    // Function to send messages
  createSession,  // Function to create a chat session
  endSession,     // Function to end the chat session
  clearHistory,   // Function to clear conversation history
  clearError,     // Function to clear errors
} = useEnkaliChat(config, options);
```

Multi-model routing with llmCouncil

To enable “LLM council” voting (useful for Ollama + cloud models), pass `llmCouncil` in the second argument:
```ts
const chat = useEnkaliChat(
  {
    unifiedApiKey: process.env.NEXT_PUBLIC_ENKALI_BRIDGE_API_KEY!,
    baseUrl: process.env.NEXT_PUBLIC_ENKALI_BRIDGE_BASE_URL!,
    userId: 'user_123',
  },
  {
    llmCouncil: {
      strategy: 'vote',
      models: [
        { id: 'c1', provider: 'ollama', model: 'llama3' },
        { id: 'c2', provider: 'cloud', model: 'gpt-4.1-mini' },
      ],
      judge: { provider: 'cloud', model: 'gpt-4.1-mini', temperature: 0 },
      /**
       * Optional: internal aggregation step.
       * When enabled, the SDK runs an additional call that synthesizes
       * a single unified output using ALL candidate model responses.
       */
      aggregate: {
        provider: 'cloud',
        model: 'gpt-4.1-mini',
        temperature: 0,
        promptTemplate:
          'Synthesize a single unified answer for: "{{message}}". Using these candidates:\n\n{{candidates}}\n\nReturn ONLY the final merged answer in Markdown.',
      },
      subAgents: [
        {
          name: 'planner',
          prompt:
            'Given the user message: "{{message}}", draft a high-quality answer plan based on: "{{current}}".',
          modelId: 'c1',
        },
      ],
    },
  }
);
```

LLM Council: Nuclear Research multi-stage pipeline (end-to-end example)
You can build a multi-model “research lab” app by combining:
- `llmCouncil` for parallel role-based research + optional internal aggregation
- `sendMessage` for sequential orchestration steps (planning, synthesis, visualization, expansion, export)
Architecture diagram
```mermaid
flowchart LR
    user["User Research Request"] --> planner["Planner call (single model) + tasks list"]
    planner --> council3["LLM Council (parallel role models via Ollama)"]
    council3 --> aggregate4["Internal aggregation (unified report)"]
    aggregate4 --> visualize5["Visualization spec + chart/table data extraction"]
    visualize5 --> expand6["Knowledge expansion loop (role re-expansion)"]
    expand6 --> assemble7["Final document assembly"]
    assemble7 --> export["Export PDF/DOCX/PPT"]
```

Step-by-step pipeline
Planner (Task Decomposition)
- Goal: break the request into research tasks and assign roles.
- Implementation: one `sendMessage` call (see the sketch below).
- Output (example): a JSON plan containing tasks such as fundamentals, environment, economics, policy, case studies.
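A minimal sketch of the planner call, assuming `client` is an `EnkaliPrimeClient`, `sessionId` comes from an active session, `researchRequest` holds the user's request, and `sendMessage` resolves to the assistant's reply text (the task schema here is illustrative, not an SDK contract):

```ts
const planRaw = await client.sendMessage(
  `Decompose the research request below into discrete tasks.
Return ONLY JSON: { "tasks": [{ "role": string, "objective": string }] }

REQUEST:
${researchRequest}`,
  sessionId
);

// Models sometimes wrap JSON in Markdown fences; strip them before parsing in real code.
const plan = JSON.parse(planRaw);
```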
Distributed research (Step 3)
- Goal: run multiple models in parallel, each acting as a role specialist.
- Implementation: one `llmCouncil` call.
- Example roles:
  - `ScientificResearch`: deep technical facts + references
  - `EnvironmentalAnalysis`: coastal ecosystems + risks
  - `DataAnalyst`: numeric extraction + comparisons
  - `PolicyAnalyst`: governance + regulations
  - `EnergySystems`: feasibility inside coastal energy grids
- How to set it up (see the sketch below):
  - Put each role model into `llmCouncil.models[]`.
  - Put role-specific instructions into your prompt (the prompt you send with `sendMessage`).
  - Optionally use `subAgents` to do iterative “knowledge expansion” steps inside each candidate.
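A minimal sketch of that setup in the hook's second argument, assuming `config` is the same connection object shown earlier; the Ollama model tags are illustrative and assumed to be pulled locally:

```ts
const research = useEnkaliChat(config, {
  llmCouncil: {
    strategy: 'vote',
    // One candidate per research role; the role instructions themselves
    // live in the prompt you pass to sendMessage.
    models: [
      { id: 'ScientificResearch', provider: 'ollama', model: 'llama3' },
      { id: 'EnvironmentalAnalysis', provider: 'ollama', model: 'mistral' },
      { id: 'DataAnalyst', provider: 'ollama', model: 'qwen2.5' },
      { id: 'PolicyAnalyst', provider: 'ollama', model: 'llama3' },
      { id: 'EnergySystems', provider: 'ollama', model: 'mistral' },
    ],
    judge: { provider: 'cloud', model: 'gpt-4.1-mini', temperature: 0 },
  },
});
```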
Reasoning Aggregation (Step 4)
- Goal: merge all role outputs into one unified, conflict-resolved research document.
- Implementation: enabled by `llmCouncil.aggregate`.
- The SDK passes all candidate texts into an aggregation call and returns a single merged message.
Visualization Engine (Step 5)
- Goal: convert numeric insights into chart specs + chart-ready tables.
- Implementation: another `sendMessage` call after the unified report (see the sketch below).
- Prompt contract (recommended): return a JSON object containing:
  - `charts[]` (type, title, datasets)
  - `tables[]` (rows, columns)
  - `keyMetrics[]` (for quick display)
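A minimal sketch of that call, assuming `unifiedReport` holds the Step 4 output and `sendMessage` resolves to the reply text:

```ts
const vizRaw = await client.sendMessage(
  `From the research report below, extract chart-ready data.
Return ONLY a JSON object with "charts", "tables", and "keyMetrics" arrays.

REPORT:
${unifiedReport}`,
  sessionId
);

// Strip Markdown fences before parsing if your models tend to add them.
const vizSpec = JSON.parse(vizRaw);
```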
Knowledge Expansion Loop (Step 6)
- Goal: ask each role to deepen its section using the unified report as context.
- Implementation options (Option A is sketched below):
  - Option A: call `llmCouncil` again with the same roles and prompt them: “Expand your previous section using the unified report below…”
  - Option B: use `subAgents` in the same council call if you want fewer orchestration steps.
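A minimal sketch of Option A, reusing the council-configured hook from Step 3 (`unifiedReport` is again the Step 4 output):

```ts
// Every role model receives the unified report and deepens its own section.
await sendMessage(
  `Expand your previous section using the unified report below.
Keep your role focus and preserve the "Key Data" subsection.

UNIFIED REPORT:
${unifiedReport}`
);
```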
Document Assembly (Step 7)
- Goal: compile everything into a professional document with headings and references.
- Implementation: final `sendMessage` call requesting a structured output format (see the sketch below):
  - Markdown with required headings and a “References” section.
  - If your export tool needs JSON, request JSON instead.
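A minimal sketch of the assembly call; `expandedSections` (an array of Step 6 outputs) and the heading list are illustrative:

```ts
const finalDoc = await client.sendMessage(
  `Assemble the sections below into one professional report.
Required headings: Executive Summary, Findings, Data & Charts, References.
Return Markdown only.

SECTIONS:
${expandedSections.join('\n\n---\n\n')}`,
  sessionId
);
```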
Export
- Goal: export your compiled report and charts.
- Implementation: do the export in your app (server-side) using libraries like the following (a `pdf-lib` sketch follows this list):
  - PDF: `pdf-lib`, `puppeteer`, or a document renderer
  - DOCX: `docx`
  - Slides: `pptxgenjs`
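For example, a minimal server-side PDF sketch with `pdf-lib` (the layout here is deliberately naive; real exports need line wrapping, pagination, and Markdown rendering):

```ts
import { PDFDocument, StandardFonts } from 'pdf-lib';

async function exportPdf(reportText: string): Promise<Uint8Array> {
  const pdf = await PDFDocument.create();
  const font = await pdf.embedFont(StandardFonts.Helvetica);
  const page = pdf.addPage();
  const { height } = page.getSize();

  // Draw a single line near the top; wrap and paginate in real code.
  page.drawText(reportText.slice(0, 120), { x: 50, y: height - 50, size: 11, font });

  return pdf.save();
}
```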
Example orchestration prompt (role-based research)
Send a single `sendMessage` call with a prompt like:

```text
You are participating in a multi-model research council.
Your assigned role is determined by the model/provider you are running as.

USER RESEARCH REQUEST:
{{message}}

For your role:
- produce a structured section with headings
- include citations/references when available in the content you generate
- extract any numbers into a small “Key Data” subsection (for later charts)

Return your section as Markdown only.
```

Then rely on `llmCouncil.aggregate` to synthesize all sections into one unified report.
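A minimal sketch of sending it, assuming `councilPrompt` holds the template above; here we interpolate the user's request into the `{{message}}` placeholder ourselves before calling `sendMessage` (the request text is illustrative):

```ts
const researchRequest = 'Assess nuclear power options for coastal energy grids.';
await sendMessage(councilPrompt.replace('{{message}}', researchRequest));
```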
EnkaliPrimeClient Class
Main client class for interacting with the EnkaliBridge API.
```ts
const client = new EnkaliPrimeClient({
  unifiedApiKey: 'ek_bridge_...',
  baseUrl: 'https://sdk.enkaliprime.com',
  userId: 'user_123', // Optional
});

// Send a message
const response = await client.sendMessage(
  'Hello!',
  'session_123',
  [],    // context (optional)
  false  // stream (optional)
);
```

📖 Full Documentation
For complete documentation, examples, and best practices, see:
- NEXTJS.md - Complete Next.js integration guide
- USE_CASES.md - Real-world use cases, implementation patterns, and error handling
- WIDGET_USAGE.md - Complete guide to using ChatKit-style widgets in your application
🔑 Getting Your API Key
- Connect your application to a Widget in the EnkaliPrime dashboard
- Navigate to the EnkaliBridge API section
- Copy your unified API key (starts with `ek_bridge_`)
💡 Best Practices
- ✅ Store API keys in environment variables
- ✅ Use the `NEXT_PUBLIC_` prefix for client-side access
- ✅ Use server-side API routes for sensitive operations
- ✅ Handle errors gracefully
- ✅ Validate user input
📚 Additional Resources
- Documentation: api.enkaliprime.com/docs
- Support: [email protected]
- Website: api.enkaliprime.com
- GitHub: github.com/Auwanga/sdk/tree/main/nextjs-sdk
📄 License
MIT License - see LICENSE file for details.
Made with ❤️ by EnkaliPrime
