
@enkaliprime/nextjs-chat-sdk v1.1.0

Official Next.js SDK for EnkaliPrime Chat API - Direct API integration for Next.js and React web applications

Downloads: 119

Readme

💬 @enkaliprime/nextjs-chat-sdk

Official Next.js SDK for EnkaliPrime Chat API

Powered by EnkaliBridge for secure, unified API access with custom UI/UX design



✨ Features

  • RAG-Enabled AI - Retrieval-Augmented Generation with Knowledge Base support
  • Real-time Messaging - Instant message delivery with AI responses
  • Streaming Support - Optional real-time streaming responses
  • Conversation History - Automatic context management (last 10 messages)
  • Session Management - Persistent chat sessions
  • TypeScript Support - Full type safety
  • Next.js Ready - Works with App Router and Pages Router
  • Server & Client Side - Works in both API routes and client components

📦 Installation

npm install @enkaliprime/nextjs-chat-sdk

No additional dependencies required! The SDK uses the standard fetch API, which is available in Next.js out of the box.


🚀 Quick Start

1. Environment Variables

Create a .env.local file in your Next.js project root:

NEXT_PUBLIC_ENKALI_BRIDGE_API_KEY=ek_bridge_your_unified_api_key
NEXT_PUBLIC_ENKALI_BRIDGE_BASE_URL=https://sdk.enkaliprime.com
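Because a missing or blank variable only surfaces later as a failed request, it can help to fail fast at startup. Below is a minimal, hypothetical helper (not part of the SDK; the function name is my own) that reports which required variables are absent:

```typescript
// Hypothetical startup check (not part of the SDK): return the names of any
// required environment variables that are missing or blank, so the app can
// fail fast with a clear error instead of sending requests with no key.
export function missingEnvVars(
  env: Record<string, string | undefined>,
  required: string[]
): string[] {
  return required.filter((name) => !env[name] || env[name]!.trim() === "");
}

// Example with a plain object standing in for process.env:
const missing = missingEnvVars(
  { NEXT_PUBLIC_ENKALI_BRIDGE_API_KEY: "ek_bridge_example" },
  ["NEXT_PUBLIC_ENKALI_BRIDGE_API_KEY", "NEXT_PUBLIC_ENKALI_BRIDGE_BASE_URL"]
);
// missing is ["NEXT_PUBLIC_ENKALI_BRIDGE_BASE_URL"]
```

Calling it with `process.env` and the two variable names above at app startup turns a silent misconfiguration into an immediate, readable error.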

2. Basic Usage (Client Component)

Note: When using useEnkaliChat in Next.js App Router, add 'use client' at the top of your component file.

'use client';

import { useEffect, useState } from 'react';
import { useEnkaliChat } from '@enkaliprime/nextjs-chat-sdk';

export default function ChatPage() {
  const [inputText, setInputText] = useState('');

  const {
    messages,
    isLoading,
    error,
    isTyping,
    sendMessage,
    createSession,
    clearError,
  } = useEnkaliChat(
    {
      unifiedApiKey: process.env.NEXT_PUBLIC_ENKALI_BRIDGE_API_KEY!,
      baseUrl: process.env.NEXT_PUBLIC_ENKALI_BRIDGE_BASE_URL!,
      userId: 'user_123',
    },
    {
      agentName: 'Sarah',
      enableStreaming: false,
      onError: (error) => console.error('Chat error:', error),
    }
  );

  useEffect(() => {
    // Create a chat session once on mount.
    createSession();
  }, []);

  const handleSend = async () => {
    if (inputText.trim() && !isLoading) {
      await sendMessage(inputText.trim());
      setInputText('');
    }
  };

  return (
    <div className="flex flex-col h-screen max-w-4xl mx-auto p-4">
      <div className="flex-1 overflow-y-auto mb-4 space-y-4">
        {messages.map((message) => (
          <div
            key={message.id}
            className={`flex ${message.isUser ? 'justify-end' : 'justify-start'}`}
          >
            <div
              className={`max-w-xs lg:max-w-md px-4 py-2 rounded-lg ${
                message.isUser
                  ? 'bg-blue-500 text-white'
                  : 'bg-gray-200 text-gray-800'
              }`}
            >
              <p>{message.text}</p>
            </div>
          </div>
        ))}
        {isTyping && (
          <div className="flex justify-start">
            <div className="bg-gray-200 text-gray-800 px-4 py-2 rounded-lg">
              <p>Agent is typing...</p>
            </div>
          </div>
        )}
      </div>

      <div className="flex gap-2">
        <input
          type="text"
          value={inputText}
          onChange={(e) => setInputText(e.target.value)}
          onKeyDown={(e) => e.key === 'Enter' && handleSend()}
          placeholder="Type your message..."
          className="flex-1 px-4 py-2 border border-gray-300 rounded-lg"
          disabled={isLoading}
        />
        <button
          onClick={handleSend}
          disabled={!inputText.trim() || isLoading}
          className="px-6 py-2 bg-blue-500 text-white rounded-lg hover:bg-blue-600 disabled:bg-gray-300"
        >
          Send
        </button>
      </div>
    </div>
  );
}

3. Server-Side Usage (API Route)

import { NextRequest, NextResponse } from 'next/server';
import { EnkaliPrimeClient } from '@enkaliprime/nextjs-chat-sdk';

export async function POST(request: NextRequest) {
  try {
    const { message, sessionId, context } = await request.json();

    if (!message || !sessionId) {
      return NextResponse.json(
        { error: 'Message and sessionId are required' },
        { status: 400 }
      );
    }

    const client = new EnkaliPrimeClient({
      unifiedApiKey: process.env.ENKALI_BRIDGE_API_KEY!,
      baseUrl: process.env.ENKALI_BRIDGE_BASE_URL!,
    });

    const response = await client.sendMessage(
      message,
      sessionId,
      context || [],
      false
    );

    return NextResponse.json({ message: response });
  } catch (error) {
    console.error('Chat API error:', error);
    return NextResponse.json(
      {
        error: error instanceof Error ? error.message : 'Failed to send message',
      },
      { status: 500 }
    );
  }
}
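The route above rejects bodies without `message` and `sessionId` using a truthiness check. A sketch of a slightly stricter, reusable validator (my own helper, not part of the SDK) that also verifies types before the client is invoked:

```typescript
// Hypothetical request-body validator mirroring the API route's checks.
// Returns an error string for a bad body, or null when the body is usable.
interface ChatRequestBody {
  message?: unknown;
  sessionId?: unknown;
  context?: unknown;
}

export function validateChatBody(body: ChatRequestBody): string | null {
  if (typeof body.message !== "string" || body.message.trim() === "") {
    return "Message and sessionId are required";
  }
  if (typeof body.sessionId !== "string" || body.sessionId.trim() === "") {
    return "Message and sessionId are required";
  }
  if (body.context !== undefined && !Array.isArray(body.context)) {
    return "context must be an array";
  }
  return null;
}
```

In the route handler this replaces the inline `if (!message || !sessionId)` check: call `validateChatBody` on the parsed JSON and return a 400 response when it yields a non-null error.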

📚 API Reference

useEnkaliChat Hook

React hook for managing chat state in client components.

const {
  messages,        // Array of chat messages
  isLoading,       // Loading state
  error,           // Error message
  isTyping,        // Typing indicator state
  session,         // Current chat session
  sendMessage,     // Function to send messages
  createSession,   // Function to create a chat session
  endSession,      // Function to end the chat session
  clearHistory,    // Function to clear conversation history
  clearError,      // Function to clear errors
} = useEnkaliChat(config, options);
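The `messages` array can be consumed outside of JSX as well, e.g. for logging or a copy-to-clipboard feature. A small sketch, assuming the message shape (`id`, `text`, `isUser`) shown in the client-component example above; `formatTranscript` is a hypothetical helper, not an SDK export:

```typescript
// Assumed message shape, inferred from the client-component example above.
interface ChatMessage {
  id: string;
  text: string;
  isUser: boolean;
}

// Hypothetical helper: render the hook's `messages` array as a plain-text
// transcript, one "Speaker: text" line per message.
export function formatTranscript(messages: ChatMessage[]): string {
  return messages
    .map((m) => `${m.isUser ? "You" : "Agent"}: ${m.text}`)
    .join("\n");
}

// formatTranscript([
//   { id: "1", text: "Hi", isUser: true },
//   { id: "2", text: "Hello!", isUser: false },
// ]) === "You: Hi\nAgent: Hello!"
```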

Multi-model routing with llmCouncil

To enable “LLM council” voting (useful for Ollama + cloud models), pass llmCouncil in the second argument:

const chat = useEnkaliChat(
  {
    unifiedApiKey: process.env.NEXT_PUBLIC_ENKALI_BRIDGE_API_KEY!,
    baseUrl: process.env.NEXT_PUBLIC_ENKALI_BRIDGE_BASE_URL!,
    userId: 'user_123',
  },
  {
    llmCouncil: {
      strategy: 'vote',
      models: [
        { id: 'c1', provider: 'ollama', model: 'llama3' },
        { id: 'c2', provider: 'cloud', model: 'gpt-4.1-mini' },
      ],
      judge: { provider: 'cloud', model: 'gpt-4.1-mini', temperature: 0 },
      /**
       * Optional: internal aggregation step.
       * When enabled, the SDK runs an additional call that synthesizes
       * a single unified output using ALL candidate model responses.
       */
      aggregate: {
        provider: 'cloud',
        model: 'gpt-4.1-mini',
        temperature: 0,
        promptTemplate:
          'Synthesize a single unified answer for: "{{message}}". Using these candidates:\n\n{{candidates}}\n\nReturn ONLY the final merged answer in Markdown.',
      },
      subAgents: [
        {
          name: 'planner',
          prompt: 'Given the user message: "{{message}}", draft a high-quality answer plan based on: "{{current}}".',
          modelId: 'c1',
        },
      ],
    },
  }
);

LLM Council: Nuclear Research multi-stage pipeline (end-to-end example)

You can build a multi-model “research lab” app by combining:

  • llmCouncil for parallel role-based research + optional internal aggregation
  • sendMessage for sequential orchestration steps (planning, synthesis, visualization, expansion, export)

Architecture diagram

flowchart LR
  user["User Research Request"] --> planner["Planner call (single model) + tasks list"]
  planner --> council3["LLM Council (parallel role models via Ollama)"]
  council3 --> aggregate4["Internal aggregation (unified report)"]
  aggregate4 --> visualize5["Visualization spec + chart/table data extraction"]
  visualize5 --> expand6["Knowledge expansion loop (role re-expansion)"]
  expand6 --> assemble7["Final document assembly"]
  assemble7 --> export["Export PDF/DOCX/PPT"]

Step-by-step pipeline

  1. Planner (Task Decomposition)

    • Goal: break the request into research tasks and assign roles.
    • Implementation: one sendMessage call.
    • Output (example): a JSON plan containing tasks such as fundamentals, environment, economics, policy, case studies.
  2. Distributed research (Step 3)

    • Goal: run multiple models in parallel, each acting as a role specialist.
    • Implementation: one llmCouncil call.
    • Example roles:
      • ScientificResearch: deep technical facts + references
      • EnvironmentalAnalysis: coastal ecosystems + risks
      • DataAnalyst: numeric extraction + comparisons
      • PolicyAnalyst: governance + regulations
      • EnergySystems: feasibility inside coastal energy grids
    • How to set it up:
      • Put each role model into llmCouncil.models[].
      • Put role-specific instructions into your prompt (the prompt you send with sendMessage).
      • Optionally use subAgents to do iterative “knowledge expansion” steps inside each candidate.
  3. Reasoning Aggregation (Step 4)

    • Goal: merge all role outputs into one unified, conflict-resolved research document.
    • Implementation: enabled by llmCouncil.aggregate.
    • The SDK passes all candidate texts into an aggregation call and returns a single merged message.
  4. Visualization Engine (Step 5)

    • Goal: convert numeric insights into chart specs + chart-ready tables.
    • Implementation: another sendMessage call after the unified report.
    • Prompt contract (recommended):
      • Return a JSON object containing:
        • charts[] (type, title, datasets)
        • tables[] (rows, columns)
        • keyMetrics[] (for quick display)
  5. Knowledge Expansion Loop (Step 6)

    • Goal: ask each role to deepen its section using the unified report as context.
    • Implementation options:
      • Option A: call llmCouncil again with the same roles and prompt them:
        • “Expand your previous section using the unified report below…”
      • Option B: use subAgents in the same council call if you want fewer orchestration steps.
  6. Document Assembly (Step 7)

    • Goal: compile everything into a professional document with headings and references.
    • Implementation: final sendMessage call requesting a structured output format:
      • Markdown with required headings and a “References” section.
      • If your export tool needs JSON, request JSON instead.
  7. Export

    • Goal: export your compiled report and charts.
    • Implementation: do the export in your app (server-side) using libraries like:
      • PDF: pdf-lib, puppeteer, or a document renderer
      • DOCX: docx
      • Slides: pptxgenjs
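Several of the prompts in this pipeline use the `{{placeholder}}` convention (`{{message}}`, `{{candidates}}`, `{{current}}`). If you orchestrate the sequential steps yourself, you need to fill those placeholders before each `sendMessage` call. A minimal sketch of that substitution (my own helper, not an SDK export); unknown placeholders are left untouched so downstream steps can fill them later:

```typescript
// Hypothetical helper implementing the {{placeholder}} convention used by
// promptTemplate and subAgents prompts: replace every {{name}} with the
// matching value, leaving unknown placeholders as-is.
export function fillTemplate(
  template: string,
  values: Record<string, string>
): string {
  return template.replace(
    /\{\{(\w+)\}\}/g,
    (match, name: string) => values[name] ?? match
  );
}

// Example: fill the planner prompt, leave {{candidates}} for a later step.
const prompt = fillTemplate(
  'Plan research for: "{{message}}". Candidates: {{candidates}}',
  { message: "thorium reactor feasibility" }
);
// prompt === 'Plan research for: "thorium reactor feasibility". Candidates: {{candidates}}'
```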

Example orchestration prompt (role-based research)

Send a single sendMessage with a prompt like:

You are participating in a multi-model research council.
Your assigned role is determined by the model/provider you are running as.

USER RESEARCH REQUEST:
{{message}}

For your role:
- produce a structured section with headings
- include citations/references when available in the content you generate
- extract any numbers into a small “Key Data” subsection (for later charts)

Return your section as Markdown only.

Then rely on llmCouncil.aggregate to synthesize all sections into one unified report.

EnkaliPrimeClient Class

Main client class for interacting with EnkaliBridge API.

const client = new EnkaliPrimeClient({
  unifiedApiKey: 'ek_bridge_...',
  baseUrl: 'https://sdk.enkaliprime.com',
  userId: 'user_123', // Optional
});

// Send a message
const response = await client.sendMessage(
  'Hello!',
  'session_123',
  [], // context (optional)
  false // stream (optional)
);
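Network calls to the bridge can fail transiently, so it is common to wrap `client.sendMessage` in a retry. A generic sketch under my own assumptions (the helper, attempt count, and delays are illustrative, not SDK behavior):

```typescript
// Hypothetical retry wrapper with simple exponential backoff: retry a flaky
// async call up to `attempts` times, doubling the delay between attempts.
export async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 250
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // Wait baseDelayMs, then 2x, then 4x, ... before the next attempt.
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}

// Usage with the client above:
// const response = await withRetry(() =>
//   client.sendMessage('Hello!', 'session_123', [], false)
// );
```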

📖 Full Documentation

For complete documentation, examples, and best practices, see:

  • NEXTJS.md - Complete Next.js integration guide
  • USE_CASES.md - Real-world use cases, implementation patterns, and error handling
  • WIDGET_USAGE.md - Complete guide to using ChatKit-style widgets in your application

🔑 Getting Your API Key

  1. Connect your application to a Widget in the EnkaliPrime dashboard
  2. Navigate to the EnkaliBridge API section
  3. Copy your unified API key (starts with ek_bridge_)
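Since all bridge keys share the `ek_bridge_` prefix, a quick prefix check catches the common mistake of pasting a different credential into `.env.local`. A tiny hypothetical sanity check (my own helper, not part of the SDK):

```typescript
// Hypothetical sanity check: unified bridge keys start with "ek_bridge_",
// so reject anything that lacks the prefix or has nothing after it.
export function looksLikeBridgeKey(key: string): boolean {
  const prefix = "ek_bridge_";
  return key.startsWith(prefix) && key.length > prefix.length;
}

// looksLikeBridgeKey("ek_bridge_abc123") === true
// looksLikeBridgeKey("sk-some-other-key") === false
```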

💡 Best Practices

  • ✅ Store API keys in environment variables
  • ✅ Use NEXT_PUBLIC_ prefix for client-side access
  • ✅ Use server-side API routes for sensitive operations
  • ✅ Handle errors gracefully
  • ✅ Validate user input


📄 License

MIT License - see LICENSE file for details.


Made with ❤️ by EnkaliPrime

Website · Documentation · Support