# @charivo/llm-client-remote

Remote HTTP LLM client for Charivo (client-side).
## Features
- 🔐 Secure - API keys stay on server
- 🌐 HTTP-based - Works with any server endpoint
- 🎯 Type-Safe - Full TypeScript support
- 🔌 Flexible - Use any LLM provider on the backend
## Installation

```bash
pnpm add @charivo/llm-client-remote @charivo/core
```

## Usage
### Client-side Setup

```typescript
import { createRemoteLLMClient } from "@charivo/llm-client-remote";
import { createLLMManager } from "@charivo/llm-core";

const client = createRemoteLLMClient({
  apiEndpoint: "/api/chat" // Your server endpoint
});

const llmManager = createLLMManager(client);

llmManager.setCharacter({
  id: "assistant",
  name: "Hiyori",
  personality: "Cheerful and helpful"
});

const response = await llmManager.generateResponse({
  id: "1",
  content: "Hello!",
  timestamp: new Date(),
  type: "user"
});
```

### Server-side Implementation (Required)
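The remote client POSTs the chat messages to your endpoint and expects a small JSON envelope back. The shapes below are inferred from the server examples in this README (they are assumptions, not official package exports); the guard is a convenient way to sanity-check a hand-rolled backend against what the client expects:

```typescript
// Assumed wire contract for the /api/chat endpoint, inferred from the
// examples in this README (not official exports of the package).
interface ChatMessage {
  role: "user" | "assistant" | "system";
  content: string;
}

interface ChatRequest {
  messages: ChatMessage[];
}

interface ChatResponse {
  success: boolean;
  message?: string; // present on success
  error?: string;   // present on failure
}

// Runtime guard for a parsed response body.
function isChatResponse(value: unknown): value is ChatResponse {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.success === "boolean" &&
    (v.message === undefined || typeof v.message === "string") &&
    (v.error === undefined || typeof v.error === "string")
  );
}
```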
Use `@charivo/llm-provider-openai` for easy setup:

```typescript
// app/api/chat/route.ts (Next.js)
import { NextRequest, NextResponse } from "next/server";
import { createOpenAILLMProvider } from "@charivo/llm-provider-openai";

const provider = createOpenAILLMProvider({
  apiKey: process.env.OPENAI_API_KEY!,
  model: "gpt-4"
});

export async function POST(request: NextRequest) {
  try {
    const { messages } = await request.json();
    const message = await provider.generateResponse(messages);
    return NextResponse.json({ success: true, message });
  } catch (error) {
    return NextResponse.json(
      { success: false, error: "Chat failed" },
      { status: 500 }
    );
  }
}
```

## API Reference
### Constructor

```typescript
new RemoteLLMClient(config: RemoteLLMConfig)
```

### Configuration Options
```typescript
interface RemoteLLMConfig {
  /** Server API endpoint (default: "/api/chat") */
  apiEndpoint?: string;

  /** Request timeout in ms (default: 30000) */
  timeout?: number;
}
```
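Both options are optional. As a sketch of how the documented defaults are applied (`resolveConfig` is a hypothetical helper for illustration; the package's internals may differ, but the default values match the doc comments above):

```typescript
interface RemoteLLMConfig {
  apiEndpoint?: string;
  timeout?: number;
}

// Hypothetical helper showing how the documented defaults fill in
// when an option is omitted.
function resolveConfig(config: RemoteLLMConfig = {}) {
  return {
    apiEndpoint: config.apiEndpoint ?? "/api/chat", // documented default
    timeout: config.timeout ?? 30000,               // documented default, in ms
  };
}
```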
### Methods

#### call(messages)

Send messages to the server and get a response.

```typescript
const response = await client.call([
  { role: "user", content: "Hello!" },
  { role: "assistant", content: "Hi there!" },
  { role: "user", content: "How are you?" }
]);
```

## Complete Example
### Client (React)

```tsx
import { useState } from "react";
import { Charivo } from "@charivo/core";
import { createLLMManager } from "@charivo/llm-core";
import { createRemoteLLMClient } from "@charivo/llm-client-remote";

function App() {
  const [charivo] = useState(() => {
    const charivo = new Charivo();
    const client = createRemoteLLMClient({
      apiEndpoint: "/api/chat"
    });
    const llmManager = createLLMManager(client);
    charivo.attachLLM(llmManager);
    charivo.setCharacter({
      id: "hiyori",
      name: "Hiyori",
      personality: "Cheerful AI assistant"
    });
    return charivo;
  });

  const handleSend = async (message: string) => {
    await charivo.userSay(message, "hiyori");
  };

  return <ChatUI onSend={handleSend} />;
}
```

### Server (Next.js API Route)
```typescript
// app/api/chat/route.ts
import { NextRequest, NextResponse } from "next/server";
import { createOpenAILLMProvider } from "@charivo/llm-provider-openai";

const provider = createOpenAILLMProvider({
  apiKey: process.env.OPENAI_API_KEY!,
  model: "gpt-4"
});

export async function POST(request: NextRequest) {
  const { messages } = await request.json();
  const message = await provider.generateResponse(messages);
  return NextResponse.json({ success: true, message });
}
```

## Why Use Remote Client?
### Security ✅

- API keys never exposed to client
- Server-side authentication
- Rate limiting per user

### Flexibility ✅

- Switch LLM providers without client changes
- Server-side prompt engineering
- Response caching and optimization
- Custom business logic

### Cost Control ✅

- Monitor and limit API usage
- Implement quotas per user
- Cache common responses
- Optimize token usage
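Per-user rate limiting is straightforward to add on the server side. A minimal in-memory sliding-window sketch (`allowRequest` and its constants are hypothetical, not part of any Charivo package; use a shared store such as Redis once you run more than one server instance):

```typescript
const WINDOW_MS = 60_000;  // 1-minute window
const MAX_REQUESTS = 20;   // allowed requests per user per window
const hits = new Map<string, number[]>();

// Returns true if the user may make another request right now.
function allowRequest(userId: string, now: number = Date.now()): boolean {
  // Keep only timestamps still inside the current window.
  const recent = (hits.get(userId) ?? []).filter((t) => now - t < WINDOW_MS);
  if (recent.length >= MAX_REQUESTS) {
    hits.set(userId, recent);
    return false;
  }
  recent.push(now);
  hits.set(userId, recent);
  return true;
}
```

In a route handler you would call `allowRequest` before invoking the provider and return a 429 response when it yields false.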
## Error Handling

```typescript
try {
  const response = await client.call(messages);
} catch (error) {
  if (error.response?.status === 429) {
    console.error("Rate limit exceeded");
  } else if (error.response?.status === 500) {
    console.error("Server error");
  } else {
    console.error("Chat failed:", error);
  }
}
```
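Transient failures (rate limits, cold starts, timeouts) are often worth retrying. A hypothetical wrapper you could put around `client.call`; `callWithRetry` and `backoffDelay` are illustrations, not package exports:

```typescript
// Exponential backoff: 500ms, 1s, 2s, ... capped at 8s.
function backoffDelay(attempt: number): number {
  return Math.min(500 * 2 ** attempt, 8000);
}

// Retries an async operation, waiting between attempts; rethrows the
// last error once all attempts are exhausted.
async function callWithRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      await new Promise((resolve) => setTimeout(resolve, backoffDelay(attempt)));
    }
  }
  throw lastError;
}

// Usage sketch: const response = await callWithRetry(() => client.call(messages));
```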
## Custom Backend

You can use any backend that returns a response in the expected `{ success, message }` shape:

```typescript
// Your custom API
export async function POST(request: Request) {
  const { messages } = await request.json();

  // Call any LLM API (Anthropic, Cohere, etc.)
  const response = await yourLLMAPI.generateCompletion(messages);

  return Response.json({ success: true, message: response });
}
```

## Related Packages
- `@charivo/llm-provider-openai` - Server-side OpenAI provider
- `@charivo/llm-client-openai` - Direct OpenAI client (not recommended for production)
- `@charivo/llm-core` - LLM core functionality
## License

MIT
