# @reminix/langchain

v0.0.18

Reminix Runtime adapter for LangChain. Serve any LangChain runnable as a REST API.
Ready to go live? Deploy to Reminix Cloud for zero-config hosting, or self-host on your own infrastructure.
## Installation

```bash
npm install @reminix/langchain @langchain/core
```

This will also install @reminix/runtime as a dependency.
## Quick Start

```ts
import { ChatOpenAI } from '@langchain/openai';
import { serveAgent } from '@reminix/langchain';

const llm = new ChatOpenAI({ model: 'gpt-4o' });

serveAgent(llm, { name: 'my-chatbot', port: 8080 });
```

For more flexibility (e.g., serving multiple agents), use wrapAgent and serve separately:
```ts
import { ChatOpenAI } from '@langchain/openai';
import { wrapAgent } from '@reminix/langchain';
import { serve } from '@reminix/runtime';

const llm = new ChatOpenAI({ model: 'gpt-4o' });
const agent = wrapAgent(llm, 'my-chatbot');

serve({ agents: [agent], port: 8080 });
```

Your agent is now available at:

- `POST /agents/my-chatbot/invoke` - Execute the agent
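As a sketch of how a client might call this endpoint, the helper below builds the invoke URL and request body. The base URL and agent name are assumptions taken from the example above, not part of the package API:

```ts
// Build the invoke URL and fetch options for a Reminix agent endpoint.
// The path shape follows the endpoint listed above; the host and port
// are assumptions matching the serve() example.
function invokeRequest(baseUrl: string, agentName: string, input: string) {
  return {
    url: `${baseUrl}/agents/${agentName}/invoke`,
    init: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ input }),
    },
  };
}

const { url, init } = invokeRequest('http://localhost:8080', 'my-chatbot', 'Hello!');
console.log(url); // http://localhost:8080/agents/my-chatbot/invoke
// fetch(url, init).then((res) => res.json()).then(console.log);
```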
## API Reference

### serveAgent(runnable, options)

Wrap a LangChain runnable and serve it immediately. Combines wrapAgent and serve for single-agent setups.
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| runnable | Runnable | required | Any LangChain runnable (LLM, chain, agent, etc.) |
| options.name | string | "langchain-agent" | Name for the agent (used in URL path) |
| options.port | number | 8080 | Port to serve on |
| options.hostname | string | "0.0.0.0" | Hostname to bind to |
### wrapAgent(runnable, name)

Wrap a LangChain runnable for use with Reminix Runtime. Use this with serve from @reminix/runtime for multi-agent setups.
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| runnable | Runnable | required | Any LangChain runnable (LLM, chain, agent, etc.) |
| name | string | "langchain-agent" | Name for the agent (used in URL path) |
Returns: `LangChainAgentAdapter` - A Reminix adapter instance
### Example with a Chain

```ts
import { ChatOpenAI } from '@langchain/openai';
import { ChatPromptTemplate } from '@langchain/core/prompts';
import { wrapAgent } from '@reminix/langchain';
import { serve } from '@reminix/runtime';

// Create a chain
const prompt = ChatPromptTemplate.fromMessages([
  ['system', 'You are a helpful assistant.'],
  ['human', '{input}'],
]);
const llm = new ChatOpenAI({ model: 'gpt-4o' });
const chain = prompt.pipe(llm);

// Wrap and serve
const agent = wrapAgent(chain, 'my-chain');

serve({ agents: [agent], port: 8080 });
```

## Endpoint Input/Output Formats
### POST /agents/{name}/invoke

Execute the agent. Input keys are passed directly to the LangChain runnable.

Request:

```json
{
  "input": "Hello, how are you?"
}
```

Response:

```json
{
  "output": "I'm doing well, thank you for asking!"
}
```

### Streaming
For streaming responses, set stream: true in the request:

```json
{
  "input": "Tell me a story",
  "stream": true
}
```

The response will be sent as Server-Sent Events (SSE).
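As a sketch of consuming such a response, an SSE body can be split into events by scanning for `data:` lines. This is a generic SSE parser; the exact event payload shape emitted by @reminix/runtime is an assumption here and may differ:

```ts
// Minimal SSE line parser: extracts the payload of each `data:` line.
// Generic SSE handling only - the actual event format produced by
// @reminix/runtime may carry additional fields.
function parseSSE(body: string): string[] {
  return body
    .split('\n')
    .filter((line) => line.startsWith('data:'))
    .map((line) => line.slice('data:'.length).trim());
}

const chunks = parseSSE('data: {"output":"Once"}\n\ndata: {"output":" upon"}\n\n');
console.log(chunks); // ['{"output":"Once"}', '{"output":" upon"}']
```

In a real client you would read the response body incrementally (e.g., via `res.body.getReader()`) rather than buffering it whole.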
## Runtime Documentation

For information about the server, endpoints, request/response formats, and more, see the @reminix/runtime package.
## Deployment

Ready to go live?

- Deploy to Reminix Cloud - Zero-config cloud hosting
- Self-host - Run on your own infrastructure
## License

Apache-2.0
