@reminix/langgraph
v0.0.22
Reminix agents for LangGraph. Serve any LangGraph agent as a REST API.
Ready to go live? Deploy to Reminix Cloud for zero-config hosting, or self-host on your own infrastructure.
Installation
npm install @reminix/langgraph @langchain/langgraph

This will also install @reminix/runtime as a dependency.
Quick Start
Thread Agent (chat-style)
import { createReactAgent } from '@langchain/langgraph/prebuilt';
import { ChatOpenAI } from '@langchain/openai';
import { LangGraphThreadAgent } from '@reminix/langgraph';
import { serve } from '@reminix/runtime';
const llm = new ChatOpenAI({ model: 'gpt-4o' });
const graph = createReactAgent({ llm, tools: [] });
const agent = new LangGraphThreadAgent(graph, { name: 'my-agent' });
serve({ agents: [agent] });

Workflow Agent (multi-step with interrupt/resume)
import { LangGraphWorkflowAgent } from '@reminix/langgraph';
import { serve } from '@reminix/runtime';
const graph = buildWorkflowGraph(); // your LangGraph compiled graph
const agent = new LangGraphWorkflowAgent(graph, { name: 'my-workflow' });
serve({ agents: [agent] });

Your agent is now available at:
POST /agents/{name}/invoke - Execute the agent
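As a sketch of how a client might call that endpoint, the helper below builds the URL and JSON body for a thread-agent invoke request. The helper name, the base URL, and the `fetch`-style `init` shape are illustrative assumptions; only the path pattern and the `{ input: { messages } }` body come from this README.

```typescript
// Hypothetical client helper (not part of @reminix/langgraph): assembles the
// URL and request init for POST /agents/{name}/invoke as documented above.
type ChatMessage = { role: "user" | "assistant" | "system"; content: string };

function buildInvokeRequest(
  baseUrl: string,
  agentName: string,
  messages: ChatMessage[]
) {
  return {
    url: `${baseUrl}/agents/${agentName}/invoke`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ input: { messages } }),
    },
  };
}

// Example: target the "my-agent" instance from the Quick Start, assuming a
// local server on port 3000.
const req = buildInvokeRequest("http://localhost:3000", "my-agent", [
  { role: "user", content: "Hello!" },
]);
console.log(req.url); // http://localhost:3000/agents/my-agent/invoke
```

The returned object can be passed straight to `fetch(req.url, req.init)`.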
API Reference
new LangGraphThreadAgent(graph, options?)
Create a LangGraph thread agent for chat-style interactions.
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| graph | CompiledGraph | required | A LangGraph compiled graph |
| options.name | string | "langgraph-agent" | Name for the agent (used in URL path) |
| options.description | string | "langgraph thread agent" | Description shown in agent metadata |
| options.instructions | string | — | System instructions prepended to messages |
| options.tags | string[] | — | Tags for categorizing/filtering agents |
| options.metadata | Record<string, unknown> | — | Custom metadata merged into agent info |
Returns: LangGraphThreadAgent - A Reminix thread agent instance
The thread agent:
- Converts incoming messages to LangChain message format
- Prepends a SystemMessage if instructions is set
- Invokes the graph with { messages: [...] }
- Extracts the last AI message from the response
- Returns it in the Reminix response format
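The message handling described above can be sketched as two pure functions. This is an illustration of the documented behavior, not the package's actual source; the plain `{ role, content }` message shape stands in for LangChain's message classes.

```typescript
// Illustrative sketch of the thread agent's message handling (assumed shapes,
// not the real implementation).
type Msg = { role: string; content: string };

// Prepend a system message only when `instructions` is configured.
function prepareMessages(incoming: Msg[], instructions?: string): Msg[] {
  return instructions
    ? [{ role: "system", content: instructions }, ...incoming]
    : incoming;
}

// The last AI (assistant) message in the graph's output becomes the
// agent's response text.
function extractOutput(graphMessages: Msg[]): string | undefined {
  const last = [...graphMessages].reverse().find((m) => m.role === "assistant");
  return last?.content;
}
```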
new LangGraphWorkflowAgent(graph, options?)
Create a LangGraph workflow agent for multi-step execution with interrupt/resume support.
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| graph | CompiledGraph | required | A LangGraph compiled graph |
| options.name | string | "langgraph-workflow-agent" | Name for the agent (used in URL path) |
| options.description | string | "langgraph workflow agent" | Description shown in agent metadata |
| options.instructions | string | — | Stored in metadata (not injected into graph execution) |
| options.tags | string[] | — | Tags for categorizing/filtering agents |
| options.metadata | Record<string, unknown> | — | Custom metadata merged into agent info |
Returns: LangGraphWorkflowAgent - A Reminix workflow agent instance
The workflow agent:
- Streams the graph and collects per-node outputs as steps
- Maps GraphInterrupt to pendingAction with status "paused"
- Accepts resume input to continue interrupted graphs via Command
- Returns structured {status, steps, result?, pendingAction?} output
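The output shapes above can be sketched as follows. The top-level `{status, steps, result?, pendingAction?}` fields come from this README; the exact `pendingAction` shape and the helper names are assumptions for illustration.

```typescript
// Assumed shapes based on the documented workflow output; not the package's
// actual type definitions.
type Step = { name: string; status: string; output?: unknown };

type WorkflowOutput = {
  status: "completed" | "paused";
  steps: Step[];
  result?: unknown;
  pendingAction?: { step: string; payload?: unknown };
};

// A finished run carries the final result alongside the collected steps.
function toCompletedOutput(steps: Step[], result: unknown): WorkflowOutput {
  return { status: "completed", steps, result };
}

// A GraphInterrupt maps to status "paused" plus a pendingAction describing
// which step is waiting for input.
function toPausedOutput(
  steps: Step[],
  interruptedStep: string,
  payload?: unknown
): WorkflowOutput {
  return {
    status: "paused",
    steps,
    pendingAction: { step: interruptedStep, payload },
  };
}
```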
Endpoint Input/Output Formats
Thread Agent — POST /agents/{name}/invoke
Request:
{
"input": {
"messages": [
{"role": "user", "content": "Hello!"}
]
}
}

Response:
{
"output": "Hello! How can I help you today?"
}

Workflow Agent — POST /agents/{name}/invoke
Request:
{
"input": {"task": "process data"}
}

Response:
{
"output": {
"status": "completed",
"steps": [
{"name": "fetch_data", "status": "completed", "output": {"records": 10}},
{"name": "process", "status": "completed", "output": {"summary": "done"}}
],
"result": {"summary": "done"}
}
}

Resume a paused workflow:
{
"input": {
"task": "process data",
"resume": {"step": "approve", "input": {"approved": true}}
}
}

Streaming
For streaming responses (thread agent only), set stream: true in the request:
{
"input": {
"messages": [{"role": "user", "content": "Hello!"}]
},
"stream": true
}

The response will be sent as Server-Sent Events (SSE).
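On the client side, an SSE stream arrives as text frames with `data:` lines. The minimal parser below extracts those payloads; it is a sketch of generic SSE handling, since this README does not specify the event payload format.

```typescript
// Minimal SSE frame parser (illustrative; the payload format emitted by the
// runtime is not specified here, so we just collect raw `data:` lines).
function parseSseData(chunk: string): string[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data:"))
    .map((line) => line.slice("data:".length).trim());
}

parseSseData("data: Hello\n\ndata: world\n\n"); // ["Hello", "world"]
```

In a real client this would be applied to each chunk read from `response.body` after the `fetch` call, accumulating partial frames across chunk boundaries.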
Runtime Documentation
For information about the server, endpoints, request/response formats, and more, see the @reminix/runtime package.
Deployment
Ready to go live?
- Deploy to Reminix Cloud - Zero-config cloud hosting
- Self-host - Run on your own infrastructure
License
Apache-2.0
