⚙️ llm-layer-engine
llm-layer-engine is a lightweight, type-safe AI workflow engine for Node.js and TypeScript — built to sit on top of LangChain and give your backend full control over how AI agents think, act, and execute.
📖 Documentation: layerengine.dev — installation, use cases, LangChain integration, and full API reference.
LangChain gives you the model. llm-layer-engine gives you the control layer on top of it.
⚡️ Quick Install
```bash
npm install llm-layer-engine
# or
pnpm add llm-layer-engine
# or
yarn add llm-layer-engine
```

Requires: Node.js >=18 · TypeScript >=5
🚀 Why llm-layer-engine?
LangChain handles models, embeddings, and integrations well. But building a production-ready agent loop on top of it — with tool calling, memory, per-run stats, permission gates, and retry logic — still takes a lot of custom code.
llm-layer-engine removes that work. One import. Everything included.
For integration and full use-case examples → Check the Docs
Use it for:
- ReAct Agent Loops — Multi-step reasoning + acting with `maxSteps` control and automatic tool execution.
- Tool Registry — Register tools once at startup. The engine pulls them automatically across your entire backend.
- Execution Stats — Every run returns `tokenUsage`, `durationMs`, `steps`, `toolCalls`, and `errors` — pipe them directly into your observability stack.
- Memory — Short-term in-memory session history, plus a long-term adapter interface — plug in Redis, Postgres, or Pinecone with zero engine changes.
- Permission + Approval Gates — Block or approve tool execution per user role or custom logic before anything runs.
- Retry Handler — Exponential backoff built for LLM API calls that rate-limit or time out.
- Structured Logger — Per-environment logging with granular control over steps, tools, and errors.
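To illustrate the retry idea above, here is a minimal exponential-backoff sketch in plain TypeScript. The names `withRetry` and `RetryOptions` are illustrative only — they are not the engine's actual exports; see the docs for the real API.

```typescript
interface RetryOptions {
  retries: number;      // max retries after the first attempt
  baseDelayMs: number;  // initial backoff delay; doubles each retry
}

// Retry an async operation with exponential backoff: baseDelayMs, 2x, 4x, ...
async function withRetry<T>(
  fn: () => Promise<T>,
  { retries, baseDelayMs }: RetryOptions
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === retries) break; // out of attempts
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

The same pattern applies whether the failure is a 429 rate limit or a timeout — the caller just passes the LLM call as `fn`.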
🤝 Built to Work With LangChain
llm-layer-engine is not a LangChain replacement. It's the control layer that sits on top.
Wrap any LangChain model into the LLMProvider interface and get full loop control, tool execution, and stats instantly — without touching your existing LangChain setup.
Full integration guide → LangChain + llm-layer-engine
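In spirit, the integration is a thin adapter: any object with a LangChain-style `invoke` method gets wrapped behind the provider interface the engine expects. The `LLMProvider` and `InvokableModel` shapes below are assumed simplifications for illustration — the engine's real interface is defined in the integration guide.

```typescript
// Assumed, simplified shape of the engine's provider interface.
interface LLMProvider {
  complete(prompt: string): Promise<string>;
}

// Minimal shape of a LangChain chat model: anything exposing `invoke`.
interface InvokableModel {
  invoke(input: string): Promise<{ content: string }>;
}

// Adapter: wraps a LangChain-style model so the engine can drive it.
class LangChainProvider implements LLMProvider {
  constructor(private model: InvokableModel) {}

  async complete(prompt: string): Promise<string> {
    const message = await this.model.invoke(prompt);
    return message.content;
  }
}
```

Because the adapter only depends on `invoke`, your existing LangChain model configuration (temperature, callbacks, tracing) is untouched.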
🌐 Supported Environments
- Node.js (ESM) — 18.x, 20.x, 22.x
- Express.js
- Fastify
- Any TypeScript Node.js runtime
📖 Resources
| Resource | Link |
|---|---|
| Getting Started | quickstart → |
| Quick Start | guide → |
| GitHub | llm-layer-engine → |
| Report an Issue | issues → |
💁 Contributing
Open to contributions — features, fixes, or docs improvements are all welcome.
See contributing guidelines to get started.
License
MIT — Muhammad Burhan Chughtai
