glirastes
v0.3.2
Official Chainmatics SDK for building AI-powered chat, tools, and agent systems. Part of the Glirastes platform.
What is this?
glirastes is a TypeScript SDK for building AI-powered chat and tool systems. You describe what the AI can do with a handful of declarative tool definitions, and the SDK turns them into Vercel AI SDK-compatible tools that call your APIs, stream responses to the client, and trigger UI updates — all type-safe, all in one place. If your backend already runs LangGraph instead, the same chat UI works against it via the LangGraph transport — no migration required.
It ships as a single npm package with subpath exports:
```ts
import { defineEndpointTool } from 'glirastes';
import { createAiChatHandler } from 'glirastes/server/nextjs';
import { VercelAiChat } from 'glirastes/react/vercel';
```

The server bits run in Node (Next.js App Router, NestJS, or any framework). The client bits are React. Everything else — PII protection, intent routing, guardrails, approvals — is optional and unlocked via the Glirastes platform.
Why glirastes?
Building an AI feature means wiring together tool definitions, API calls, streaming responses, UI reactions, and — once real users hit it — PII handling, approval flows, guardrails, and observability. Every. Single. Time.
Without glirastes
- Manual tool/function definitions
- Hand-wired UI action dispatch
- Custom PII redaction
- Framework-specific glue
- Bespoke approval flows
- DIY observability

With glirastes

✦ One declarative definition per tool
✦ Auto UI action dispatch
✦ Optional PII Shield
✦ Next.js / NestJS / LangGraph adapters
✦ Pro approval flows via platform
✦ Built-in telemetry

Quick Start (Next.js or NestJS)
Which adapter should I use?
If you only have one server, the answer is "that one". If you have both a Next.js frontend and a separate backend (NestJS, Express, …), put the chat route on the backend:
| Setup | Use this adapter |
|---|---|
| Next.js full-stack (no separate backend) | glirastes/server/nextjs |
| Next.js frontend + NestJS backend | glirastes/server/nestjs ✅ |
| Pure NestJS (Swagger UI as client) | glirastes/server/nestjs |
| Express backend + any frontend | glirastes/server (manual wiring) |
Why prefer the real backend over a Next.js Route Handler?
- Tools are annotated controllers. Your existing endpoints become AI-callable in place — no second code path, no `defineEndpointTool` mirroring controller URLs.
- One auth source. Your JWT guard / tenant context already runs there. Don't re-validate tokens in a Next.js route just to forward them.
- No extra HTTP hop. Browser → backend → DB instead of Browser → Next.js → backend → DB on every tool call.
- Tools can call services directly. No HTTP membrane between the AI tool and your business logic.
- One observability pipeline. Logs, traces, rate limits where they already live.
The Next.js Route Handler is right when Next.js is your backend, when you're prototyping, or when every tool is an external API call anyway.
Pick your stack — both are covered by the same four steps.
1. Install
```bash
# Backend only (NestJS / Next.js Route Handlers)
npm install glirastes

# Frontend / chat UI consumers — add the React UI peers
npm install glirastes @ai-sdk/react react-markdown
```

The Vercel AI SDK (`ai`) is bundled. `@ai-sdk/react` and `react-markdown` are declared as optional peer dependencies so backend-only consumers don't have a phantom React tree in their node_modules. Install them yourself when you use any `glirastes/react/*` subpath.
Requirements: Node.js `>=20`. The React chat UI needs `react` and `react-dom` `^19` — Next.js apps already have these.

Optional — only if you want voice input with a live waveform:

```bash
npm install wavesurfer.js
```
2. Define a tool
The SDK uses the declared method + path (or the route the decorator already exposes) to call your own API when the LLM picks this tool — you never write a fetch yourself.
```ts
// app/api/tasks/ai-tool.ts
import { z } from 'zod';
import { defineEndpointTool } from 'glirastes';

export const listTasks = defineEndpointTool({
  id: 'tasks.list',
  toolName: 'list_tasks',
  description: 'List tasks with an optional status filter.',
  method: 'GET',
  path: '/api/tasks',
  inputSchema: z.object({
    status: z.enum(['open', 'done']).optional(),
  }),
});
```

```ts
// tasks.controller.ts
import { Controller, Get, Query } from '@nestjs/common';
import { AiModule, AiTool } from 'glirastes/server/nestjs';

@Controller('tasks')
@AiModule({
  intent: 'task_management',
  classification: {
    hint: 'User asks about listing or filtering tasks.',
    examples: ['show me open tasks', 'what is done?', 'list my tasks'],
  },
})
export class TaskController {
  @Get()
  @AiTool({
    name: 'list_tasks',
    description: 'List tasks with an optional status filter.',
  })
  list(@Query('status') status?: 'open' | 'done') {
    return this.taskService.list(status);
  }
}
```

3. Drop a chat widget into your frontend
```tsx
'use client';
import { VercelAiChat } from 'glirastes/react/vercel';

export default function RootLayout({ children }) {
  return (
    <body>
      {children}
      <VercelAiChat endpoint="/api/chat" />
    </body>
  );
}
```

4. Wire up the chat handler
```ts
import { createAiChatHandler } from 'glirastes/server/nextjs';
import { endpointToolsToRegistry } from 'glirastes/server';
import { listTasks } from './tasks/ai-tool';

export const POST = createAiChatHandler({
  tools: endpointToolsToRegistry([listTasks]),
});
```

```ts
import { scanNestJsControllers } from 'glirastes/server/nestjs';
import { endpointToolsToRegistry } from 'glirastes/server';
import { TaskController } from './tasks/tasks.controller';

const scan = scanNestJsControllers({ controllers: [TaskController] });
const registry = endpointToolsToRegistry(scan.tools);
// Pass `registry` to createAiChatHandler in your chat route.
```

Done. AI chat is live, streaming, and your list_tasks tool is callable. Add more tools the same way — pick the style that matches your stack. See NestJS with decorators below for DTOs, @AiParam, and multi-controller scanning.
One more step for Next.js once you have more than a handful of tools. The manual `endpointToolsToRegistry([listTasks, …])` import list above stops scaling around 5–10 tools. Today's required pattern: write each tool in a co-located `*.ai-tool.ts` file and run the CLI codegen as a `predev`/`prebuild` hook to aggregate them into a generated registry your chat route imports. See Scaling up — required codegen workflow for Next.js below for the full setup. NestJS users skip this entirely — the runtime decorator scan handles aggregation automatically.

Coming in 0.4.0: a `buildToolRegistry` helper backed by `import.meta.glob`/`require.context` that drops the codegen step for Next.js too. Track at chainmatics/glirastes#issues.
Setup notes
tsconfig — modern is best, legacy still works
Glirastes ships both ESM (dist/) and CommonJS (dist/cjs/) with a conditional exports map, plus subpath shim directories at the package root for legacy resolvers. Both setups below are supported:
Modern (recommended):
```jsonc
{
  "compilerOptions": {
    "module": "node16",             // or "nodenext" / "esnext"
    "moduleResolution": "node16",   // or "nodenext" / "bundler"
    "target": "ES2022",
    "esModuleInterop": true,
    "experimentalDecorators": true, // NestJS only
    "emitDecoratorMetadata": true   // NestJS only
  }
}
```

Legacy NestJS template (default `module: commonjs`): works as-is — no tsconfig changes required. The shim directories under `glirastes/server/`, `glirastes/react/` etc. let the classic Node resolver find each subpath, and the CommonJS build under `dist/cjs/` provides the runtime artifacts.
Which subpath do I import from?
| Where the code runs | Import from |
|---|---|
| Anywhere (tool definitions, types, helpers) | glirastes |
| Server-side core (registry helpers, pipeline) | glirastes/server |
| Next.js App Router route handlers | glirastes/server/nextjs |
| NestJS controllers / modules | glirastes/server/nestjs |
| Test suite without LLM calls | glirastes/server/testing |
| React components, client hooks | glirastes/react |
| Vercel AI SDK chat widget | glirastes/react/vercel |
| LangGraph chat widget | glirastes/react/langgraph |
| Default chat panel CSS | glirastes/react/styles.css |
Core Concepts
Tool
A tool is something the AI can do. It has a name, a description (so the LLM knows when to use it), an input schema (Zod — validated at the type layer and at runtime), and logic for what happens when it runs.
Two flavors cover ~95% of real-world cases:
Endpoint Tool — talks to your backend
```ts
import { z } from 'zod';
import { defineEndpointTool } from 'glirastes';

export const listInvoices = defineEndpointTool({
  id: 'invoices.list',
  toolName: 'list_invoices',
  module: 'billing',
  description: 'List invoices with optional status filter.',
  method: 'GET',
  path: '/api/invoices',
  inputSchema: z.object({
    status: z.enum(['paid', 'open', 'overdue']).optional(),
  }),
  uiPattern: {
    type: 'filter-and-navigate',
    target: 'invoices',
    filterMapping: { status: 'status' },
  },
});
```

You declare the HTTP method and path; the SDK generates the execute function that calls your API. You never write fetch boilerplate for a tool.
UI Tool — fires frontend actions
```ts
import { z } from 'zod';
import { defineUiTool } from 'glirastes';

export const openSettings = defineUiTool({
  id: 'settings.open',
  toolName: 'open_settings',
  description: 'Open the settings dialog for the current user.',
  inputSchema: z.object({}),
  uiAction: { type: 'open-dialog', target: 'settings' },
});
```

No API call. The tool result contains a uiAction payload that's dispatched to registered React handlers via the UiActionBus. This is how "LLM opens a dialog" or "LLM navigates the user" works without round-tripping through your backend.
Registry
A registry is a key-value map (tool name → tool object) that the SDK uses internally. Build one with endpointToolsToRegistry(...) or uiToolsToRegistry(...):
```ts
import { endpointToolsToRegistry, uiToolsToRegistry } from 'glirastes/server';

const endpoints = endpointToolsToRegistry([listInvoices, createInvoice]);
const ui = uiToolsToRegistry([openSettings]);
const registry = { ...endpoints, ...ui };
```

Pass the merged registry to `createAiChatHandler({ tools: registry })` and the SDK converts it to AI SDK format at request time.
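Conceptually, the registry helpers just key each tool by its tool name. The sketch below is illustrative only — `toRegistry` and the simplified `ToolLike` shape are inventions for this example, not the SDK's real types, which carry schemas, UI patterns, and module metadata:

```typescript
// Illustrative only: what a registry helper conceptually does —
// reduce a tool array into a name-keyed map, rejecting duplicates.
interface ToolLike {
  toolName: string;
  description: string;
}

function toRegistry<T extends ToolLike>(tools: T[]): Record<string, T> {
  const registry: Record<string, T> = {};
  for (const tool of tools) {
    if (registry[tool.toolName]) {
      // Two tools with the same name would silently shadow each other.
      throw new Error(`Duplicate tool name: ${tool.toolName}`);
    }
    registry[tool.toolName] = tool;
  }
  return registry;
}

const demo = toRegistry([
  { toolName: 'list_invoices', description: 'List invoices.' },
  { toolName: 'open_settings', description: 'Open settings.' },
]);
```

Because the result is a plain object, merging endpoint and UI registries with spread syntax, as shown above, is all the composition there is.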
UI Patterns
Declarative rules for what the frontend should do after a tool runs. Instead of writing custom dispatch code per tool, declare a pattern and the SDK resolves it automatically:
| Pattern | What it does |
|---|---|
| filter-and-navigate | Navigate to a page with filters pre-applied |
| open-detail | Open a detail view for a specific entity |
| open-dialog | Open a modal dialog |
| refresh | Refresh a specific UI region |
| toast | Show a notification |
Patterns support $variable placeholders that get replaced with values from the tool input or response. Example: path: '/tasks/$taskId' becomes /tasks/abc-123 at runtime.
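The substitution rule can be pictured with a small sketch. This is illustrative only — `resolvePlaceholders` is not the SDK's actual resolver, and it assumes values come from a flat merge of tool input and response:

```typescript
// Illustrative sketch of $variable substitution in UI pattern strings.
// Unknown placeholders are left intact rather than dropped.
function resolvePlaceholders(
  template: string,
  values: Record<string, string>,
): string {
  return template.replace(
    /\$([A-Za-z_][A-Za-z0-9_]*)/g,
    (match: string, name: string) => (name in values ? values[name] : match),
  );
}

resolvePlaceholders('/tasks/$taskId', { taskId: 'abc-123' }); // → '/tasks/abc-123'
```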
UI Action Dispatch
On the frontend, register handlers for action IDs using the useAiClientAction hook:
```tsx
import { useAiClientAction } from 'glirastes/react';

function TaskList() {
  useAiClientAction('task-details.open', (payload) => {
    router.push(`/tasks/${payload?.taskId}`);
  });
  useAiClientAction('task-list.filter', (payload) => {
    setFilters(payload);
  });
  return <div>{/* your task list */}</div>;
}
```

When a tool returns a uiAction with a matching action ID, your handler fires. That's the full bridge between the LLM and your UI.
Modules (Intent Routing)
Modules group tools by topic — e.g., a billing module contains list_invoices, create_invoice, send_reminder. When a user sends a message, the SDK can classify which module the message belongs to and scope the LLM's tool list to that module alone. Fewer tools = cheaper, faster, more accurate responses.
Module assignment is declared per-tool via the module field. Intent classification is a Pro feature that runs through the Glirastes platform via Lancer (see Advanced).
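The scoping step itself is easy to picture. The sketch below is illustrative only — `scopeToModule` and the simplified `ScopedTool` shape are not SDK APIs; the real pipeline does this internally after classification:

```typescript
// Illustrative only: narrow a registry to the tools of one module,
// so the LLM sees a smaller, more relevant tool list.
interface ScopedTool {
  toolName: string;
  module?: string;
}

function scopeToModule(
  registry: Record<string, ScopedTool>,
  module: string,
): Record<string, ScopedTool> {
  return Object.fromEntries(
    Object.entries(registry).filter(([, tool]) => tool.module === module),
  );
}

const allTools: Record<string, ScopedTool> = {
  list_invoices: { toolName: 'list_invoices', module: 'billing' },
  create_task: { toolName: 'create_task', module: 'task_management' },
};
const billingOnly = scopeToModule(allTools, 'billing'); // only list_invoices remains
```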
Examples
Full Next.js backend with the pipeline
The pipeline orchestrates guardrails → intent classification → tool scoping → model selection in one call. It's the recommended setup once you have more than a handful of tools.
```ts
// app/api/chat/route.ts
import { createAiChatHandler } from 'glirastes/server/nextjs';
import { createAiPipeline, endpointToolsToRegistry } from 'glirastes/server';
import { createLancer } from 'glirastes/server/lancer';
import { endpointTools } from '@/lib/ai-tools';
import { modules } from '@/lib/modules';

const lancer = createLancer({
  apiKey: process.env.GLIRASTES_API_KEY,
});

const pipeline = createAiPipeline({
  lancer,
  modules,
  defaultModule: 'general',
});

const registry = endpointToolsToRegistry(endpointTools);

export const POST = createAiChatHandler({
  tools: registry,
  pipeline,
});
```

Without an API key, the pipeline still works — every Lancer call falls back to a safe local default. Guardrails pass, classification returns low confidence, telemetry is silently dropped. You can develop and ship without ever talking to the platform.
NestJS with decorators
Prefer declaring tools where the endpoint lives? Use NestJS decorators. Three pieces: @AiModule on the controller, @AiTool on the method, @AiParam on the DTO properties.
```ts
// tasks.controller.ts
import { Controller, Post, Body, Get, Param } from '@nestjs/common';
import { AiModule, AiTool } from 'glirastes/server/nestjs';

@Controller('tasks')
@AiModule({
  intent: 'task_management',
  classification: {
    hint: 'User wants to create, update, or query tasks.',
    examples: ['create a task', 'mark task as done', 'list my tasks'],
  },
})
export class TaskController {
  @Post()
  @AiTool({
    name: 'create_task',
    description: 'Create a new task',
  })
  async create(@Body() dto: CreateTaskDto) {
    return this.taskService.create(dto);
  }

  // Endpoints without @AiTool are not exposed to the AI
  @Get(':id/history')
  getHistory(@Param('id') id: string) {
    return this.taskService.getHistory(id);
  }
}
```

```ts
// create-task.dto.ts
import { IsString, IsOptional } from 'class-validator';
import { AiParam } from 'glirastes/server/nestjs';

export class CreateTaskDto {
  @IsString()
  @AiParam('The task title. Required. Ask the user if missing.')
  title!: string;

  @IsString()
  @IsOptional()
  @AiParam('Optional due date in ISO 8601 format.')
  dueDate?: string;
}
```

`@AiParam` descriptions are what the LLM sees when deciding whether and how to call the tool — write them as instructions for the model, not for humans. The input schema is auto-derived from the DTO's class-validator decorators.
Wire it up in your AiChatModule:
```ts
import { scanNestJsControllers } from 'glirastes/server/nestjs';
import { endpointToolsToRegistry } from 'glirastes/server';

const scan = scanNestJsControllers({
  controllers: [TaskController, /* ...other controllers */],
});
const registry = endpointToolsToRegistry(scan.tools);
```

scanNestJsControllers returns `{ tools, modules, toolsByModule }` — pass `scan.tools` to `endpointToolsToRegistry`, then hand `registry` to `createAiChatHandler({ tools: registry })`. No separate ai-tool.ts files, no manual registration.
Client-side action handlers
```tsx
// components/chat-integration.tsx
'use client';
import { useAiClientAction } from 'glirastes/react';
import { useRouter } from 'next/navigation';
import { useQueryClient } from '@tanstack/react-query';

export function ChatIntegration() {
  const router = useRouter();
  const qc = useQueryClient();

  useAiClientAction('task-details.open', (payload) => {
    router.push(`/tasks/${payload?.taskId}`);
  });
  useAiClientAction('task-list.refresh', () => {
    qc.invalidateQueries({ queryKey: ['tasks'] });
  });
  useAiClientAction('open-dialog.settings', () => {
    // open your settings modal
  });

  return null;
}
```

Render this component anywhere inside your layout. Handlers fire whenever a tool result contains a matching uiAction.
OpenAPI code generation
Already have an OpenAPI spec? Annotate your endpoints with x-ai extensions:
```json
{
  "paths": {
    "/api/tasks": {
      "post": {
        "operationId": "createTask",
        "x-ai": {
          "enabled": true,
          "toolName": "create_task",
          "description": "Create a new task",
          "module": "task_management"
        }
      }
    }
  }
}
```

Then generate tool definitions automatically:

```bash
npx glirastes generate --input openapi.json --output src/generated/tools.ts
```

The generator converts OpenAPI parameter schemas to Zod, preserves x-ai metadata, and outputs a ready-to-import registry file.
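The scanning half of that generator is straightforward to picture. The sketch below is illustrative only — `collectAiOperations` and the trimmed `Spec` type are not the CLI's code (the real generator additionally converts parameter schemas to Zod):

```typescript
// Illustrative only: walk an OpenAPI spec object and collect every
// operation whose x-ai extension is enabled.
interface XAiExtension {
  enabled: boolean;
  toolName: string;
  description: string;
  module?: string;
}

type Spec = {
  paths: Record<
    string,
    Record<string, { operationId?: string; 'x-ai'?: XAiExtension }>
  >;
};

function collectAiOperations(spec: Spec): XAiExtension[] {
  const found: XAiExtension[] = [];
  for (const operations of Object.values(spec.paths)) {
    for (const op of Object.values(operations)) {
      if (op['x-ai']?.enabled) found.push(op['x-ai']);
    }
  }
  return found;
}

const sampleSpec: Spec = {
  paths: {
    '/api/tasks': {
      post: {
        operationId: 'createTask',
        'x-ai': {
          enabled: true,
          toolName: 'create_task',
          description: 'Create a new task',
          module: 'task_management',
        },
      },
    },
  },
};
collectAiOperations(sampleSpec).map((op) => op.toolName); // → ['create_task']
```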
Testing without LLM calls
```ts
import { createAiTestSuite } from 'glirastes/server/testing';
import { endpointTools } from '@/lib/ai-tools';
import { modules } from '@/lib/modules';

const suite = createAiTestSuite({
  tools: endpointTools,
  modules,
});

suite.expectRouted('show me open tasks', 'task_query');
suite.expectNeedsApproval('delete_task');
suite.expectNoApproval('list_tasks');
suite.expectToolSchema('create_task').toAccept({ title: 'Buy milk' });
```

All assertions run without calling a real LLM or hitting HTTP — perfect for CI.
Advanced
LangGraph backend
If your server already runs LangGraph, swap the drop-in widget for the LangGraph transport. Your LangGraph agent handles the model calls and tool execution; glirastes handles the chat UI and streaming transport:
```tsx
'use client';
import { LangGraphAiChat } from 'glirastes/react/langgraph';

export default function RootLayout({ children }) {
  return (
    <body>
      {children}
      <LangGraphAiChat endpoint="/api/chat" />
    </body>
  );
}
```

No `ai` / `@ai-sdk/react` imports needed — the LangGraph transport is pure HTTP and streams on its own.
Customizing the chat UI
VercelAiChat and LangGraphAiChat are thin wrappers around FloatingChatUI and accept all its layout props directly:
```tsx
<VercelAiChat
  endpoint="/api/chat"
  title="Ask the assistant"
  defaultOpen={false}
  draggable={true}
  resizable={true}
  size={{ width: 480, height: 640 }}
  shortcut="mod+k"
/>
```

| Prop | What it does |
|---|---|
| title | Header text. Default: "AI Assistant" |
| defaultOpen | Whether the window starts open. Default: false |
| draggable | Window draggable by its header. Default: true |
| resizable | Window resizable from corners. Default: true |
| size | { width, height } or a responsive size config |
| shortcut | Keyboard shortcut to toggle, e.g. 'mod+k'. Pass false to disable |
| hideTriggerWhenOpen | Hide the floating trigger pill while the window is open |
| showClearButton | Show the clear/trash button in the header. Default: true |
| showSessionSwitcher | true, false, or 'auto' (shows if sessions are configured) |
| headerActions | Replace the default header actions with a custom ReactNode |
| triggerIcon | Custom icon inside the floating trigger pill |
Styling
Every visual slot accepts a class name through the classNames prop. Works with Tailwind, CSS Modules, or any class-based styling:
```tsx
<VercelAiChat
  endpoint="/api/chat"
  classNames={{
    window: 'w-[500px] h-[640px] bg-slate-900 border-slate-700 shadow-2xl',
    windowHeader: 'bg-slate-800 text-slate-100',
    messageBubble: 'rounded-xl',
    input: 'bg-slate-800 text-slate-100',
    trigger: 'bottom-8 right-8',
    triggerPill: 'bg-indigo-600 hover:bg-indigo-500',
  }}
/>
```

Available slots: window, windowHeader, windowBody, panel, messageList, messageBubble, input, approvalCard, bulkApprovalCard, suggestionChip, mentionChip, pipelineTimeline, trigger, triggerPill, switcher. Setting `classNames.window` drops the SDK's inline sizing styles so your classes win.
You can also bring your own stylesheet via import 'glirastes/react/styles.css' for the default look, or skip it entirely and style everything from scratch.
Persisting chat across sessions
Every widget accepts a session config. The simplest version uses localStorage:
```tsx
<VercelAiChat
  endpoint="/api/chat"
  session={{
    save: (id, msgs) => localStorage.setItem(id, JSON.stringify(msgs)),
    load: (id) => {
      const v = localStorage.getItem(id);
      return v ? JSON.parse(v) : null;
    },
  }}
/>
```

For multi-device sync, point save and load at your own REST endpoints that persist messages to a database. The SDK also supports a full session-list bridge (list, create, delete, rename) for apps that show a sidebar of past conversations — see SessionConfig in the source for the full shape.
Manual wiring (sharing chat state with other components)
If you need to trigger chat programmatically from a button, show unread counts in the header, or render transcripts somewhere outside the chat panel, skip the drop-in widget and wire things manually:
```tsx
'use client';
import { AiChatProvider, AiChatPanel, useAiChat } from 'glirastes/react';
import { useVercelAiChatTransport } from 'glirastes/react/vercel';

export function ChatShell({ children }: { children: React.ReactNode }) {
  const transport = useVercelAiChatTransport({ endpoint: '/api/chat' });
  return (
    <AiChatProvider transport={transport}>
      {children}
      <AiChatPanel />
    </AiChatProvider>
  );
}

// Anywhere inside the provider:
function HeaderBadge() {
  const { messages, isLoading } = useAiChat();
  return <span>{isLoading ? '•••' : messages.length}</span>;
}
```

PII Shield — protect personal data from the LLM
```ts
import { createPiiShield, createAiChatHandler } from 'glirastes/server';
import { createLancer } from 'glirastes/server/lancer';

const lancer = createLancer({ apiKey: process.env.GLIRASTES_API_KEY });

const piiShield = createPiiShield({
  lancer, // delegate detection to Aegis (Pro)
  mode: 'pseudonymize', // same PII → same pseudonym across a conversation
});

export const POST = createAiChatHandler({
  tools: registry,
  piiShield,
});
```

User sends "Email the invoice to [email protected]" → shield replaces [email protected] with Email_A before the LLM sees it → LLM calls send_email({ to: 'Email_A' }) → shield rehydrates the real address before your API is called.
The free tier provides a local detector if you skip Lancer. Not as accurate as Aegis but zero-dependency and offline-safe.
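The pseudonymize round-trip described above reduces to a stable two-way mapping. The sketch below is illustrative only — `PseudonymMap` is not the SDK's detector, and real detection (finding the PII in free text) is the hard part it omits:

```typescript
// Illustrative only: stable pseudonyms per conversation, inverted
// before the real API call. The same value always maps to the same alias.
class PseudonymMap {
  private forward = new Map<string, string>();
  private backward = new Map<string, string>();
  private counter = 0;

  pseudonymize(value: string, kind: string): string {
    const existing = this.forward.get(value);
    if (existing) return existing; // same PII → same pseudonym
    const pseudonym = `${kind}_${String.fromCharCode(65 + this.counter++)}`;
    this.forward.set(value, pseudonym);
    this.backward.set(pseudonym, value);
    return pseudonym;
  }

  rehydrate(text: string): string {
    let out = text;
    for (const [pseudonym, value] of this.backward) {
      out = out.split(pseudonym).join(value); // restore real values
    }
    return out;
  }
}

const map = new PseudonymMap();
const alias = map.pseudonymize('anna@example.com', 'Email'); // 'Email_A'
map.rehydrate(`send to ${alias}`); // 'send to anna@example.com'
```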
Approvals — human-in-the-loop for sensitive tools
Some tools shouldn't run just because the LLM decided to. delete_user should require human confirmation. Set it up via Lancer approvals (Pro):
```ts
const lancer = createLancer({
  apiKey: process.env.GLIRASTES_API_KEY,
  approvals: {
    policies: {
      'users.delete': 'require',
      'billing.charge': 'require',
      'users.list': 'auto',
    },
  },
});
```

The chat UI renders an `<ApprovalCard>` in the conversation — "The AI wants to delete user X. Approve or Reject?" — and pauses execution until the user clicks. If Lancer is unavailable, the SDK falls back to the local needsApproval flag (deprecated but still functional).
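The policy lookup itself is a one-liner. The sketch below is illustrative only — `resolveApproval` is not an SDK function, and the fallback to `'auto'` for unlisted tools is an assumption made here for the example, not documented behavior:

```typescript
// Illustrative only: per-tool approval policy lookup.
// ASSUMPTION: tools not listed in the policy map run without confirmation.
type ApprovalPolicy = 'require' | 'auto';

function resolveApproval(
  policies: Record<string, ApprovalPolicy>,
  toolId: string,
): ApprovalPolicy {
  return policies[toolId] ?? 'auto';
}

const policies: Record<string, ApprovalPolicy> = {
  'users.delete': 'require',
  'billing.charge': 'require',
  'users.list': 'auto',
};

resolveApproval(policies, 'users.delete'); // 'require' → pauses for confirmation
resolveApproval(policies, 'tasks.list');   // 'auto' (assumed default) → runs directly
```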
Followup Suggestions
After the LLM responds, it can suggest follow-up questions as clickable chips. This is powered by an internal suggest_followups tool that the LLM calls at the end of a turn. Configurable count (default 3, max 5), locale, and good/bad examples.
Enable via:
```ts
createAiChatHandler({
  tools: registry,
  followups: { enabled: true, count: 3, locale: 'en-US' },
});
```

Voice input
```tsx
import { VoiceInputButton } from 'glirastes/react';

<VoiceInputButton
  onTranscript={(text) => console.log(text)}
  language="en"
  baseUrl="/api/ai/speech-transcribe"
  getToken={() => authToken}
/>
```

The voice button streams audio to your backend, which proxies it to a speech-to-text provider (Deepgram, etc.). The API key never leaves the server.
Install `wavesurfer.js` (`npm install wavesurfer.js`) to get the live waveform visualization during recording. Without it the button still works — it just falls back to a static recording indicator.
Scaling up — required codegen workflow for Next.js
Hand-importing every tool definition into your chat route doesn't scale past a handful. In Glirastes 0.3.x the canonical pattern for Next.js consumers is to run the CLI codegen as a build hook — it generates a registry from co-located *.ai-tool.ts files. The chat route imports the generated registry instead of maintaining a manual import list.
NestJS users skip this section entirely. `AiChatModule.forRoot()` discovers `@AiTool`-decorated controllers at runtime via the NestJS `DiscoveryService` — no codegen step is ever needed.

Coming in 0.4.0: a `buildToolRegistry` helper based on `import.meta.glob`/`require.context` will let Next.js consumers skip the codegen step too. Until then the steps below are required for any non-trivial Next.js app.
1. Bootstrap a tool from an existing route
```bash
npx glirastes scaffold --route src/app/api/tasks/route.ts
# --dry-run to preview, --force to overwrite
```

Produces `src/app/api/tasks/ai-tool.ts` with a starter `defineEndpointTool({ ... })` call — input schema derived from your route, TODO description, sensible defaults. Edit, save.
2. Wire the codegen into your scripts
```json
{
  "scripts": {
    "predev": "glirastes generate-tools --quiet",
    "prebuild": "glirastes generate-tools --quiet",
    "ai:scaffold": "glirastes scaffold",
    "ai:coverage": "glirastes coverage",
    "ai:coverage:ci": "glirastes coverage --ci",
    "ai:validate": "glirastes validate-tools",
    "ai:validate:ci": "glirastes validate-tools --ci",
    "ai:test": "glirastes test"
  }
}
```

`predev` / `prebuild` regenerate `src/generated/ai-tools/{endpoint,ui,modules,action-ids}.generated.ts` on every dev start and CI build — the generated registry becomes the source of truth.
3. Import the registry in your chat route
```ts
// app/api/chat/route.ts (Next.js)
import { createAiChatHandler } from 'glirastes/server/nextjs';
import { endpointToolRegistry } from '@/generated/ai-tools/endpoint.generated';
import { uiToolRegistry } from '@/generated/ai-tools/ui.generated';

export const POST = createAiChatHandler({
  tools: { ...endpointToolRegistry, ...uiToolRegistry },
  model: openai('gpt-4o-mini'),
});
```

NestJS users skip this entire workflow: scanNestJsControllers reads your `@AiTool`-decorated controllers at runtime. No co-located files, no `*.generated.ts`. See NestJS with decorators.
4. CI gates
```bash
npm run ai:coverage:ci   # fails if any route lacks an ai-tool.ts (or has an orphan handler)
npm run ai:validate:ci   # fails if a tool definition is malformed or path-mismatched
```

For the full deep-dive — UI patterns, intent modules, approval flow, naming conventions, deterministic testing — drop the maintaining-glirastes-tools skill into your agent. See Agent skills below.
CLI
The SDK includes a glirastes CLI for scaffolding, code generation, and validation. Run any command with npx glirastes <command> or bunx glirastes <command>:
| Command | What it does |
|---|---|
| generate | Generate endpoint tools from an OpenAPI spec |
| generate-tools | Scan *.ai-tool.ts files, emit registry |
| generate-endpoint-tools | Endpoint tool registry only |
| generate-ui-tools | UI tool registry + action IDs |
| generate-modules | Module (intent) registry |
| generate-skills | Export tools as Claude Code / Codex skill files |
| generate-mcp-server | Export tools as a standalone MCP server project |
| generate-auto | Auto-detect layout and run the right generator |
| install-skills | Install the bundled agent skills under skills/ into your repo (e.g. .claude/skills/) |
| validate | Validate OpenAPI spec for x-ai correctness |
| validate-tools | Validate AI tool configuration |
| coverage | Report which API routes have AI tools and which don't |
| scaffold | Generate an ai-tool.ts template from an existing route |
| test | Scaffold an AI behavior test file |
| check-upgrade | Analyze release notes for breaking changes |
| sync | Upload tool schemas to the Glirastes platform |
Agent skills
Glirastes ships pre-built agent skills under skills/ that teach a coding agent (Claude Code, Codex, Cursor, …) how to integrate, maintain, and customize the SDK. Drop them into your repo and the agent picks the right one automatically.
| Skill | Use when |
|---|---|
| integrating-glirastes-nextjs | Adding chat to a Next.js app where Next.js is the backend |
| integrating-glirastes-nestjs | Adding chat to a NestJS backend (with or without a frontend) |
| integrating-glirastes-nextjs-with-nestjs-backend | Full-stack setup: Next.js frontend + separate NestJS backend |
| maintaining-glirastes-tools | Authoring ai-tool.ts, intent modules, codegen, all CLI commands |
| building-glirastes-chat-ui | Customizing the chat UI — theming, mentions, voice input, action bus |
Install via the bundled CLI
After npm install glirastes, run the bundled installer. Two scopes:
- Project-local (default): writes to `./.claude/skills`. Each project gets the skills tied to its own installed `glirastes` version. Best for teams checking the directory into git.
- Global (`-g`): writes to `~/.claude/skills`. Shared across every project on the machine. Best for solo work across multiple Glirastes projects.
```bash
# Project-local — installs into ./.claude/skills (default)
npx glirastes install-skills

# Global — installs into ~/.claude/skills (shared across all projects on this machine)
npx glirastes install-skills -g

# Only the skills relevant to your stack
npx glirastes install-skills --stack nextjs        # Next.js standalone
npx glirastes install-skills --stack nestjs        # NestJS standalone
npx glirastes install-skills --stack nextjs+nestjs # full-stack monorepo

# Different agent? Pass --target
npx glirastes install-skills --target .codex/skills

# Auto-update with the SDK on every npm install
npx glirastes install-skills --symlink

# Preview without writing
npx glirastes install-skills --dry-run

# Overwrite previously installed skills
npx glirastes install-skills --force
```

install-skills reads the bundled skills under `node_modules/glirastes/skills/` and writes them to your repo. The `--symlink` mode keeps them auto-fresh — bumping glirastes and rerunning `npm install` updates the skill content without re-running the installer.
`generate-skills` (CLI command) ≠ these skills. `generate-skills` exports your tools as skill files so external agents can call them. The skills under `skills/` instruct an agent on how to integrate Glirastes itself. Both ship in the npm package; they don't conflict.
License
Licensed under the Apache License, Version 2.0. You may use, modify, and distribute this SDK freely in commercial and non-commercial projects.
