# @ideascol/agents-generator-sdk

v0.8.1
## Usage as a library
### Basic Usage - Get Agents
```tsx
import { useEffect, useState } from 'react';
import {
  AgentClient,
  AgentRequest,
} from '@ideascol/agents-generator-sdk';

export function Agents() {
  const [agents, setAgents] = useState<AgentRequest[]>([]);

  useEffect(() => {
    const client = new AgentClient({
      apiUrl: 'https://api.agentsgenerator.dev',
      apiToken: 'YOUR_API_TOKEN',
    });

    client.agent
      .getAgents(0, 100, 'anonymus')
      .then((result: AgentRequest[]) => {
        setAgents(result);
      });
  }, []);

  return (
    <div>
      <h1>Agents</h1>
      {agents.map((agent) => (
        <div key={agent.name}>{agent.name}</div>
      ))}
    </div>
  );
}
```

### Send Message with Transfer Detection (Synchronous)
The sendMessage method now uses streaming internally to detect agent transfers (handoffs). It returns a promise with the complete response including any transfers that occurred.
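Under the hood this means folding stream events into one final response. The sketch below illustrates that aggregation idea; the event shape is hypothetical, not the SDK's actual wire format:

```typescript
// Hypothetical stream-event shape (illustrative only, not the SDK's internals).
type StreamEvent =
  | { type: 'message'; content: string } // cumulative message text
  | { type: 'transfer'; from_agent?: string; to_agent?: string; reason?: string }
  | { type: 'done'; message_id: string };

interface AggregatedResponse {
  status: string;
  message: string;
  message_id?: string;
  transfers?: { from_agent?: string; to_agent?: string; reason?: string }[];
}

// Fold a finished stream into a single SendMessageResponse-like object.
function aggregate(events: StreamEvent[]): AggregatedResponse {
  const out: AggregatedResponse = { status: 'ok', message: '' };
  for (const ev of events) {
    if (ev.type === 'message') {
      out.message = ev.content; // keep the latest cumulative snapshot
    } else if (ev.type === 'transfer') {
      (out.transfers ??= []).push({
        from_agent: ev.from_agent,
        to_agent: ev.to_agent,
        reason: ev.reason,
      });
    } else if (ev.type === 'done') {
      out.message_id = ev.message_id;
    }
  }
  return out;
}
```
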
```ts
import { AgentClient, SendMessageResponse } from '@ideascol/agents-generator-sdk';

const client = new AgentClient({
  apiUrl: 'https://api.agentsgenerator.dev',
  apiToken: 'YOUR_API_TOKEN',
});

// Send a message and get the complete response with transfers
const response: SendMessageResponse = await client.sendMessage(
  'conversation-id',
  { content: 'Hello, I need help with billing' }
);

console.log('Message:', response.message);
console.log('Message ID:', response.message_id);

// Check if there were any transfers
if (response.transfers && response.transfers.length > 0) {
  console.log('Transfers detected:');
  response.transfers.forEach(transfer => {
    console.log(`  From: ${transfer.from_agent}`);
    console.log(`  To: ${transfer.to_agent}`);
    console.log(`  Reason: ${transfer.reason}`);
  });
}
```

### Send Message with Streaming (Real-time updates)
For real-time updates and custom handling of events, use `sendMessageWithStreaming`:
```ts
import { AgentClient, TransferEvent } from '@ideascol/agents-generator-sdk';

const client = new AgentClient({
  apiUrl: 'https://api.agentsgenerator.dev',
  apiToken: 'YOUR_API_TOKEN',
});

const abort = client.sendMessageWithStreaming(
  'conversation-id',
  { content: 'Hello!' },
  {
    onMessage: (content: string) => {
      console.log('Message update:', content);
    },
    onDone: (messageId: string) => {
      console.log('Message completed:', messageId);
    },
    onTransfer: (transfer: TransferEvent) => {
      console.log('Transfer detected!');
      console.log(`  From: ${transfer.from_agent}`);
      console.log(`  To: ${transfer.to_agent}`);
      console.log(`  Reason: ${transfer.reason}`);
    },
    onError: (error: Error) => {
      console.error('Error:', error);
    },
  }
);

// To abort the stream:
// abort();
```

### TypeScript Types
```ts
interface SendMessageResponse {
  status: string;
  message: string;
  message_id?: string;
  transfers?: TransferEvent[];
}

interface TransferEvent {
  from_agent?: string;
  to_agent?: string;
  reason?: string;
}

interface StreamCallbacks {
  onMessage?: (content: string) => void;
  onDone?: (messageId: string) => void;
  onError?: (error: Error) => void;
  onTransfer?: (transfer: TransferEvent) => void;
}
```

## Inline agents (Genkit-style, no persistence)
Define an agent in code and execute it without persisting the configuration on the platform. The full agent definition travels in each request.
Two distinct credentials; do not confuse them:

- `apiToken` (constructor): platform auth, a Keycloak JWT or a workspace API key (`ack_*`). Optional; when omitted, the SDK runs in anonymous mode and the backend skips workspace context.
- `apiKey` (`defineAgent`): model-provider credential (OpenAI key, etc.). Travels in the request body as `api_key`. Required in anonymous mode.

Anonymous mode rejects `credentialId` references (agent or per-node) and `workspaceAgentTool` nodes; both need workspace scoping.
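The rejection rule can be sketched as a client-side check; the shapes below are illustrative, not the SDK's real types:

```typescript
// Illustrative shapes (not the SDK's actual types).
interface InlineNode { type: string; credentialId?: string }
interface InlineDef { credentialId?: string; nodes?: InlineNode[] }

// Without a platform apiToken there is no workspace to resolve
// credentials or workspace-agent tools against, so both are rejected.
function anonymousModeErrors(def: InlineDef, hasApiToken: boolean): string[] {
  if (hasApiToken) return []; // authenticated: workspace scoping available
  const errors: string[] = [];
  if (def.credentialId) {
    errors.push('credentialId requires a workspace (apiToken)');
  }
  for (const node of def.nodes ?? []) {
    if (node.credentialId) {
      errors.push('per-node credentialId requires a workspace');
    }
    if (node.type === 'workspaceAgentTool') {
      errors.push('workspaceAgentTool requires a workspace');
    }
  }
  return errors;
}
```
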
### Minimal — anonymous, no tools
```ts
import { AgentClient } from '@ideascol/agents-generator-sdk';

const client = new AgentClient({
  apiUrl: 'http://localhost:8000',
  // no apiToken → anonymous mode
});

const agent = client.defineAgent({
  name: 'haiku-bot',
  instructions: 'Write short, original haikus on the topic the user gives.',
  modelProvider: 'openai', // autocompletes 75 providers
  modelName: 'gpt-5-mini', // autocompletes 2050 models
  apiKey: process.env.OPENAI_API_KEY,
});

const r = await agent.run('the sea at dawn');
console.log(r.message);
```

### With local function tools
The SDK runs the callback-tool loop transparently: it executes your local handler when the model calls the tool and posts the result back to continue the same logical turn.
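The core of that loop can be sketched standalone: dispatch each model tool call to the matching local handler and collect results to post back. The shapes are illustrative, not the SDK's internals:

```typescript
// Illustrative shapes (not the SDK's actual types).
interface ToolCall { name: string; arguments: Record<string, unknown> }
interface FunctionTool {
  name: string;
  execute: (args: any) => Promise<unknown> | unknown;
}

// Run each tool call against the locally registered handlers; the
// collected results are what would be posted back so the model can
// finish the same logical turn.
async function runToolCalls(calls: ToolCall[], tools: FunctionTool[]) {
  const byName = new Map(tools.map((t) => [t.name, t]));
  const results: { name: string; result: unknown }[] = [];
  for (const call of calls) {
    const tool = byName.get(call.name);
    if (!tool) throw new Error(`unknown tool: ${call.name}`);
    results.push({ name: call.name, result: await tool.execute(call.arguments) });
  }
  return results;
}
```
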
```ts
const agent = client.defineAgent({
  name: 'weather',
  instructions: 'Use get_weather to answer questions.',
  modelProvider: 'openai',
  modelName: 'gpt-5-mini',
  apiKey: process.env.OPENAI_API_KEY,
  maxToolIterations: 10, // cap on tool round-trips per run
  functions: [
    {
      name: 'get_weather',
      description: 'Return current weather for a city.',
      parameters: {
        type: 'object',
        properties: { city: { type: 'string' } },
        required: ['city'],
      },
      execute: async ({ city }: { city: string }) => ({
        city,
        temp_c: 22,
        conditions: 'sunny',
      }),
    },
  ],
});

const r = await agent.run('Weather in Bogota?');
console.log(r.message);
console.log('tool calls:', r.tool_results?.length);
```

### Stateless with prior history
The SDK replays the full history on each request. This is fine for short sessions; beyond roughly 20 turns, prefer ephemeral conversations.
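As a rough model, assuming one user and one assistant message per turn, stateless replay sends quadratically many messages overall, which is why long sessions get expensive:

```typescript
// Turn t resends the (t - 1) prior user/assistant pairs plus the new
// user message, so the total over n turns is sum(2t - 1) = n^2.
function messagesSentWithReplay(turns: number): number {
  let total = 0;
  for (let t = 1; t <= turns; t++) {
    total += 2 * (t - 1) + 1;
  }
  return total;
}

// With a server-tracked (ephemeral) conversation, each turn sends one message.
const messagesSentEphemeral = (turns: number): number => turns;
```

At 20 turns that is 400 messages sent under replay versus 20 with a server-tracked conversation.
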
```ts
import { InlineMessage } from '@ideascol/agents-generator-sdk';

const history: InlineMessage[] = [
  { role: 'user', content: 'My name is Jairo.' },
  { role: 'assistant', content: 'Nice to meet you, Jairo.' },
];

const r = await agent.run("What's my name?", { history });
```

### SSE streaming — callbacks
```ts
let prev = 0;

await agent.run('write a 6-sentence story about a robot in a forest', {
  callbacks: {
    onMessage: (content) => {
      process.stdout.write(content.slice(prev));
      prev = content.length;
    },
    onToolCall: (tc) => console.log('\n[tool]', tc.name),
    onDone: () => console.log('\n[done]'),
    onError: (err) => console.error(err.message),
  },
});
```

### SSE streaming — AsyncIterable
```ts
let prev = '';

for await (const ev of agent.runStream('tell me a joke')) {
  if (ev.type === 'message' && typeof (ev as any).content === 'string') {
    const c = (ev as any).content as string;
    process.stdout.write(c.slice(prev.length));
    prev = c;
  } else if (ev.type === 'done') {
    break;
  }
}
```

### Ephemeral conversation (multi-turn, server-tracked, TTL)
The backend keeps session state in a TTL'd row, so there is no need to replay history on each turn. Send messages via the returned handle.
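Conceptually, the server-side behavior is a key-value row that expires after its TTL. The sketch below is hypothetical, not the backend's actual implementation:

```typescript
// Hypothetical sketch of a TTL'd conversation store (not backend code).
class TtlStore<T> {
  private rows = new Map<string, { value: T; expiresAt: number }>();

  set(id: string, value: T, ttlSeconds: number, now = Date.now()): void {
    this.rows.set(id, { value, expiresAt: now + ttlSeconds * 1000 });
  }

  get(id: string, now = Date.now()): T | undefined {
    const row = this.rows.get(id);
    if (!row) return undefined;
    if (now > row.expiresAt) {
      this.rows.delete(id); // expired: abandoned sessions clean themselves up
      return undefined;
    }
    return row.value;
  }
}
```
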
```ts
const conv = await agent.startConversation({ ttlSeconds: 600 });
console.log('conversation_id:', conv.conversation_id);

await conv.send('My favorite color is teal.');
const r2 = await conv.send("What's my favorite color?");
console.log(r2.message); // → "Your favorite color is teal."

await conv.close(); // best-effort DELETE
```

### Multi-agent composition (handoffs)
Compose specialist agents and a router via `subAgents`. Each child is itself a `defineAgent` handle, so you can nest arbitrarily deep; the SDK flattens the whole tree into a multi-node graph at run time, and the backend wires each parent→child edge as a handoff.
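The flattening can be sketched as a simple tree walk (hypothetical shapes; the SDK's actual internals may differ): one node per agent, one edge per parent→child link.

```typescript
// Illustrative agent shape (not the SDK's actual type).
interface AgentDef { name: string; subAgents?: AgentDef[] }

// Walk the subAgents tree, emitting one node per agent and one
// handoff edge per parent→child link.
function flatten(root: AgentDef): { nodes: string[]; edges: [string, string][] } {
  const nodes: string[] = [];
  const edges: [string, string][] = [];
  const visit = (agent: AgentDef): void => {
    nodes.push(agent.name);
    for (const child of agent.subAgents ?? []) {
      edges.push([agent.name, child.name]);
      visit(child);
    }
  };
  visit(root);
  return { nodes, edges };
}
```

A tree always yields one fewer edge than it has nodes, which matches the `nodes.length === 3, edges.length === 2` debug output in the examples.
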
```ts
const billing = client.defineAgent({
  name: 'billing',
  instructions: 'You handle billing: invoices, refunds, payments.',
  modelProvider: 'openai',
  modelName: 'gpt-5-mini',
  apiKey: process.env.OPENAI_API_KEY,
});

const tech = client.defineAgent({
  name: 'tech',
  instructions: 'You handle technical issues: outages, errors, integrations.',
  modelProvider: 'openai',
  modelName: 'gpt-5-mini',
  apiKey: process.env.OPENAI_API_KEY,
});

const triage = client.defineAgent({
  name: 'triage',
  instructions:
    'Route the user to the best specialist (billing or tech). ' +
    'Do not answer yourself — perform a handoff.',
  modelProvider: 'openai',
  modelName: 'gpt-5-mini',
  apiKey: process.env.OPENAI_API_KEY,
  subAgents: [billing, tech], // nested agents
});

await triage.run('My invoice for March looks wrong.'); // → handoff to billing
await triage.run('My API returns 500 errors.'); // → handoff to tech

// Inspect the flattened graph (debug):
console.log(triage.definition.nodes!.length); // 3
console.log(triage.definition.edges!.length); // 2
```

Nesting is recursive; a sub-agent can define its own `subAgents`:
```ts
const escalate = client.defineAgent({ name: 'l3', /* ... */ });

const tech = client.defineAgent({
  name: 'l2',
  // ...
  subAgents: [escalate],
});

const root = client.defineAgent({
  name: 'l1',
  // ...
  subAgents: [tech],
});

// root.definition.nodes.length === 3, edges.length === 2 (l1→l2, l2→l3)
```

### Authenticated mode (workspace credential reference)
Pass an `ack_*` workspace API key as `apiToken` to unlock workspace-scoped features: stored credentials, workspace-agent tools, and token-usage tracking.
```ts
const client = new AgentClient({
  apiUrl: 'https://api.agentsgenerator.dev',
  apiToken: 'ack_xxx', // create one in the platform UI → Workspace → API Keys
});

const agent = client.defineAgent({
  name: 'support',
  instructions: '...',
  modelProvider: 'openai',
  modelName: 'gpt-5-mini',
  credentialId: 'cred-uuid', // resolves against the workspace
});

await agent.run('hello');
```

### Typed model catalog
The `ModelProvider` / `ModelName` literal unions provide autocomplete on `modelProvider` and `modelName`. Custom strings are still allowed (forward compatibility). The snapshot can be regenerated from the live backend:
```ts
import {
  MODELS_BY_PROVIDER,
  MODEL_INFO,
  type ModelProvider,
  type ModelName,
} from '@ideascol/agents-generator-sdk';

console.log(MODELS_BY_PROVIDER.anthropic); // ["claude-...", ...]
console.log(MODEL_INFO['gpt-5-mini'].max_input_tokens); // number
```

Regenerate the snapshot:
```sh
API_URL=https://api.agentsgenerator.dev bun run generate:models

# or against a local backend:
API_URL=http://localhost:8000 bun run generate:models

# filters:
MODELS_MODE=chat MODELS_FN_CALLING=true bun run generate:models
```

## Quick start
```sh
# Using npm
npx @ideascol/agents-generator-sdk@latest

# Using bun
bunx @ideascol/agents-generator-sdk@latest
```

## Installation
```sh
# Using npm
npm install -g @ideascol/agents-generator-sdk@latest

# Using bun
bun install -g @ideascol/agents-generator-sdk@latest
```

## Usage as CLI
```sh
npx @ideascol/agents-generator-sdk@latest version --apiToken=1232 --URL=https://api.agentsgenerator.dev
```

```json
{
  "status": "ok",
  "api_version": "dbd7d9ca8a6b1e4622ef409e26cd8addb650e95f",
  "api_branch": "main",
  "api_date": "2025-04-25 17:21:01"
}
```

## Update lib client
```sh
npx openapi-typescript-codegen \
  --input https://api.agentsgenerator.dev/openapi.json \
  --output src/lib/clients/agents-generator \
  --client fetch

npx openapi-typescript-codegen \
  --input http://localhost:8000/openapi.json \
  --output src/lib/clients/agents-generator \
  --client fetch

curl -s http://localhost:8000/openapi.json -o openapi.json
```