@loopstack/tool-call-example-workflow
v0.20.5
A simple workflow showing how to implement tool calling in an agentic loopstack workflow.
A module for the Loopstack AI automation framework.
This module provides an example workflow demonstrating how to enable LLM tool calling (function calling) with custom tools.
Overview
The Tool Call Example Workflow shows how to build agentic workflows where the LLM can invoke custom tools and receive their results. It demonstrates this by asking about the weather in Berlin, where the LLM calls a getWeather tool to fetch the information.
By using this workflow as a reference, you'll learn how to:
- Create custom tools that the LLM can invoke
- Pass tools to the LLM using the tools parameter
- Use helper functions for conditional routing
- Handle tool call responses with delegateToolCall
- Access tool results via the runtime object
- Build agentic loops that continue until the LLM has a final answer
This example is a useful starting point for developers building AI agents that need to interact with external systems or APIs.
Installation
See SETUP.md for installation and setup instructions.
How It Works
Key Concepts
1. Creating Custom Tools
Define a tool using the @Tool decorator with a description and @Input for the argument schema:
```typescript
@Tool({
  config: {
    description: 'Retrieve weather information.',
  },
})
export class GetWeather implements ToolInterface {
  @Input({
    schema: z.object({
      location: z.string(),
    }),
  })
  async execute(): Promise<ToolResult> {
    return Promise.resolve({
      type: 'text',
      data: 'Mostly sunny, 14C, rain in the afternoon.',
    });
  }
}
```

The description in the @Tool config is passed to the LLM to help it understand when to use the tool.
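To see why the description and schema matter, it helps to picture what the LLM actually receives. The sketch below shows an OpenAI-style function-calling spec that a tool like this could be translated into; the exact wire format Loopstack uses is an assumption here, not taken from its source.

```typescript
// Hypothetical sketch: how a tool's @Tool description and @Input zod
// schema might be serialized into an LLM-facing function spec
// (OpenAI-style). Loopstack's real wire format may differ.
const getWeatherSpec = {
  type: 'function',
  function: {
    name: 'getWeather',
    description: 'Retrieve weather information.', // from @Tool config
    parameters: {
      // derived from z.object({ location: z.string() })
      type: 'object',
      properties: { location: { type: 'string' } },
      required: ['location'],
    },
  },
};

console.log(getWeatherSpec.function.name);
```

The LLM only ever sees this JSON description, which is why a precise description string directly affects when the model chooses to call the tool.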
2. Registering Tools in the Workflow
Register custom tools using the @InjectTool() decorator:
```typescript
@Workflow({
  configFile: __dirname + '/tool-call.workflow.yaml',
})
export class ToolCallWorkflow {
  @InjectTool() getWeather: GetWeather;
  @InjectTool() aiGenerateText: AiGenerateText;
  @InjectTool() delegateToolCall: DelegateToolCall;
  // ...
}
```

3. Passing Tools to the LLM
Provide tools to the LLM via the tools parameter. The tool call is given an id so its result can be referenced through the runtime object. Multiple calls within the same transition can reference earlier results:
```yaml
- id: llm_turn
  from: ready
  to: prompt_executed
  call:
    - id: llm_call
      tool: aiGenerateText
      args:
        llm:
          provider: openai
          model: gpt-4o
        messagesSearchTag: message
        tools:
          - getWeather
    - tool: createDocument
      args:
        id: ${{ runtime.tools.llm_turn.llm_call.data.id }}
        document: aiMessageDocument
        update:
          content: ${{ runtime.tools.llm_turn.llm_call.data }}
```

The LLM decides whether to call a tool based on the user's request. The LLM response is immediately stored as a document using runtime.tools.llm_turn.llm_call.data.
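A reference like runtime.tools.llm_turn.llm_call.data.id is just a dotted path into the runtime object, keyed by transition id and call id. The resolver below is a hypothetical illustration of that lookup, not Loopstack's actual expression engine:

```typescript
// Illustrative only: resolving a dotted runtime reference by plain
// nested property access. The runtime shape mirrors the YAML above;
// the resolve() helper is hypothetical.
const runtime = {
  tools: {
    llm_turn: {
      llm_call: { data: { id: 'msg_1', role: 'assistant' } },
    },
  },
};

// Walk a path such as 'tools.llm_turn.llm_call.data.id', returning
// undefined if any segment is missing.
function resolve(obj: unknown, path: string): unknown {
  return path
    .split('.')
    .reduce((cur: any, key) => (cur == null ? undefined : cur[key]), obj);
}

console.log(resolve(runtime, 'tools.llm_turn.llm_call.data.id')); // msg_1
```

Because results are namespaced by transition id first (llm_turn) and call id second (llm_call), later calls in the same transition can safely reference earlier ones.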
4. Helper Functions for Routing
Define helper functions using the @DefineHelper() decorator for use in conditional expressions:
```typescript
@DefineHelper()
isToolCall(message: { parts?: { type: string }[] } | null | undefined): boolean {
  return message?.parts?.some((part) => part.type.startsWith('tool-')) ?? false;
}
```

Use helpers in transition conditions, passing runtime references:
```yaml
- id: route_with_tool_calls
  from: prompt_executed
  to: ready
  if: '{{ isToolCall runtime.tools.llm_turn.llm_call.data }}'
```

5. Executing Tool Calls
Use delegateToolCall to execute the tool the LLM requested. The result is stored via runtime and immediately saved as a document:
```yaml
- id: route_with_tool_calls
  from: prompt_executed
  to: ready
  if: '{{ isToolCall runtime.tools.llm_turn.llm_call.data }}'
  call:
    - id: delegate
      tool: delegateToolCall
      args:
        message: ${{ runtime.tools.llm_turn.llm_call.data }}
    - tool: createDocument
      args:
        id: ${{ runtime.tools.route_with_tool_calls.delegate.data.id }}
        document: aiMessageDocument
        update:
          content: ${{ runtime.tools.route_with_tool_calls.delegate.data }}
```

6. Runtime Type Declarations
The @Runtime() decorator provides typed access to tool results across transitions:
```typescript
@Runtime()
runtime: {
  tools: {
    llm_turn: {
      llm_call: AiMessageDocumentContentType;
    };
    route_with_tool_calls: {
      delegate: AiMessageDocumentContentType;
    };
  };
};
```

7. Agentic Loop Pattern
The workflow implements an agentic loop:
- LLM Turn - The LLM processes messages and may request a tool call
- Route with Tool Calls - If the LLM requested a tool, execute it and loop back
- Route without Tool Calls - If no tool call, the LLM has finished and the workflow ends
```yaml
- id: route_with_tool_calls
  from: prompt_executed
  to: ready # Loop back for another LLM turn
  if: '{{ isToolCall runtime.tools.llm_turn.llm_call.data }}'

- id: route_without_tool_calls
  from: prompt_executed
  to: end # Workflow complete
```

This pattern allows the LLM to make multiple tool calls before providing a final response.
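Stripped of the workflow machinery, the loop reduces to: ask the LLM, and while its reply contains a tool call, execute the tool and ask again. The sketch below mocks the LLM and the tool; only the message shape (parts with 'tool-' prefixed types) comes from the isToolCall helper above, everything else is illustrative.

```typescript
// Dependency-free sketch of the agentic loop, with a mocked LLM.
type Part = { type: string; text?: string };
type Message = { parts: Part[] };

// Same logic as the @DefineHelper isToolCall above.
function isToolCall(message: Message | null | undefined): boolean {
  return message?.parts?.some((p) => p.type.startsWith('tool-')) ?? false;
}

// Mock LLM: requests a tool on the first turn, answers on the second.
let turn = 0;
function llmTurn(_history: Message[]): Message {
  turn += 1;
  if (turn === 1) {
    return { parts: [{ type: 'tool-getWeather', text: 'Berlin' }] };
  }
  return { parts: [{ type: 'text', text: 'It is mostly sunny in Berlin.' }] };
}

// Mock delegateToolCall: runs the requested tool, returns its result.
function delegateToolCall(_message: Message): Message {
  return { parts: [{ type: 'text', text: 'Mostly sunny, 14C.' }] };
}

const history: Message[] = [{ parts: [{ type: 'text', text: 'Weather in Berlin?' }] }];
let reply = llmTurn(history);
while (isToolCall(reply)) {
  // route_with_tool_calls: execute the tool, loop back for another turn
  history.push(delegateToolCall(reply));
  reply = llmTurn(history);
}
// route_without_tool_calls: no tool requested, this is the final answer
console.log(reply.parts[0].text); // It is mostly sunny in Berlin.
```

In the workflow, the while-loop corresponds to the ready → prompt_executed → ready cycle, and loop exit corresponds to the transition to end.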
Dependencies
This workflow uses the following Loopstack modules:
- @loopstack/core - Core framework functionality
- @loopstack/core-ui-module - Provides the CreateDocument tool
- @loopstack/ai-module - Provides the AiGenerateText and DelegateToolCall tools, plus AiMessageDocument
About
Author: Jakob Klippel
License: Apache-2.0
Additional Resources
- Loopstack Documentation
- Getting Started with Loopstack
- Find more Loopstack examples in the Loopstack Registry
