
@loopstack/tool-call-example-workflow · v0.20.5 · 1,002 downloads

A simple workflow showing how to implement tool calling in an agentic Loopstack workflow.

@loopstack/tool-call-example-workflow

A module for the Loopstack AI automation framework.

This module provides an example workflow demonstrating how to enable LLM tool calling (function calling) with custom tools.

Overview

The Tool Call Example Workflow shows how to build agentic workflows where the LLM can invoke custom tools and receive their results. It demonstrates this by asking about the weather in Berlin, where the LLM calls a getWeather tool to fetch the information.

By using this workflow as a reference, you'll learn how to:

  • Create custom tools that the LLM can invoke
  • Pass tools to the LLM using the tools parameter
  • Use helper functions for conditional routing
  • Handle tool call responses with delegateToolCall
  • Access tool results via the runtime object
  • Build agentic loops that continue until the LLM has a final answer

This example is essential for developers building AI agents that need to interact with external systems or APIs.
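Stripped of framework specifics, the loop this example teaches can be sketched in plain TypeScript. Everything below (the `callLLM` stub, the message shapes, the tool registry) is illustrative, not a Loopstack API:

```typescript
// A minimal, framework-agnostic sketch of an agentic tool-call loop.
type ToolCall = { name: string; args: Record<string, unknown> };
type LLMReply = { text?: string; toolCall?: ToolCall };

// Hypothetical tool registry; mirrors the getWeather example below.
const tools: Record<string, (args: Record<string, unknown>) => string> = {
  getWeather: () => 'Mostly sunny, 14C, rain in the afternoon.',
};

// Stub LLM: requests the tool once, then answers using its result.
function callLLM(history: string[]): LLMReply {
  const hasToolResult = history.some((m) => m.startsWith('tool:'));
  return hasToolResult
    ? { text: 'It is mostly sunny in Berlin, around 14C.' }
    : { toolCall: { name: 'getWeather', args: { location: 'Berlin' } } };
}

function runAgent(prompt: string): string {
  const history = [`user: ${prompt}`];
  for (;;) {
    const reply = callLLM(history);
    if (reply.toolCall) {
      // Execute the requested tool and feed the result back to the LLM.
      const result = tools[reply.toolCall.name](reply.toolCall.args);
      history.push(`tool: ${result}`);
      continue;
    }
    // No tool call: the LLM has produced its final answer.
    return reply.text ?? '';
  }
}
```

The workflow in this package implements the same shape, with transitions and the runtime object standing in for the explicit loop.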

Installation

See SETUP.md for installation and setup instructions.

How It Works

Key Concepts

1. Creating Custom Tools

Define a tool using the @Tool decorator with a description and @Input for the argument schema:

// Import paths are assumptions; check where Tool, Input, ToolInterface,
// and ToolResult are exported from in your Loopstack setup.
import { Input, Tool, ToolInterface, ToolResult } from '@loopstack/core';
import { z } from 'zod';

@Tool({
  config: {
    description: 'Retrieve weather information.',
  },
})
export class GetWeather implements ToolInterface {
  @Input({
    schema: z.object({
      location: z.string(),
    }),
  })
  async execute(): Promise<ToolResult> {
    return Promise.resolve({
      type: 'text',
      data: 'Mostly sunny, 14C, rain in the afternoon.',
    });
  }
}

The description in @Tool config is passed to the LLM to help it understand when to use the tool.
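To make this concrete, tool descriptions and input schemas typically reach the LLM as a JSON tool definition. The OpenAI-style function-calling format is shown here as an illustration; the exact wire format Loopstack emits may differ:

```typescript
// What the @Tool description and @Input schema plausibly become on the wire
// (OpenAI-style function-calling format; an assumption, not Loopstack's spec).
const getWeatherDefinition = {
  type: 'function',
  function: {
    name: 'getWeather',
    description: 'Retrieve weather information.', // from the @Tool config
    parameters: {
      // JSON Schema equivalent of z.object({ location: z.string() })
      type: 'object',
      properties: { location: { type: 'string' } },
      required: ['location'],
    },
  },
} as const;
```

A clear, specific description matters: it is the only signal the LLM has for deciding when this tool applies.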

2. Registering Tools in the Workflow

Register custom tools using the @InjectTool() decorator:

@Workflow({
  configFile: __dirname + '/tool-call.workflow.yaml',
})
export class ToolCallWorkflow {
  @InjectTool() getWeather: GetWeather;
  @InjectTool() aiGenerateText: AiGenerateText;
  @InjectTool() delegateToolCall: DelegateToolCall;
  // ...
}

3. Passing Tools to the LLM

Provide tools to the LLM via the tools parameter. The tool call is given an id so its result can be referenced through the runtime object. Multiple calls within the same transition can reference earlier results:

- id: llm_turn
  from: ready
  to: prompt_executed
  call:
    - id: llm_call
      tool: aiGenerateText
      args:
        llm:
          provider: openai
          model: gpt-4o
        messagesSearchTag: message
        tools:
          - getWeather

    - tool: createDocument
      args:
        id: ${{ runtime.tools.llm_turn.llm_call.data.id }}
        document: aiMessageDocument
        update:
          content: ${{ runtime.tools.llm_turn.llm_call.data }}

The LLM decides whether to call a tool based on the user's request; either way, its response is immediately stored as a document via runtime.tools.llm_turn.llm_call.data.

4. Helper Functions for Routing

Define helper functions using the @DefineHelper() decorator for use in conditional expressions:

@DefineHelper()
isToolCall(message: { parts?: { type: string }[] } | null | undefined): boolean {
  return message?.parts?.some((part) => part.type.startsWith('tool-')) ?? false;
}

Use helpers in transition conditions, passing runtime references:

- id: route_with_tool_calls
  from: prompt_executed
  to: ready
  if: '{{ isToolCall runtime.tools.llm_turn.llm_call.data }}'
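The predicate can be exercised standalone; the sample messages below mimic the `parts` shape the helper inspects:

```typescript
// Standalone check of the routing predicate used above.
type Message = { parts?: { type: string }[] } | null | undefined;

function isToolCall(message: Message): boolean {
  return message?.parts?.some((part) => part.type.startsWith('tool-')) ?? false;
}

// A reply that requests a tool call routes back into the loop...
isToolCall({ parts: [{ type: 'tool-getWeather' }] }); // true
// ...while a plain text answer (or no message at all) ends it.
isToolCall({ parts: [{ type: 'text' }] }); // false
isToolCall(null); // false
```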

5. Executing Tool Calls

Use delegateToolCall to execute the tool the LLM requested. The result is stored via runtime and immediately saved as a document:

- id: route_with_tool_calls
  from: prompt_executed
  to: ready
  if: '{{ isToolCall runtime.tools.llm_turn.llm_call.data }}'
  call:
    - id: delegate
      tool: delegateToolCall
      args:
        message: ${{ runtime.tools.llm_turn.llm_call.data }}

    - tool: createDocument
      args:
        id: ${{ runtime.tools.route_with_tool_calls.delegate.data.id }}
        document: aiMessageDocument
        update:
          content: ${{ runtime.tools.route_with_tool_calls.delegate.data }}

6. Runtime Type Declarations

The @Runtime() decorator provides typed access to tool results across transitions:

@Runtime()
runtime: {
  tools: {
    llm_turn: {
      llm_call: AiMessageDocumentContentType;
    };
    route_with_tool_calls: {
      delegate: AiMessageDocumentContentType;
    };
  };
};

7. Agentic Loop Pattern

The workflow implements an agentic loop:

  1. LLM Turn - The LLM processes messages and may request a tool call
  2. Route with Tool Calls - If the LLM requested a tool, execute it and loop back
  3. Route without Tool Calls - If no tool call, the LLM has finished and the workflow ends

- id: route_with_tool_calls
  from: prompt_executed
  to: ready # Loop back for another LLM turn
  if: '{{ isToolCall runtime.tools.llm_turn.llm_call.data }}'

- id: route_without_tool_calls
  from: prompt_executed
  to: end # Workflow complete

This pattern allows the LLM to make multiple tool calls before providing a final response.
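Viewed abstractly, the transitions above form a small state machine. A sketch using the state names from the YAML (the step function itself is illustrative):

```typescript
// The agentic loop as a state machine over the workflow's YAML states.
type State = 'ready' | 'prompt_executed' | 'end';

function nextState(state: State, lastReplyWasToolCall: boolean): State {
  switch (state) {
    case 'ready':
      return 'prompt_executed'; // llm_turn: run the LLM
    case 'prompt_executed':
      return lastReplyWasToolCall
        ? 'ready' // route_with_tool_calls: execute the tool, loop back
        : 'end';  // route_without_tool_calls: final answer reached
    case 'end':
      return 'end';
  }
}
```

Each pass through `ready` is one LLM turn, so the machine cycles until a turn produces no tool call.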

Dependencies

This workflow uses the following Loopstack modules:

  • @loopstack/core - Core framework functionality
  • @loopstack/core-ui-module - Provides CreateDocument tool
  • @loopstack/ai-module - Provides AiGenerateText, DelegateToolCall tools and AiMessageDocument

About

Author: Jakob Klippel

License: Apache-2.0
