@loopstack/prompt-structured-output-example-workflow
v0.20.6
A simple workflow showing how to work with structured LLM output in Loopstack.
@loopstack/prompt-structured-output-example-workflow
A module for the Loopstack AI automation framework.
This module provides an example workflow demonstrating how to generate structured output from an LLM using a custom document schema.
Overview
The Prompt Structured Output Example Workflow shows how to use the aiGenerateDocument tool to get structured, typed responses from an LLM. It generates a "Hello, World!" script in a user-selected programming language, with the response structured into filename, description, and code fields.
By using this workflow as a reference, you'll learn how to:
- Define custom document schemas with Zod for structured LLM output
- Use the aiGenerateDocument tool to generate typed responses
- Create custom documents with form configuration
- Access structured results via the runtime object
This example builds on the basic prompt pattern and is ideal for developers who need typed, structured responses from LLMs.
Installation
See SETUP.md for installation and setup instructions.
How It Works
Key Concepts
1. Custom Document Schema
Define a Zod schema for the structured output:
export const FileDocumentSchema = z
  .object({
    filename: z.string(),
    description: z.string(),
    code: z.string(),
  })
  .strict();

export type FileDocumentType = z.infer<typeof FileDocumentSchema>;
Create a document class that uses this schema with the @Document decorator and @Input for the content:
@Document({
  configFile: __dirname + '/file-document.yaml',
})
export class FileDocument implements DocumentInterface {
  @Input({ schema: FileDocumentSchema })
  content: FileDocumentType;
}
2. Document UI Configuration
Configure how the document is displayed in the UI:
type: document
ui:
  form:
    order:
      - filename
      - description
      - code
    properties:
      filename:
        title: File Name
        readonly: true
      description:
        title: Description
        readonly: true
      code:
        title: Code
        widget: code-view
3. Enum Arguments with Select Widget
Use Zod enums to provide a dropdown selection in the UI:
@Input({
  schema: z.object({
    language: z.enum(['python', 'javascript', 'java', 'cpp', 'ruby', 'go', 'php']).default('python'),
  }),
})
args: {
  language: string;
};
Configure the select widget in YAML:
ui:
  form:
    properties:
      language:
        title: 'What programming language should the script be in?'
        widget: select
4. Generating Structured Output
Use aiGenerateDocument with a response.document to get typed output. The tool call is given an id so its result can be referenced later:
- id: prompt
  from: ready
  to: prompt_executed
  call:
    - id: llm_call
      tool: aiGenerateDocument
      args:
        llm:
          provider: openai
          model: gpt-4o
        response:
          document: fileDocument
        prompt: |
          Create a {{ args.language }} script that prints 'Hello, World!' to the console.
          Wrap the code in triple-backticks.
The LLM response is automatically parsed into the FileDocument schema.
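Loopstack performs this parsing internally. Purely as an illustration of the idea, here is a hypothetical standalone sketch of stripping a triple-backtick fence from a reply before the contents could land in a code field; extractFencedCode is an invented helper, not Loopstack API:

```typescript
// Hypothetical helper (not part of Loopstack): pull the contents of the
// first triple-backtick fence out of a raw LLM reply.
function extractFencedCode(reply: string): string | null {
  const fence = '`'.repeat(3); // built dynamically to avoid a literal fence in this snippet
  const pattern = new RegExp(`${fence}[\\w-]*\\n([\\s\\S]*?)${fence}`);
  const match = reply.match(pattern);
  return match ? match[1].trimEnd() : null;
}

// A reply shaped like the one the prompt above asks for.
const reply = [
  'Here is your script:',
  '`'.repeat(3) + 'python',
  "print('Hello, World!')",
  '`'.repeat(3),
].join('\n');

console.log(extractFencedCode(reply)); // print('Hello, World!')
```

In practice the aiGenerateDocument tool hands you the already-validated document, so code like this is only useful for understanding what structured-output parsing has to do under the hood.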
5. Accessing Results via Runtime
Instead of using assign to save results to workflow state, tool results are accessed through the runtime object. The path follows the pattern runtime.tools.<transitionId>.<toolCallId>.data:
- id: add_message
  from: prompt_executed
  to: end
  call:
    - tool: createDocument
      args:
        id: status
        document: aiMessageDocument
        update:
          content:
            role: assistant
            parts:
              - type: text
                text: |
                  Successfully generated: {{ runtime.tools.prompt.llm_call.data.content.description }}
The TypeScript class declares the runtime types with the @Runtime() decorator:
@Runtime()
runtime: {
  tools: Record<'prompt', Record<'llm_call', ToolResult<DocumentEntity<FileDocumentType>>>>;
};
Dependencies
This workflow uses the following Loopstack modules:
- @loopstack/core - Core framework functionality
- @loopstack/core-ui-module - Provides the CreateDocument tool
- @loopstack/ai-module - Provides the AiGenerateDocument tool and AiMessageDocument
About
Author: Jakob Klippel
License: Apache-2.0
Additional Resources
- Loopstack Documentation
- Getting Started with Loopstack
- Find more Loopstack examples in the Loopstack Registry
