@output.ai/core
v0.1.8
The core module of the output framework
Core
Provides tools to develop and run a workflow, which is a well-defined logical unit of work.
Structure
Workflows are defined using the core functions ("workflow", "step", "evaluator"). These are defined in separate files, which must be placed within the same folder:
└ workflows
  ├ example
  │  ├ workflow.ts|js    <- workflow entry point
  │  ├ steps.ts|js       <- file containing steps used by the workflow
  │  ├ evaluators.ts|js  <- file containing evaluating functions
  │  └ prompt.prompt     <- a prompt file
  └ other-example
Workflows are the orchestrators and steps are the executors: the workflow only calls the steps, and the steps call the IO operations, like APIs, DBs, LLMs, etc. Evaluators are just a different flavor of step; they work the same, but must return an EvaluationResult object.
Components
Workflow
The main code; it must contain only deterministic orchestration code.
File: workflow.js
Example:
import { workflow, z } from '@output.ai/workflow';
import { guessByName } from './steps.js';
export default workflow( {
  name: 'guessMyProfession',
  description: 'Guess a person\'s profession by their name',
  inputSchema: z.object( {
    name: z.string()
  } ),
  outputSchema: z.object( {
    profession: z.string()
  } ),
  fn: async input => {
    const profession = await guessByName( input.name );
    return { profession };
  }
} )
Workflows can only import the following files:
- Components: evaluators.js, shared_steps.js, steps.js, workflow.js
- Core library: @output.ai/core
- Whitelisted files: types.js, consts.js, constants.js, vars.js, variables.js, utils.js, tools.js, functions.js, shared.js
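For example, a workflow can read constants from a whitelisted file placed in the same folder (a minimal sketch; consts.js and DEFAULT_PROFESSION are hypothetical names):
consts.js
export const DEFAULT_PROFESSION = 'unknown';
workflow.js
import { workflow, z } from '@output.ai/workflow';
import { guessByName } from './steps.js';
import { DEFAULT_PROFESSION } from './consts.js'; // allowed: consts.js is whitelisted
...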
Step
Re-usable units of work that can contain IO, used by the workflow.
File: steps.js
Example:
import { step, z } from '@output.ai/workflow';
import { api } from './api.js';
export const guessByName = step( {
  name: 'guessByName',
  inputSchema: z.string(),
  outputSchema: z.string(),
  fn: async name => {
    const res = await api.consumer( name );
    return res.body;
  }
} )
Shared Steps
By default, steps are exclusive to the workflow, so it is not possible to use these steps from elsewhere. In order to share steps and make them accessible to different workflows, create a shared steps file. This file can be relatively imported from anywhere.
File: shared_steps.js
Example:
export const mySharedStep = step( {
  name: 'mySharedStep',
  ...
} )
And the usage is the same as any step:
workflow.js
import { mySharedStep } from '../../tools/shared_steps.js'
Evaluators
Steps that analyze the LLM response, or take other measurements, are contained in evaluators.
File: evaluators.js
Example:
import { evaluator, z, EvaluationStringResult } from '@output.ai/workflow';
export const judgeResult = evaluator( {
  name: 'judgeResult',
  inputSchema: z.string(),
  fn: async name => {
    ...
    return new EvaluationStringResult( {
      value: 'good',
      confidence: 0.95
    } );
  }
} )
Its usage is the same as steps:
workflow.js
import { workflow, z } from '@output.ai/workflow';
import { judgeResult } from './evaluators.js';
export default workflow( {
  name: 'guessMyProfession',
  inputSchema: z.object( {
    name: z.string()
  } ),
  outputSchema: z.object( {
    result: z.string()
  } ),
  fn: async input => {
    const judgement = await judgeResult( input.name );
    return { result: judgement.value };
  }
} )
Webhooks
Workflows can call webhooks that pause their execution until an answer is given back.
import { workflow, createWebhook } from '@output.ai/workflow';
import { guessByName } from './steps.js';
export default workflow( {
  ...
  fn: async input => {
    ...
    const result = await createWebhook( {
      url: 'http://xxx.xxx/feedback',
      payload: {
        progressSoFar: 'plenty'
      }
    } );
  }
} )
The URL in the example will receive the payload, plus the workflowId:
{
  workflowId: '', // alphanumerical id of the workflow execution
  payload: { }    // the payload sent using createWebhook()
}
To resume the workflow, a POST has to be made with a response payload and the workflowId.
- Production: https://output-api-production.onrender.com/workflow/feedback
- Local: http://localhost:3001/workflow/feedback
Example:
POST http://localhost:3001/workflow/feedback
{
  workflowId,
  payload: {}
}
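Resuming can also be done programmatically; a minimal sketch using Node's built-in fetch against the local API (the approved key in the payload is hypothetical):
const resume = async workflowId => {
  await fetch( 'http://localhost:3001/workflow/feedback', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify( {
      workflowId,
      payload: { approved: true } // handed back as the result of the awaited createWebhook() call
    } )
  } );
};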
Options
All core interface functions (workflow, step, evaluator) have a similar signature, with the following options:
- name: The function name, used to call it internally and identify it in the trace files; must be a code-friendly string;
- description: Human description of the workflow/step, used for the catalog;
- inputSchema: a zod object indicating the type of the argument received by the fn function. It is validated. Omit if the function has no input arguments;
- outputSchema: a zod object indicating the type that the fn function returns. It is validated. Omit if it is void. Evaluators do not have this option, since they must always return an EvaluationResult object;
- fn: The actual implementation of the workflow/step, including all its logic;
- options: Advanced options that will overwrite Temporal's ActivityOptions when calling activities. If used on workflow() it will apply to all activities. If used on step() or evaluator() it will apply only to that underlying activity. If set in both places, the end value will be a merge of the default values, the workflow values and the step values.
Order of precedence
step options > workflow options > default options
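A minimal sketch of how the merge plays out (the option keys follow Temporal's ActivityOptions; the timeout values are made up):
// workflow.js
export default workflow( {
  name: 'example',
  options: { startToCloseTimeout: '1 minute' }, // baseline for every step called by this workflow
  ...
} )
// steps.js
export const slowStep = step( {
  name: 'slowStep',
  options: { startToCloseTimeout: '10 minutes' }, // step options win, so this step gets 10 minutes
  ...
} )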
Developing
To develop workflows you need the code (which will be called the worker), the API and the engine (Temporal).
After having the API and the engine running, to start the worker just run:
`npm run outputai`
Env variables
Necessary env variables to run the worker locally:
- TEMPORAL_ADDRESS: The Temporal backend address; prefer the remote;
- TEMPORAL_NAMESPACE: The name of the namespace; if using remote, use "output-production.i0jzq";
- TEMPORAL_API_KEY: The API key to access remote Temporal. If using local Temporal, leave it blank;
- CATALOG_ID: The name of the local catalog; always set this. Use your email;
- API_AUTH_KEY: The API key to access the Framework API. Local can be blank; for remote, use the proper API key;
- TRACE_LOCAL_ON: A "stringbool" value indicating if traces should be saved locally; needs REDIS_URL;
- TRACE_REMOTE_ON: A "stringbool" value indicating if traces should be saved remotely; needs REDIS_URL and the AWS_* secrets;
- REDIS_URL: The Redis address to connect to. Only necessary when any type of trace is enabled;
- TRACE_REMOTE_S3_BUCKET: The AWS S3 bucket to send the traces to. Only necessary when remote trace is enabled;
- AWS_REGION: AWS region to connect to when sending the traces; must match the bucket region. Only necessary when remote trace is enabled;
- AWS_ACCESS_KEY_ID: AWS key id. Only necessary when remote trace is enabled;
- AWS_SECRET_ACCESS_KEY: AWS secret. Only necessary when remote trace is enabled.
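For a quick local setup, a .env might look like this (a sketch assuming a local Temporal on its default port, local-only traces and a local Redis; adjust to your environment):
TEMPORAL_ADDRESS=localhost:7233
TEMPORAL_NAMESPACE=default
TEMPORAL_API_KEY=
CATALOG_ID=your.name@example.com
API_AUTH_KEY=
TRACE_LOCAL_ON=true
TRACE_REMOTE_ON=false
REDIS_URL=redis://localhost:6379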
