
@output.ai/core

v0.1.8

The core module of the output framework

Core

Provides tools to develop and run a workflow, which is a well-defined logical unit of work.

Structure

Workflows are defined using the core functions workflow, step, and evaluator. These are defined in separate files, which must be placed within the same folder:

└ workflows
  ├ example
    ├ workflow.ts|js <- workflow entry point
    ├ steps.ts|js <- file containing steps used by the workflow
    ├ evaluators.ts|js <- file containing evaluating functions
    └ prompt.prompt <- a prompt file
  └ other-example

The workflow is the orchestrator and the steps are the executors: the workflow only calls steps, and the steps perform the IO operations (APIs, DBs, LLMs, etc.). Evaluators are just a different flavor of step; they work the same way, but must return an EvaluationResult object.

Components

Workflow

The main entry point; it must contain only deterministic orchestration code.

File: workflow.js

Example:

import { workflow, z } from '@output.ai/workflow';
import { guessByName } from './steps.js';

export default workflow( {
  name: 'guessMyProfession',
  description: 'Guess a person\'s profession from their name',
  inputSchema: z.object( {
    name: z.string()
  } ),
  outputSchema: z.object( {
    profession: z.string()
  } ),
  fn: async input => {
    const profession = await guessByName( input.name );
    return { profession };
  }
})

Workflows can only import the following files:

Components

  • evaluators.js
  • shared_steps.js
  • steps.js
  • workflow.js

Core library

  • @output.ai/core

Whitelisted files

  • types.js
  • consts.js
  • constants.js
  • vars.js
  • variables.js
  • utils.js
  • tools.js
  • functions.js
  • shared.js
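As an illustration, one of the whitelisted files can hold pure helpers that workflow.js imports relatively. The file and helper below are hypothetical, just to show the pattern:

```javascript
// utils.js — a whitelisted helper file; keep it to pure, deterministic functions
// (in the real file this would be exported: `export function normalizeName…`)
function normalizeName( name ) {
  return name.trim().toLowerCase();
}

// workflow.js would then import it relatively:
// import { normalizeName } from './utils.js';
```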

Step

Re-usable units of work that can contain IO, used by the workflow.

File: steps.js

Example:

import { step, z } from '@output.ai/workflow';
import { api } from './api.js'

export const guessByName = step( {
  name: 'guessByName',
  inputSchema: z.string(),
  outputSchema: z.string(),
  fn: async name => {
    const res = await api.consumer( name );
    return res.body;
  }
} )

Shared Steps

By default, steps are exclusive to the workflow, so it is not possible to use them from elsewhere. To share steps across workflows, create a shared steps file. This file can be relatively imported anywhere.

File: shared_steps.js

Example:

export const mySharedStep = step( {
  name: 'mySharedStep',
  ...
} )

Usage is the same as for any other step. File: workflow.js

import { mySharedStep } from '../../tools/shared_steps.js'

Evaluators

Evaluators contain the steps that analyze an LLM response or take other measurements.

File: evaluators.js

Example:

import { evaluator, EvaluationStringResult, z } from '@output.ai/workflow';

export const judgeResult = evaluator( {
  name: 'judgeResult',
  inputSchema: z.string(),
  fn: async name => {
    // ...
    return new EvaluationStringResult({
      value: 'good',
      confidence: .95
    });
  }
} )

Its usage is the same as steps: workflow.js

import { workflow, z } from '@output.ai/workflow';
import { judgeResult } from './evaluators.js';

export default workflow( {
  name: 'guessMyProfession',
  inputSchema: z.object( {
    name: z.string()
  } ),
  outputSchema: z.object( {
    result: z.string()
  } ),
  fn: async input => {
    const judgment = await judgeResult( input.name );
    return { result: judgment.value };
  }
})

Webhooks

Workflows can call webhooks that pause their execution until a response is given back.

import { workflow, createWebhook } from '@output.ai/workflow';
import { guessByName } from './steps.js';

export default workflow( {
  // ...
  fn: async input => {
    // ...

    const result = await createWebhook( {
      url: 'http://xxx.xxx/feedback',
      payload: {
        progressSoFar: 'plenty'
      }
    } );

    // execution resumes here once the webhook is answered
  }
})

The URL in the example will receive the payload, plus the workflowId:

{
  workflowId: '', // alphanumeric id of the workflow execution
  payload: { }    // the payload sent using createWebhook()
}

To resume the workflow, a POST has to be made with a response payload and the workflowId.

  • Production: https://output-api-production.onrender.com/workflow/feedback
  • Local: http://localhost:3001/workflow/feedback

Example:

POST http://localhost:3001/workflow/feedback
  {
    workflowId,
    payload: {}
  }
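That resume call can be sketched in plain Node. The request shape follows the example above; the helper name and the payload contents are illustrative:

```javascript
// Build the POST request used to resume a paused workflow.
// The endpoint and body shape follow the feedback example above.
function buildFeedbackRequest( workflowId, payload, base = 'http://localhost:3001' ) {
  return {
    url: `${base}/workflow/feedback`,
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify( { workflowId, payload } )
    }
  };
}

// Usage (requires the API to be running):
// const { url, options } = buildFeedbackRequest( 'abc123', { feedback: 'plenty' } );
// await fetch( url, options );
```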

Options

All core interface functions (workflow, step, evaluator) have a similar signature, with the following options:

  • name: The function name, used to call it internally and to identify it in the trace files; must be a code-friendly string;

  • description: Human description of the workflow/step, used for the catalog;

  • inputSchema: a zod object indicating the type of the argument received by the fn function. It is validated. Omit if it doesn't have input arguments;

  • outputSchema: a zod object indicating the type that the fn function returns. It is validated. Omit if it is void. Evaluators do not have this option, since they must always return an EvaluationResult object;

  • fn: The actual implementation of the workflow/step, including all its logic.

  • options: Advanced options that will overwrite Temporal's ActivityOptions when calling activities.

    If used on workflow() it applies to all activities. If used on step() or evaluator() it applies only to that underlying activity. If set in both places, the end value is a merge of the default, workflow, and step values.

    Order of precedence: step options > workflow options > default options
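Assuming the merge behaves like a shallow object spread (the exact merge semantics are the framework's; this sketch only illustrates the precedence rule), the option values shown are Temporal ActivityOptions fields used as examples:

```javascript
// Illustrative only: how the precedence rule resolves one set of options.
const defaultOptions  = { startToCloseTimeout: '1 minute', retry: { maximumAttempts: 3 } };
const workflowOptions = { startToCloseTimeout: '5 minutes' };            // set on workflow()
const stepOptions     = { retry: { maximumAttempts: 1 } };               // set on step()

// step options > workflow options > default options
const effective = { ...defaultOptions, ...workflowOptions, ...stepOptions };
// startToCloseTimeout comes from the workflow, retry from the step
```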

Developing

To develop workflows you need three pieces: the worker (your workflow code), the API, and the engine (Temporal).

After having the API and the engine running, to start the worker just run:

`npm run outputai`

Env variables

Necessary env variables to run the worker locally:

  • TEMPORAL_ADDRESS: The Temporal backend address; prefer the remote;
  • TEMPORAL_NAMESPACE: The name of the namespace, if using remote, use: "output-production.i0jzq";
  • TEMPORAL_API_KEY: The API key to access remote temporal. If using local temporal, leave it blank;
  • CATALOG_ID: The name of the local catalog, always set this. Use your email;
  • API_AUTH_KEY: The API key to access the Framework API. Local can be blank, remote use the proper API Key;
  • TRACE_LOCAL_ON: A "stringbool" value indicating if traces should be saved locally, needs REDIS_URL;
  • TRACE_REMOTE_ON: A "stringbool" value indicating if traces should be saved remotely, needs REDIS_URL and AWS_* secrets;
  • REDIS_URL: The redis address to connect. Only necessary when any type of trace is enabled;
  • TRACE_REMOTE_S3_BUCKET: The AWS S3 bucket to send the traces. Only necessary when remote trace is enabled;
  • AWS_REGION: AWS region to connect to send the traces, must match the bucket region. Only necessary when remote trace is enabled;
  • AWS_ACCESS_KEY_ID: AWS key id. Only necessary when remote trace is enabled;
  • AWS_SECRET_ACCESS_KEY: AWS secret key. Only necessary when remote trace is enabled;
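A fully local setup (local Temporal, local traces, no AWS) might look like the following .env sketch. All values are placeholders; localhost:7233 is Temporal's default local port, and the namespace/address for your environment may differ:

```shell
# .env sketch for running the worker fully locally — placeholder values
TEMPORAL_ADDRESS=localhost:7233
TEMPORAL_NAMESPACE=default
TEMPORAL_API_KEY=
CATALOG_ID=you@example.com
API_AUTH_KEY=
TRACE_LOCAL_ON=true
TRACE_REMOTE_ON=false
REDIS_URL=redis://localhost:6379
```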