
llamada

v0.0.3

Compose LLMs as functions (TypeScript)

See llamada/README.md for project information (rationale, design goals, etc.).

Installation

npm install llamada

Example 1: Quickstart

Set your API key in your shell first:

export OPENAI_API_KEY="sk-..."

Then:

import {
    defineWithPrompt,
    setFallbackModel,
    createModelReference,
} from "llamada";

// Define a function with a prompt
const detectIntention = defineWithPrompt(
    (review: string) => `
    You are a sentiment analysis tool. Classify the review as a complaint or applause.
    Review: ${review}.`,
);

async function main() {
    // Necessary because prompt-based functions need an LLM model to execute
    setFallbackModel(
        createModelReference({
            provider: "openai",
            model: "gpt-5",
        }),
    );

    // Use function defined with prompt just like an ordinary function
    const intention = await detectIntention(
        "The product was great, I loved it!",
    );
    // {"result": "applause"} | {"result": "complaint"}
    console.info("Intention:", intention);
}

main();

Example 2: Typed return

Set your API key in your shell first:

export OPENAI_API_KEY="sk-..."

Then:
import { z } from "zod"; // Use Zod for type-safe schemas

import {
    defineWithPrompt,
    setFallbackModel,
    createModelReference,
} from "llamada";

// Define a function with a prompt
const detectIntention = defineWithPrompt(
    (review: string) => `
    You are a sentiment analysis tool. Classify the review as a complaint or applause.
    Review: ${review}.`,
    z.enum(["complaint", "applause"]), // Define the return type schema
);

async function main() {
    // Necessary because prompt-based functions need an LLM model to execute
    setFallbackModel(
        createModelReference({
            provider: "openai",
            model: "gpt-5",
        }),
    );

    const intention = await detectIntention(
        "The product was great, I loved it!",
    );
    if (intention === "complaint") {
        console.log("It's a complaint!");
    } else if (intention === "applause") {
        console.log("It's an applause!");
    }
}

main();

Example 3: Explicit model binding

Set your API keys in your shell first:

export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-..."
export GEMINI_API_KEY="XXX..."

Then:
import { z } from "zod";

import { defineWithPrompt, createModelReference } from "llamada";

// Define a function with a prompt
const detectIntention = defineWithPrompt(
    (review: string) => `
    You are a sentiment analysis tool. Classify the review as a complaint or applause.
    Review: ${review}.`,
    z.enum(["complaint", "applause"]),
);

async function main() {
    const gpt5 = createModelReference({
        provider: "openai",
        model: "gpt-5",
        params: {
            reasoning: {
                effort: "high",
            },
        },
    });

    const sonnet45 = createModelReference({
        provider: "anthropic",
        model: "claude-sonnet-4-5",
        max_tokens: 1024,
    });

    const pro25 = createModelReference({
        provider: "google-gemini",
        model: "gemini-2.5-pro",
    });

    const detectIntentionWithGpt5 = detectIntention.bindToModel(gpt5);
    const detectIntentionWithSonnet45 = detectIntention.bindToModel(sonnet45);
    const detectIntentionWithPro25 = detectIntention.bindToModel(pro25);

    // Use function defined with prompt just like an ordinary function
    const comment = "The product was great, I loved it!";
    const intentionGuesses = await Promise.all([
        detectIntentionWithGpt5(comment),
        detectIntentionWithSonnet45(comment),
        detectIntentionWithPro25(comment),
    ]);
    if (intentionGuesses.every((intention) => intention === "complaint")) {
        console.log("It's definitely a complaint!");
    } else if (
        intentionGuesses.every((intention) => intention === "applause")
    ) {
        console.log("It's definitely an applause!");
    } else {
        console.log("Ambiguous intention detected.");
    }
}

main();

Example 4: Bulk model binding with zipbind

import { z } from "zod";

import { defineWithPrompt, createModelReference, zipbind } from "llamada";

const detectIntention = defineWithPrompt(
    (review: string) => `
    You are a sentiment analysis tool. Classify the review as a complaint or applause.
    Review: ${review}.`,
    z.enum(["complaint", "applause"]),
);

const summarizeReview = defineWithPrompt(
    (review: string) => `
    Summarize this review in a single upbeat sentence.
    Review: ${review}.`,
    z.string(),
);

const gpt5 = createModelReference({
    provider: "openai",
    model: "gpt-5",
});

const sonnet45 = createModelReference({
    provider: "anthropic",
    model: "claude-sonnet-4-5",
});

async function main() {
    const functions = zipbind(
        { detectIntention, summarizeReview },
        {
            detectIntention: gpt5,
            summarizeReview: sonnet45,
        },
    );

    const intention = await functions.detectIntention(
        "The product was great, I loved it!",
    );
    const summary = await functions.summarizeReview(
        "Even though the box arrived dented, the device works flawlessly and support was fantastic.",
    );

    console.log("Intention:", intention);
    console.log("Summary:", summary);
}

main();

Example 5: A mini multi-step, concurrent multi-agent system interacting with its environment (a Hacker News top-stories curator)

import { createModelReference, defineWithPrompt } from "llamada";
import z from "zod";

async function fetchHackerNews(): Promise<{ url: string; html: string }[]> {
    const index = await fetch(
        "https://hacker-news.firebaseio.com/v0/topstories.json",
    ).then((res) => res.json() as Promise<number[]>);
    const top5 = index.slice(0, 5);
    const articles = await Promise.all(
        top5.map(async (id) => {
            const item = await fetch(
                `https://hacker-news.firebaseio.com/v0/item/${id}.json`,
            ).then((res) => res.json() as Promise<{ url: string }>);
            if (!item.url) {
                return { url: "", html: "" };
            }
            const html = await fetch(item.url).then((res) => res.text());
            return { url: item.url, html: html.slice(0, 20_000) };
        }),
    );
    return articles;
}

const editorMessageSchema = z.discriminatedUnion("type", [
    z.object({
        type: z.literal("final"),
        newsletter: z.string(),
    }),
    z.object({
        type: z.literal("fetch"),
        urls: z.string().array(),
    }),
    z.object({
        type: z.literal("fetch-results"),
        results: z
            .object({
                url: z.string(),
                content: z.string(),
            })
            .array(),
    }),
    z.object({
        type: z.literal("consult"),
        id: z.string(),
        comments: z.string(),
    }),
    z.object({
        type: z.literal("consult-feedback"),
        respondToId: z.string(),
        feedback: z.string(),
    }),
    z.object({
        type: z.literal("consider"),
        consideration: z.string(),
    }),
]);

type EditorStep = z.infer<typeof editorMessageSchema>;

interface EditorState {
    today: string;
    articles: { url: string; html: string }[];
    steps: EditorStep[];
}

const editor = defineWithPrompt((state: EditorState) => {
    const coreInstructions = `
<instructions>
You are an experienced software developer editing a newsletter, "Tech Today".

Your task is to review the recommendations provided by various curators and compile them into a concise and engaging newsletter format.

You will receive:
- Recent Hacker News postings
- Previous messages sent by you, the chief editor, and by support systems.

You will determine the next step of action.

After reviewing the current state of the process, you will choose one of the following messages to send:
- "fetch": when you need more information about a URL. Provide the list of URLs to fetch in the "urls" field.
- "consider": when you have received new fetch results and will take them into account in your next steps. You should consider at least once.
- "consult": when you want a second opinion from a senior editor. Provide your comments; the reviewer will have all the information you have. You should consult at least once.
- "final": when you have enough information to compile the final newsletter. You will provide the complete text of the newsletter in the "newsletter" field.
  - The newsletter should use markdown formatting.

Here are messages you may receive:
- "fetch-results": when you have previously requested URLs to be fetched, you will receive the results of the fetch in the "results" field, containing the URL and the fetched content.

</instructions>
`;

    return `
${coreInstructions}
<state format="json">
${JSON.stringify(state, null, 2)}
</state>
${coreInstructions}
`;
}, editorMessageSchema);

const reviewer = defineWithPrompt(
    (input: { state: EditorState; editorComments: string }) => {
        const coreInstructions = `
<instructions>
You're a senior editor assisting the chief editor of a tech digest.

Your task is to review the editor's draft of the newsletter and provide feedback on its structure, content, and overall quality.

Be objective and analytical.

You will receive:
- The current state of the newsletter editing process, including recommendations from curators and previous messages sent by the chief editor.

Write your review in the "review" field as a string.
</instructions>
`;
        return `
${coreInstructions}
<editor-comments format="text">
${input.editorComments}
</editor-comments>

<state format="json">
${JSON.stringify(input.state, null, 2)}
</state>
${coreInstructions}
`;
    },
    z.object({
        review: z.string(),
    }),
);

const gpt5High = createModelReference({
    provider: "openai",
    model: "gpt-5",
    params: {
        reasoning: {
            effort: "high",
        },
    },
});

const pro25 = createModelReference({
    provider: "google-gemini",
    model: "gemini-2.5-pro",
});

export async function curateHackerNews(): Promise<string> {
    const articles = await fetchHackerNews();

    const state: EditorState = {
        today: new Date().toISOString(),
        articles,
        steps: [],
    };

    const editorBound = editor.bindToModel(gpt5High);
    const reviewerBound = reviewer.bindToModel(pro25);

    while (true) {
        const message = await editorBound(state);
        state.steps.push(message);
        console.log("Editor message:", message);
        if (message.type === "final") {
            return message.newsletter;
        } else if (message.type === "fetch") {
            const contents = await Promise.all(
                message.urls.map(async (url) => {
                    try {
                        const res = await fetch(url);
                        const html = await res.text();
                        return {
                            url,
                            content: html.slice(0, 20_000),
                        };
                    } catch (e) {
                        return {
                            url,
                            content: `Failed to fetch: ${e}`,
                        };
                    }
                }),
            );
            state.steps.push({
                type: "fetch-results",
                results: contents,
            });
        } else if (message.type === "consult") {
            const review = await reviewerBound({
                state,
                editorComments: message.comments,
            });
            console.log("Reviewer feedback:", review);
            state.steps.push({
                type: "consult-feedback",
                respondToId: message.id,
                feedback: JSON.stringify(review),
            });
        } else if (message.type === "consult-feedback") {
            // noop
        } else if (message.type === "fetch-results") {
            // noop
        } else if (message.type === "consider") {
            // noop
        } else {
            const _exhaustiveCheck: never = message;
        }
    }
}

if (require.main === module) {
    (async () => {
        const newsletter = await curateHackerNews();
        console.log("Final newsletter:\n", newsletter);
    })();
}

Notes on zod

Many Zod schemas that are valid on their own are rejected by the OpenAI SDK, and the failure only surfaces at runtime. For example, the top-level shape must be wrapped in z.object({ result: <your shape> }), and .optional() must be combined with .nullable().

llamada wraps the return schema in z.object() and unwraps the result for you, so you can pass a bare z.string().

Limits

  • Unary functions only. No support for multi-parameter functions yet (but you can of course pass an object).