
@ai-d/aid · v0.1.5 · 245 downloads

Aid: TypeScript Library for Typed LLM Interactions

A.I. :D

Aid is a TypeScript library designed for developers working with Large Language Models (LLMs) such as OpenAI's GPT-4 (including Vision) and GPT-3.5. The library focuses on ensuring consistent, typed outputs from LLM queries, enhancing the reliability and usability of LLM responses. Advanced users can leverage few-shot examples for more sophisticated use cases. It provides a structured and type-safe way to interact with LLMs.

Features

  • Typed Response: Aid leverages TypeScript and JSON Schema to ensure consistent, reliable outputs from LLMs that adhere to the predefined schema.
  • Task Based: Easily define custom tasks with specific input and output types, streamlining the process of LLM interactions.
  • Few-Shot Learning Support: Allows for the provision of few-shot prompt examples to guide the LLM in producing the desired output.
  • Visual Task Support: Includes support for visual tasks with image inputs, harnessing the power of OpenAI's GPT-4 Vision. See the vision example below.
  • OpenAI Integration: Integrates with OpenAI's official library to provide a seamless experience.
  • Customizable: Allows the use of custom LLM models; just implement the QueryEngine function. See the examples below.

Installation

pnpm install @ai-d/aid
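
If you prefer npm or yarn over pnpm, the package installs the same way from the npm registry:

npm install @ai-d/aid
# or
yarn add @ai-d/aid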

Usage

Basic Setup

First, import the necessary modules and set up your OpenAI instance:

import { OpenAI } from "openai";
import { Aid } from "@ai-d/aid";

const openai = new OpenAI({ apiKey: "your-api-key" });
const aid = Aid.from(openai, { model: "gpt-4-1106-preview" });

For visual tasks, set up Aid with a GPT-4 Vision query engine:

import { OpenAI } from "openai";
import { Aid, OpenAIQuery } from "@ai-d/aid";

const openai = new OpenAI({ apiKey: "your-api-key" });
const aid = Aid.vision(
    OpenAIQuery(openai, { model: "gpt-4-vision-preview", max_tokens: 2048 }),
);

Aid is not limited to OpenAI; other providers can be used as well. For example, Cohere's Command:

import { Aid, CohereQuery } from "@ai-d/aid";

const aid = Aid.chat(
    CohereQuery(COHERE_TOKEN, { model: "command" }),
);

You can implement your own QueryEngine function.
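
The exact QueryEngine contract isn't spelled out in this readme, so the following is only a rough sketch under the assumption that a query engine is, roughly, an async function that takes the composed prompt and returns the model's raw text. Check the package's type definitions for the real signature.

// Hypothetical sketch: the real QueryEngine type exported by "@ai-d/aid" may differ.
// Assumption: a query engine maps a composed prompt string to the model's raw text output.
async function myQueryEngine(prompt: string): Promise<string> {
    // Forward the prompt to any backend you like, e.g. a self-hosted HTTP endpoint (hypothetical URL).
    const res = await fetch("https://my-llm.example.com/generate", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ prompt }),
    });
    const data = await res.json();
    return data.text; // raw model output; Aid parses it against the task schema
}

// const aid = Aid.chat(myQueryEngine); // assuming Aid.chat accepts a query engine directly, as with CohereQuery above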

Creating a Custom Task

Define a custom task with expected output types:

import { z } from "zod";

const analyze = aid.task(
    "Summarize and extract keywords",
    z.object({
        summary: z.string().max(300),
        keywords: z.array(z.string().max(30)).max(10),
    }),
);

For the vision setup, define a task with an image-analysis schema:

const analyze = aid.task(
    "Analyze the person in the image",
    z.object({
        gender: z.enum(["boy", "girl", "other"]),
        age: z.enum(["child", "teen", "adult", "elderly"]),
        emotion: z.enum(["happy", "sad", "angry", "surprised", "neutral"]),
        clothing: z.string().max(100),
        background: z.string().max(100),
    }),
);

Executing a Task

Execute the task and handle the output:

const { result } = await analyze("Your input here, e.g. a news article");
console.log(result); // { summary: "...", keywords: ["...", "..."] }

For the vision task, pass the image as a data URI:

import fs from "node:fs";

const datauri = `data:image/png;base64,${fs.readFileSync("path/to/image.png", "base64")}`;

const { result } = await analyze({ images: [{ url: datauri }] });
console.log(result); // { "gender": "boy", "age": "teen", ... }

Advanced Usage with Few-Shot Examples

For more complex scenarios, you can use few-shot examples:

const run_advanced_task = aid.task(
    "Some Advanced Task",
    z.object({
        // Define your output schema here
    }),
    {
        examples: [
            // Provide few-shot examples here
        ],
    }
);

Formulation

Case Parameter -> (join) Task Definition -> (join) Format Constraint -> (perform) Query

The Query and the Format Constraint are defined and implemented by the QueryEngine and the FormatEngine, respectively.

The Task Definition is provided by the user via the task method: the task goal, expected schema, examples, etc.

The Case Parameter is supplied by the user on each individual call: text, an image, etc.
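
As a rough annotation of the earlier usage code (no new API, just the concepts above mapped onto it):

// Task Definition: the task goal, expected schema, and optional few-shot examples,
// declared once with aid.task(...)
const analyze = aid.task(
    "Summarize and extract keywords",
    z.object({
        summary: z.string().max(300),
        keywords: z.array(z.string().max(30)).max(10),
    }),
);

// Case Parameter: the per-call input (text here, images in the vision example)
const { result } = await analyze("Your input here, e.g. a news article");

// Format Constraint and Query: handled internally by the FormatEngine and
// QueryEngine backing the aid instance.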

Contributing

Contributions are welcome! Please submit pull requests with any bug fixes or feature enhancements.