
@llamahair/client

v1.0.1

Node.js client for LlamaHair services

LlamaHair Node.js Client

A Node.js client library for interacting with LlamaHair services. This library provides a simple interface for sending prompts and retrieving responses from the LlamaHair API.

Installation

npm install @llamahair/client

Quick Start

import { LlamahairClient } from '@llamahair/client';

// Initialize the client
const client = new LlamahairClient({
    apiKeyId: 'your-api-key-id',
    apiKeySecret: 'your-api-key-secret'
});

// Send a prompt and get response in one call
const response = await client.sendAndRetreive('https://your-prompt-url', {
    llama: {
        id: 'unique-id',
        body: 'Your prompt text'
    }
});

console.log(response.response.output);

Authentication

The library uses a secure authentication mechanism that includes:

  • API Key authentication (apiKeyId and apiKeySecret)
  • Request signing (automatically handled by the client)
  • Timestamp-based request validation

Each request to the API includes:

  • X-API-Key header with your API key ID
  • X-Timestamp header with the current timestamp
  • X-Signature header with a request-specific signature
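The exact signing scheme is not documented here, but request signatures of this kind are commonly an HMAC over the timestamp and request body. The sketch below is purely illustrative: the HMAC-SHA256 algorithm and the `timestamp.body` payload layout are assumptions, not LlamaHair's actual scheme (the client handles signing for you).

```typescript
import { createHmac } from 'node:crypto';

// Hypothetical sketch of how an X-Signature header *might* be produced.
// The real payload layout and algorithm used by LlamaHair are assumed here.
function signRequest(apiKeySecret: string, timestamp: number, body: string): string {
    return createHmac('sha256', apiKeySecret)
        .update(`${timestamp}.${body}`) // assumed payload layout
        .digest('hex');
}

const sig = signRequest('secret', 1700000000, '{"llama":{"id":"1"}}');
console.log(sig.length); // 64 hex characters for SHA-256
```

Because the signature depends on the timestamp, a captured request cannot simply be replayed later.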

Configuration

You can configure the client either through environment variables or by passing options to the constructor:

Environment Variables

LLAMAHAIR_API_KEY_ID=your-api-key-id
LLAMAHAIR_API_SECRET=your-api-key-secret
LLAMAHAIR_BASE_URL=https://api.llamahair.ai  # Optional: defaults to https://api.llamahair.ai

Constructor Options

const client = new LlamahairClient({
    apiKeyId: 'your-api-key-id',
    apiKeySecret: 'your-api-key-secret'
});

API Reference

LlamahairClient

send(promptUrl: string, request: LlamaSendRequest): Promise<LlamaSendResponse>

Sends a prompt to the specified URL and returns a job ID.

const jobIdResponse = await client.send('https://your-prompt-url', {
    llama: {
        id: 'unique-id',
        body: 'Your prompt text'
    }
});

retreive(request: LlamaOutputRequest): Promise<LlamaResponse>

Retrieves the results for a specific job ID. This method implements a polling mechanism that:

  • Automatically retries every 250ms until the response is ready
  • Times out after 45 seconds of polling
  • Handles each response status appropriately

const response = await client.retreive({ jobId: 'your-job-id' });

The retreive operation resolves or rejects based on the job status:

  • completed: The request succeeded; the response is returned immediately
  • failed: The request failed; an error with details is thrown
  • Any other status (such as pending): An automatic retry is triggered after a 250ms delay

If polling exceeds 45 seconds, the method throws a timeout error.
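The retry-with-timeout behavior described above can be sketched as a generic polling helper. This is an illustrative reimplementation of the pattern, not the library's internal code; the `JobStatus` shape is a simplified stand-in.

```typescript
// Illustrative sketch of the poll-until-done pattern described above;
// not the library's actual implementation.
type JobStatus = { status: 'completed' | 'failed' | 'pending'; output?: string };

async function pollUntilDone(
    check: () => Promise<JobStatus>,
    intervalMs = 250,
    timeoutMs = 45_000,
): Promise<JobStatus> {
    const deadline = Date.now() + timeoutMs;
    while (Date.now() < deadline) {
        const job = await check();
        if (job.status === 'completed') return job;     // done: return immediately
        if (job.status === 'failed') throw new Error('Job failed');
        await new Promise((r) => setTimeout(r, intervalMs)); // retry after delay
    }
    throw new Error(`Polling timed out after ${timeoutMs}ms`);
}
```

Here `check` would wrap whatever call fetches the current job status from the API.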

sendAndRetreive(promptUrl: string, request: LlamaSendRequest): Promise<LlamaResponse>

A convenience method that combines the send and retreive operations into a single call.

const response = await client.sendAndRetreive('https://your-prompt-url', {
    llama: {
        id: 'unique-id',
        body: 'Your prompt text'
    }
});

Response Types

The API can return different types of responses:

type LlamaResponse = {
    type: "response";
    id: string;
    identifier: string;
    timestamp: number;
    response: {
        output?: string;         // Single string output
        outputs?: string[];      // Array of string outputs
        summary?: string;        // Summary of the response
        extracted_values?: {     // Key-value pairs of extracted data
            key: string;
            value: string;
        }[];
    }
}
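Since a response may carry either a single `output` or an array of `outputs`, a small normalizer can keep calling code uniform. This is a convenience sketch, not part of the library:

```typescript
// Sketch: normalize the response payload to a string array, since the
// API may populate either `output` or `outputs`. Not part of the library.
type LlamaResponseBody = { output?: string; outputs?: string[] };

function allOutputs(body: LlamaResponseBody): string[] {
    if (body.outputs?.length) return body.outputs;
    if (body.output !== undefined) return [body.output];
    return [];
}

console.log(allOutputs({ output: 'hello' }));     // ['hello']
console.log(allOutputs({ outputs: ['a', 'b'] })); // ['a', 'b']
```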

Webhook Validation

The library includes a webhook validator for handling LlamaHair webhooks:

import { LlamahairWebhookValidator } from '@llamahair/client';

const validator = new LlamahairWebhookValidator({
    secret: 'your-webhook-secret'
});

// Check if request needs validation
if (validator.shouldValidate(req)) {
    const signature = validator.validate({
        type: req.body.type,
        timestamp: req.body.timestamp,
        value: req.body.value
    });
    // Compare signature with request signature
}
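When comparing the computed signature against the one carried by the incoming request, a constant-time comparison avoids leaking information through timing differences. How the webhook transports its signature is not specified here; the comparison itself is a general pattern, independent of LlamaHair specifics:

```typescript
import { timingSafeEqual } from 'node:crypto';

// Constant-time comparison of two hex-encoded signature strings; a general
// webhook-validation pattern, not part of the LlamaHair library.
function signaturesMatch(expected: string, received: string): boolean {
    const a = Buffer.from(expected, 'hex');
    const b = Buffer.from(received, 'hex');
    // timingSafeEqual throws on length mismatch, so guard first.
    if (a.length === 0 || a.length !== b.length) return false;
    return timingSafeEqual(a, b);
}
```

A plain `expected === received` comparison can, in principle, let an attacker probe the signature byte by byte via response timing.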

Error Handling

The library throws specific errors for different scenarios:

  • Request timeout (after 45 seconds)
  • Request failure with status details
  • API authentication errors
  • Network-related errors

It's recommended to wrap API calls in try-catch blocks:

try {
    const response = await client.sendAndRetreive('https://your-prompt-url', {
        llama: {
            id: 'unique-id',
            body: 'Your prompt text'
        }
    });
} catch (error) {
    console.error('Error:', error);
}