
@far-world-labs/verblets

v0.3.2 · 20 downloads

Verblets is a collection of tools for building LLM-powered applications.

Verblets

Verblets /ˈvaɪb.ləts/ is a utility library of AI-powered functions for creating new kinds of applications. Verblets are composable and reusable while constraining outputs to ensure software reliability. Many of the API interfaces are familiar to developers but support intelligent operations in ways classical analogues do not, sometimes via complex algorithms that have surprising results.

Instead of mimicking humans to automate tasks, an AI standard library extends people in new ways through intelligent software built on those utilities. Top-down attempts to automate people away lead to forgetful, error-prone, NPC-like replicas that can't be shaped the way software can. By contrast, AI-based software tools create new roles and new ways of working together for us humans.

Why the name? Verblets are verbally based: they're LLM-powered, and you can think of functions as verbs.

Repository Guide

Quick Links

  • Chains - Prompt chains and algorithms based on LLMs
  • Verblets - Core AI utility functions. At most a single LLM call, no prompt chains.
  • Library Helpers - Utility functions and wrappers
  • Prompts - Reusable prompt templates
  • JSON Schemas - Data validation schemas

Utilities

Primitives

Primitive verblets extract basic data types from natural language with high reliability. They constrain LLM outputs to prevent hallucination while handling the complexity of human expression.

  • bool - Interpret yes/no, true/false, and conditional statements
  • date - Parse dates from relative expressions, natural language, standard formats, and longer descriptions
  • enum - Convert free-form input to exactly one of several predefined options
  • number - Convert a block of text to a single number
  • number-with-units - Parse measurements and convert between unit systems
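As a rough illustration of the primitive pattern (not the library's actual API), a bool-style verblet constrains free-form model output to a strict boolean. The `llm` stand-in below is hypothetical; the constraint-and-parse logic is the point:

```javascript
// Sketch of the primitive-verblet pattern: force the model to answer
// in a parseable vocabulary, then validate before returning a value.
async function bool(text, { llm = fakeLlm } = {}) {
  const raw = await llm(`Answer strictly "true" or "false": ${text}`);
  const answer = raw.trim().toLowerCase();
  if (answer !== 'true' && answer !== 'false') {
    throw new Error(`Unconstrained model output: ${raw}`);
  }
  return answer === 'true';
}

// Stand-in model that "understands" a couple of phrasings.
async function fakeLlm(prompt) {
  return /yes|affirmative/i.test(prompt) ? 'true' : 'false';
}
```

Rejecting anything outside the allowed vocabulary is what keeps a hallucinated answer from silently flowing into downstream logic.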

Math

Math chains transform values using conceptual reasoning and subjective judgments beyond simple calculations.

  • scale - Convert qualitative descriptions to numeric values. Uses a specification generator to maintain consistency across invocations.
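The consistency idea behind scale can be sketched without any model call: generate a specification once (here a fixed anchor table; in the real chain an LLM would produce it), then reuse it for every invocation so repeated calls map the same wording to the same number. Names and shapes below are illustrative, not the library's API:

```javascript
// Sketch: a one-time specification (ordered [label, value] anchors)
// shared across calls keeps qualitative-to-numeric mapping stable.
function makeScale(anchors) {
  return function scale(description) {
    const hit = anchors.find(([label]) =>
      description.toLowerCase().includes(label)
    );
    return hit ? hit[1] : null;
  };
}

const spiciness = makeScale([
  ['mild', 1],
  ['medium', 5],
  ['blistering', 10],
]);
```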

Lists

List operations transform, filter, and organize collections. They handle both individual items and batch processing for datasets larger than a context window. Many support bulk operation with built-in retry, and many have alternative single-invocation versions in the verblets directory. Several utilities gain list support through specification generators that maintain continuity across batches, or through prompt fragments that adapt single-invocation behavior to list processing.

  • central-tendency - Find the most representative examples from a collection
  • detect-patterns - Identify repeating structures, sequences, or relationships in data
  • detect-threshold - Find meaningful breakpoints in numeric values, for use in metrics and alerting
  • entities - Extract names, places, organizations, and custom entity types
  • filter - Keep items matching natural language criteria through parallel batch processing
  • find - Return the single best match using parallel evaluation with early stopping
  • glossary - Extract key terms and generate definitions from their usage
  • group - Cluster items by first discovering categories then assigning members
  • intersections - Find overlapping concepts between all item pairs
  • list - Extract lists from prose, tables, or generate from descriptions
  • list-expand - Add similar items matching the pattern of existing ones
  • map - Transform each item using consistent rules applied in parallel batches
  • reduce - Combine items sequentially, building up a result across batches
  • score - Rate items on multiple criteria using weighted evaluation
  • sort - Order by complex criteria using tournament-style comparisons
  • tags - Apply vocabulary-based tags to categorize items
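The bulk pattern shared by these operations can be sketched as follows: split the collection into batches that fit a context window, process each batch with retries, and reassemble the results. `processBatch` here stands in for an LLM-backed call such as a map over one batch; the batching and retry scaffolding is the part being illustrated:

```javascript
// Sketch of batched list processing with per-batch retry.
async function mapInBatches(
  items,
  processBatch,
  { batchSize = 10, retries = 2 } = {}
) {
  const out = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    let lastErr;
    for (let attempt = 0; attempt <= retries; attempt += 1) {
      try {
        out.push(...(await processBatch(batch)));
        lastErr = undefined;
        break;
      } catch (err) {
        lastErr = err; // retry this batch only, not the whole list
      }
    }
    if (lastErr) throw lastErr;
  }
  return out;
}
```

Retrying per batch means one flaky model response costs a single batch's worth of work rather than the whole dataset.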

Content

Content utilities generate, transform, and analyze text while maintaining structure and meaning. They handle creative tasks, system analysis, and privacy-aware text processing.

  • anonymize - Replace names, dates, and identifying details with placeholders
  • category-samples - Create examples ranging from typical to edge cases
  • collect-terms - Find domain-specific or complex vocabulary
  • commonalities - Identify what items share conceptually, not just literally
  • conversation - Manage multi-turn dialogues with memory and context tracking
  • disambiguate - Determine which meaning of ambiguous terms fits the context
  • dismantle - Break down systems into parts, subparts, and their connections
  • document-shrink - Remove less relevant sections while keeping query-related content
  • fill-missing - Predict likely content for redacted or corrupted sections
  • filter-ambiguous - Flag items that need human clarification
  • join - Connect text fragments by adding transitions and maintaining flow
  • name - Parse names handling titles, suffixes, and cultural variations
  • name-similar-to - Generate names following example patterns
  • people - Build artificial person profiles with consistent demographics and traits. Useful as LLM roles.
  • pop-reference - Match concepts to movies, songs, memes, or cultural touchstones
  • questions - Generate follow-up questions that branch from initial inquiry
  • relations - Extract relationship tuples from text
  • schema-org - Convert unstructured data to schema.org JSON-LD format
  • socratic - Ask questions that reveal hidden assumptions and logic gaps
  • split - Find topic boundaries in continuous text
  • summary-map - Build layered summaries for navigating large documents
  • tag-vocabulary - Generate and refine tag vocabularies through iterative analysis
  • themes - Surface recurring ideas through multi-pass extraction and merging
  • timeline - Order events chronologically from scattered mentions
  • to-object - Extract key-value pairs from natural language descriptions
  • truncate - Remove trailing content after a semantic boundary
  • veiled-variants - Reword queries to avoid triggering content filters
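To make the anonymize output shape concrete, here is a deliberately dumb sketch that swaps one kind of identifying detail for a placeholder. The real utility would use model judgment to find names and free-form dates; this regex version only shows what callers get back:

```javascript
// Sketch: replace ISO-formatted dates with a placeholder token.
// A real anonymizer would handle names, addresses, and fuzzy dates.
function anonymizeDates(text) {
  return text.replace(/\b\d{4}-\d{2}-\d{2}\b/g, '[DATE]');
}
```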

Utility Operations

Utility operations cover uncategorized functionality such as automatic tool selection, intent parsing, and context compression.

  • ai-arch-expect - Validate AI architecture constraints and patterns
  • auto - Match task descriptions to available tools using function calling
  • expect - Check if conditions are met and explain why if not
  • expect chain - Validate complex data relationships with detailed failure analysis
  • intent - Extract action and parameters from natural language commands
  • llm-logger - Summarize log patterns and detect anomalies across time windows
  • sentiment - Classify as positive, negative, or neutral with nuance detection
  • set-interval - Schedule tasks using natural language time descriptions
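The expect idea, that a failed check should come back with an explanation rather than just a boolean, can be sketched without a model. In the library the explanation would be model-generated; here it is templated, and the function shape is an assumption, not the actual API:

```javascript
// Sketch of an expect-style check: pass/fail plus a human-readable
// explanation on failure.
function expect(actual, predicate, description) {
  if (predicate(actual)) return { ok: true };
  return {
    ok: false,
    explanation: `Expected ${JSON.stringify(actual)} to satisfy: ${description}`,
  };
}
```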

Codebase

Codebase utilities analyze, test, and improve code quality using AI reasoning.

  • scan-js - Examine JavaScript for patterns, anti-patterns, and potential issues
  • test - Generate test cases covering happy paths, edge cases, and error conditions
  • test-advice - Identify untested code paths and suggest test scenarios

Library Helpers

Helpers support higher-level operations. They make no LLM calls and are often synchronous.

  • chatgpt - OpenAI ChatGPT wrapper
  • prompt-cache - Cache prompts and responses
  • retry - Retry asynchronous calls
  • ring-buffer - Circular buffer implementation for running LLMs on streams of data
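The ring-buffer helper's role is easy to picture: keep only the most recent window of a stream so each LLM call sees bounded context. A minimal sketch of that behavior (implementation details are illustrative, not the library's):

```javascript
// Sketch of a fixed-capacity ring buffer: once full, each push
// evicts the oldest entry, keeping a sliding window over a stream.
class RingBuffer {
  constructor(capacity) {
    this.capacity = capacity;
    this.items = [];
  }

  push(item) {
    this.items.push(item);
    if (this.items.length > this.capacity) this.items.shift();
  }

  toArray() {
    return [...this.items];
  }
}
```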

Contributing

Help us explore what's possible when we extend software primitives with language model intelligence.

License

All Rights Reserved - Far World Labs