@prism-lang/core

Core implementation of the Prism programming language - a language designed for expressing and managing uncertainty in computational systems.

📚 Full Documentation | 🚀 Getting Started | 📖 API Reference

Installation

npm install @prism-lang/core

Quick Start

The easiest way to run Prism code is using the runPrism helper function, which handles parsing and runtime setup for you.

Simple Usage

import { runPrism } from '@prism-lang/core';

// Run Prism code directly
const result = await runPrism('const x = 5 ~> 0.9; x * 2');
console.log(result); // ConfidenceValue { value: 10, confidence: 0.9 }

// Access the actual value
console.log(result.value); // 10
console.log(result.confidence); // 0.9

// With custom globals
const result2 = await runPrism('const area = PI * radius * radius; area', {
  globals: { PI: 3.14159, radius: 5 }
});
console.log(result2); // 78.53975

// With LLM provider
import { MockLLMProvider } from '@prism-lang/llm';

const provider = new MockLLMProvider();
provider.setMockResponse('Hello! How can I help?', 0.85);

const result3 = await runPrism('const response = llm("Hello"); response', {
  llmProvider: provider
});
console.log(result3.value); // "Hello! How can I help?"
console.log(result3.confidence); // 0.85

Note: The globals option currently only supports primitive values (numbers, strings, booleans) and simple objects. Functions cannot be injected as globals due to runtime limitations.

Advanced Usage

import { parse, createRuntime } from '@prism-lang/core';

const code = `
  // Confidence values
  const prediction = "rain" ~> 0.8
  const temperature = 72 ~> 0.95
  
  // Confidence-aware control flow
  uncertain if (prediction) {
    high { "definitely raining" }
    medium { "might rain" }
    low { "probably sunny" }
  }
`;

const ast = parse(code);
const runtime = createRuntime();
const result = await runtime.execute(ast);
console.log(result); // evaluates the high branch ("definitely raining"), since 0.8 >= 0.7

Features

  • Confidence Values: First-class support for uncertainty with the ~> operator
  • Confidence Operators: Extract (<~), multiply (~*), combine (~||>)
  • Uncertain Control Flow: uncertain if/while/for with high/medium/low/default branches
  • Adaptive Confidence: Default branches for confidence recalibration and fallback logic
  • Pattern Matching: Rust-style match expressions with guards and nested patterns
  • Confidence Helpers: Built-in consensus() and aggregate() helpers
  • LLM Integration: Built-in llm() function with automatic confidence
  • Functional Programming: Lambdas, array methods, destructuring
  • Pipeline Operator: Chain operations with |> for cleaner code (see the sketch after this list)
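
The pipeline operator's desugaring isn't spelled out above, so here is a minimal hedged sketch. It assumes x |> f applies f to x (the usual pipeline semantics) and an arrow-style lambda syntax, neither of which is confirmed elsewhere in this README:

import { runPrism } from '@prism-lang/core';

// Assumes |> desugars to plain application: 5 |> double === double(5),
// and that lambdas are written arrow-style.
const piped = await runPrism(`
  const double = (n) => n * 2
  const increment = (n) => n + 1
  5 |> double |> increment
`);
console.log(piped); // 11, if the chain evaluates as increment(double(5))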

Language Guide

Basic Confidence

// Assign confidence
const value = 100 ~> 0.9

// Extract confidence
const conf = <~ value

// Confidence operations
const doubled = value ~* 2  // Maintains confidence
const combined = value1 ~||> value2  // Picks highest confidence
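
From the host side, the same operators can be exercised through runPrism. A minimal sketch, relying on the documented behavior that ~||> picks the operand with the highest confidence:

import { runPrism } from '@prism-lang/core';

// ~||> should keep whichever operand carries the higher confidence.
const choice = await runPrism(`
  const cautious = "retry" ~> 0.4
  const confident = "proceed" ~> 0.9
  cautious ~||> confident
`);
console.log(choice.value);      // "proceed"
console.log(choice.confidence); // 0.9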

Uncertain Control Flow

uncertain if (measurement) {
  high {
    // confidence >= 0.7
    perform_critical_action()
  }
  medium {
    // 0.5 <= confidence < 0.7
    request_human_review()
  }
  low {
    // confidence < 0.5
    abort_and_log()
  }
}

// With default branch for unmatched cases
uncertain while (sensor_reading ~> confidence) {
  high {
    process_automatically()
  }
  low {
    skip_reading()
  }
  default {
    // Handle cases where branch is missing
    // or confidence needs recalibration
    confidence = recalibrate_sensor()
    if (confidence < 0.1) break
  }
}

LLM Integration

// Note: llm() requires provider setup first
const response = llm("Is this safe?")
const conf = <~ response  // Automatic confidence extraction

// With confidence threshold
const safe = response ~> 0.9
if (<~ safe >= 0.9) {
  proceed()
}

llm(prompt, options?) accepts per-call configuration. Pass { provider, model, temperature, maxTokens, topP, timeout, structuredOutput, includeReasoning, confidenceExtractor } to tweak the LLMRequest, or supply an extractor function that receives the raw response object and returns a custom confidence (a number or a confident value):

const analysis = llm("Assess rollout risk", {
  provider: "claude",
  model: "claude-3-sonnet",
  temperature: 0.25,
  extractor: response => response.confidence * 0.9
})

Need live tokens? Use stream_llm():

handle = stream_llm("Summarize the incident report", { provider: "claude", structuredOutput: false })

chunk = await handle.next()
while (chunk) {
  console.log(chunk.text)
  chunk = await handle.next()
}

final = await handle.result()
console.log("Summary:", final.value)

API Reference

Helper Functions

  • runPrism(code: string, options?: RunPrismOptions): Promise<Value> - Run Prism code directly
    • options.globals - Object with global variables to inject
    • options.llmProvider - LLM provider instance
    • options.defaultProviderName - Name for the LLM provider (default: 'default'); see the sketch after this list
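
For instance, the options above can be combined in a single call. A minimal sketch reusing MockLLMProvider from the Quick Start; the question global and the 'mock' name are illustrative, not fixed API values:

import { runPrism } from '@prism-lang/core';
import { MockLLMProvider } from '@prism-lang/llm';

const provider = new MockLLMProvider();
provider.setMockResponse('Looks safe to deploy.', 0.9);

// Inject a string global and register the provider under a custom name.
const verdict = await runPrism('const answer = llm(question); answer', {
  globals: { question: 'Is this rollout safe?' },
  llmProvider: provider,
  defaultProviderName: 'mock'
});
console.log(verdict.value);      // "Looks safe to deploy."
console.log(verdict.confidence); // 0.9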

Parser

  • parse(code: string): Program - Parse Prism code into AST

Runtime

  • createRuntime(options?: RuntimeOptions): Runtime - Create a new runtime. Pass { moduleSystem } to share a preconfigured module loader (custom file readers, caches) across runtimes, and { confidence } to tune confidence propagation/provenance.
  • runtime.execute(ast: Program): Promise<Value> - Execute AST
  • runtime.registerLLMProvider(name: string, provider: LLMProvider) - Register LLM provider
  • runtime.setDefaultLLMProvider(name: string) - Set default LLM provider (see the sketch after this list)
  • runtime.getModuleSystem(): ModuleSystem - Access the module system used by the runtime
  • runtime.invalidateModule(path: string, options?: { invalidateDependents?: boolean }) - Drop a module (and optionally its dependents) from the cache
  • runtime.reloadModule(path: string, options?: { invalidateDependents?: boolean }): Promise<Module> - Convenience helper that invalidates the module, reloads it, and returns the new Module object (useful for hot reload tools)
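
As an illustration, the same provider flow works through the lower-level runtime API. A minimal sketch; the 'mock' name and prompt are illustrative:

import { parse, createRuntime } from '@prism-lang/core';
import { MockLLMProvider } from '@prism-lang/llm';

const runtime = createRuntime();

const provider = new MockLLMProvider();
provider.setMockResponse('Probably rain.', 0.7);

// Register under a name, then make it the default for llm() calls.
runtime.registerLLMProvider('mock', provider);
runtime.setDefaultLLMProvider('mock');

const ast = parse('const forecast = llm("Will it rain?"); forecast');
const result = await runtime.execute(ast);
console.log(result.value); // "Probably rain."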

RuntimeOptions

interface RuntimeOptions {
  moduleSystem?: ModuleSystem; // defaults to a new ModuleSystem()
  confidence?: {
    strategy?: {
      combineMode?: 'average' | 'min' | 'max' | 'multiply';
      consensus?: 'max' | 'min' | 'average';
    };
    trackProvenance?: boolean;
  };
}
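
For example, a pessimistic runtime that keeps provenance — a minimal sketch built straight from the interface above:

import { createRuntime } from '@prism-lang/core';

// Combine confidences with 'min' (pessimistic), average consensus,
// and record where each confidence value came from.
const runtime = createRuntime({
  confidence: {
    strategy: { combineMode: 'min', consensus: 'average' },
    trackProvenance: true
  }
});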

Value Types

  • NumberValue, StringValue, BooleanValue
  • ArrayValue, ObjectValue, FunctionValue
  • ConfidenceValue - Wraps any value with confidence
  • NullValue (the undefined keyword maps to null)

Built-in Functions

  • llm(prompt: string, options?: LLMCallOptions) - Make LLM calls (requires provider setup). options supports { provider, model, temperature, maxTokens, topP, timeout, structuredOutput, includeReasoning, confidenceExtractor, extractor }.
  • stream_llm(prompt: string, options?: LLMCallOptions) - Return a stream handle with next(), cancel(), and result() helpers for incremental output.
  • consensus(values: Array<value>, options?: { strategy?: 'max' | 'min' | 'average' }) - Pick a representative confident value from multiple candidates (see the sketch after this list).
  • aggregate(values: Array<value>, options?: { mode?: 'average' | 'min' | 'max' | 'multiply' }) - Combine confidence from multiple candidates.
  • map(array: Array, fn: Function) - Map over array elements
  • filter(array: Array, fn: Function) - Filter array elements
  • reduce(array: Array, fn: Function, initial?: any) - Reduce array to single value
  • max(...values: number[]) - Find maximum value
  • min(...values: number[]) - Find minimum value
  • Array and Object methods are available as built-in functions
  • Note: Use console.log() for output (no built-in print function)
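
A short hedged sketch of the confidence helpers, assuming option objects are written as ordinary literals in Prism code:

import { runPrism } from '@prism-lang/core';

// consensus picks a representative value; with "max" it should keep
// the candidate carrying the highest confidence.
const winner = await runPrism(`
  const a = "approve" ~> 0.8
  const b = "approve" ~> 0.6
  consensus([a, b], { strategy: "max" })
`);
console.log(winner.confidence); // 0.8 under the "max" strategy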

Array Methods

Arrays support the following methods (see the sketch after the list):

  • .map(fn) - Transform elements
  • .filter(fn) - Filter elements
  • .reduce(fn, initial?) - Reduce to single value
  • .push(...items) - Add elements
  • .forEach(fn) - Iterate over elements
  • .join(separator?) - Join elements as string
  • .length - Get array length
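
A short end-to-end sketch of the methods above, assuming the arrow-style lambda syntax is available:

import { runPrism } from '@prism-lang/core';

// Double each element, keep those above 4, then sum them.
const total = await runPrism(`
  const nums = [1, 2, 3, 4]
  nums.map((n) => n * 2).filter((n) => n > 4).reduce((acc, n) => acc + n, 0)
`);
console.log(total); // 14, i.e. 6 + 8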

Examples

See packages/prism-examples for more complex examples.

License

MIT