
@okeyamy/lua (v5.0.7)

A client-side A/B tester

A progressive, client/server A/B testing library.


Lua is an AB testing library designed to be clear, minimal, and flexible. It works in both the server and browser with the use of driver-based persistence layers.

You can download the compiled JavaScript directly here.

Watch the demo video — UTM personalization and AI-powered content in action.


Problem Statement

Enterprises and product teams need to show the right message to the right user at the right time—without maintaining separate pages per campaign or channel. Static copy forces one-size-fits-all experiences: the same headline for a paid ad visitor and an organic social visitor, the same CTA for first-time and returning users. Manual A/B tests scale poorly, and full-stack personalization often requires heavy platforms, server-side logic, and ongoing engineering. Developers and marketers need a lightweight, programmable way to run experiments and personalize content by traffic source, intent, and history while keeping the core site simple and fast.


Solution

Lua provides a progressive, client/server A/B testing and personalization layer that stays small and optional. You define tests and content variants in code; Lua handles bucketing, persistence, and—optionally—UTM- and AI-driven personalization. No separate CMS or experimentation platform is required. Use the core for classic A/B or multivariate tests; add UTM-based personalization to tailor content by campaign and channel; opt in to AI (OpenAI) to select the best variant or generate dynamic copy from context. Everything can run in the browser with pluggable storage (localStorage, cookies, memory) or on the server, so it fits static sites, SPAs, and full-stack apps. The library is documented, tested, and built so enterprise teams and developers can ship dynamic website experiences without rewriting their stack.


Core Features

| Area | Capability |
|------|------------|
| A/B & multivariate testing | Define tests with weighted buckets (A/B/C/D…), assign users consistently, persist via configurable storage drivers. |
| UTM-based personalization | Infer intent from utm_source, utm_medium, utm_campaign, referrer, and device; show the best-matching content variant automatically. |
| AI-powered personalization | Optional OpenAI integration: Select mode picks the best predefined variant; Generate mode creates headline, subheadline, and CTA copy from context and brand guidelines. |
| Weighted user history | Store visit context in localStorage with exponential decay; returning users get personalization that considers both current UTM and past behavior. |
| Runtime & storage | Browser and server support; drivers for localStorage, cookies, memory, or custom stores. |
| Performance & safety | Lightweight core (~3.8kb); optional AI with caching, timeouts, retries, and fallback to rule-based engine; proxy support for keeping API keys off the client. |


Implementation & Technology

  • Integration: Script tags or bundler; no framework lock-in. Works with static HTML, React, Vue, or any JS environment.
  • Persistence: Pluggable store interface; built-in drivers for localStorage, cookies, and in-memory; custom drivers for server or hybrid use.
  • Personalization pipeline: UTM parsing → intent inference → variant selection (or AI select/generate) → DOM or data application via data-personalize and templates.
  • AI path: Configurable model (default gpt-4o-mini), optional proxy URL for production, structured prompts, and validation of AI responses with fallback.
  • Testing & quality: Jest test suites for core, AI, and weighted history; lint and build via npm/pnpm scripts; demo server for local tryout with optional API key injection.

Expected Results (Enterprise, Developers, End Users)

| Audience | Outcome |
|----------|---------|
| Enterprise / product teams | One library for A/B tests and dynamic copy; fewer one-off integrations; control over variants and brand voice; optional AI without vendor lock-in. |
| Developers | Clear API, small bundle, standard JS; add UTM or AI without rewriting the app; run tests and demo locally; deploy to CDN or existing stack. |
| Marketers / growth | Campaign-specific messaging (e.g. Google vs Facebook vs email) and returning-user awareness without separate landing pages or complex tooling. |
| End users | Relevant headlines and CTAs based on where they came from and prior visits; faster, more consistent experience with minimal layout shift. |


Technology Used

| Layer | Technology |
|-------|------------|
| Language | JavaScript (ES5+); Babel for legacy builds. |
| Build | Rollup; UMD + ES module outputs; minification. |
| Testing | Jest; unit and integration tests for core Lua, UTM, AI personalization, weighted history. |
| Linting | ESLint. |
| AI / personalization | OpenAI Chat Completions API (e.g. gpt-4o-mini); fetch with retry and timeout; optional backend proxy. |
| Storage (browser) | localStorage (weighted history, AI cache); configurable cookie or memory stores. |
| Runtime | Browser and Node; no framework dependency. |



Features

  • Powerful, clear API
  • Many variations: A/B/C/D… (multivariate) testing
  • Intelligent weighted bucketing
  • Browser & Server support
  • Storage Drivers: localStorage, cookies, memory, or build your own
  • AI-Powered Personalization using OpenAI GPT models (opt-in)
  • UTM-Based Content Personalization with automatic intent inference
  • Weighted User History with exponential decay for returning visitors
  • Well documented, tested, and proven in high-traffic production environments
  • Lightweight, weighing in at ~3.8kb (core), with optional AI modules
  • Not tested on animals

Installing

# Via NPM
npm i @okeyamy/lua --save

# Via Bower
bower i lua --save

# Via Yarn
yarn add @okeyamy/lua

Developing

# Install dependencies (use pnpm, npm, or yarn)
pnpm install
# OR: npm install
# OR: yarn install

# Build the library
pnpm run build
# OR: npm run build

# Run linting
pnpm run lint
# OR: npm run lint

# Run all tests (83 tests across 6 suites)
pnpm test
# OR: npm test

Run a single test file

# Run only core Lua tests
pnpm test -- src/__tests__/unit.js

# Run only AI personalization tests
pnpm test -- src/__tests__/ai-personalize.test.js

# Run only weighted history tests
pnpm test -- src/__tests__/weighted-history.test.js

Run the demo

Try the UTM personalization and AI-powered demo locally:

# Option 1: Test server with auto-loaded API key (RECOMMENDED)
pnpm run test:demo
# OR: node test-demo.js

# This will:
# - Read your OpenAI API key from .env file
# - Auto-fill the API key in the demo
# - Enable AI mode by default
# - Serve at http://localhost:3000/demo

If you don't need AI features, you can use a generic static server instead:

# Option 2: Node (npx serve)
npx serve . -p 3000

# Option 3: Python 3
python3 -m http.server 8000

# Option 4: PHP
php -S localhost:8080

Then open the served URL in your browser (e.g. http://localhost:3000/demo for the test server).

On the demo page you can:

  • Use the Test UTM Links in the debug panel to simulate traffic from Google, Facebook, Reddit, etc.
  • Toggle Enable AI Mode, add your OpenAI API key, and click Run AI Personalization to see AI-driven content selection.
  • Use Reset & Reload to clear UTM params, or Clear AI History & Cache to reset stored history.

Note: The demo uses script paths like ../src/utm-personalize.js, so it must be served from the project root (not by opening demo/index.html as a file). The recommended test-demo.js handles this automatically.

Usage

<script src="lua.js"></script>
<script>

  // Set up our test API
  const lua = new Lua({
    store: Lua.stores.local
  });

  // Define a test
  lua.define({
    name: 'new-homepage',
    buckets: {
      control: { weight: 0.6 },
      versionA: { weight: 0.2 },
      versionB: { weight: 0.2 },
    }
  });

  // Bucket the user
  lua.assign();

  // Fetch assignments at a later point
  const info = lua.assignments();
</script>

API

Lua(config)

const lua = new Lua({
  debug: true,
  store: Lua.stores.local
});

This creates a new test API used to define tests, assign buckets, and retrieve information.

Returns: Object

Name | Type | Description | Default
:--- | :--- | :--- | :---
debug | Boolean | Set to true to enable logging of additional information | false
store | Object | An object with get/set properties that will accept information to help persist and retrieve tests | Lua.stores.local
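Because the store is just an object exposing get and set, you can supply your own driver. A minimal in-memory sketch is shown below; the keys Lua actually reads and writes are internal details, so the usage here is illustrative rather than a guaranteed contract:

```javascript
// Minimal custom store: any object exposing get(key) and set(key, value).
// This approximates what an in-memory driver would do.
const memoryStore = {
  data: {},
  get(key) {
    // Return whatever was stored under this key (undefined if absent)
    return this.data[key];
  },
  set(key, value) {
    this.data[key] = value;
  },
};

// Hypothetical usage, passed where you would pass Lua.stores.local:
// const lua = new Lua({ store: memoryStore });

memoryStore.set('lua-assignments', { 'new-homepage': 'variantA' });
console.log(memoryStore.get('lua-assignments')['new-homepage']); // variantA
```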


lua.define(testData)

// Create your test API
const lua = new Lua();

// Define a test
lua.define({
  name: 'MyTestName',
  buckets: {
    variantA: { weight: 0.5 },
    variantB: { weight: 0.5 },
  },
});

This function defines the tests to be used during bucket assignment. It accepts an object with two keys, name and buckets. Alternatively, you may pass an array of such objects to define multiple tests at once.

The name value is the name of your test. The keys within buckets are your bucket names. Each bucket value is an object with an optional weight key that defaults to 1.

The percent chance a bucket is chosen for any given user is the bucket's weight divided by the sum of all weights for that test. If you have three buckets each with a weight of 2, each bucket's chance is 2/6 ≈ 0.33, i.e. 33%. There is no maximum for the total weight.

Returns: null

Name | Type | Description | Default
:--- | :--- | :--- | :---
data | Object/Array | An object/array of objects containing test and bucket information | null
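The weight-to-probability math above can be sketched as follows. This helper is purely illustrative and not part of Lua's API:

```javascript
// Illustrative helper: each bucket's selection probability is its weight
// (defaulting to 1) divided by the sum of all weights in the test.
function bucketProbabilities(buckets) {
  const total = Object.values(buckets)
    .reduce((sum, bucket) => sum + (bucket.weight || 1), 0);
  const probabilities = {};
  for (const [name, bucket] of Object.entries(buckets)) {
    probabilities[name] = (bucket.weight || 1) / total;
  }
  return probabilities;
}

// Three buckets weighted 2 each: every bucket gets 2/6, about 0.33.
console.log(bucketProbabilities({
  a: { weight: 2 },
  b: { weight: 2 },
  c: { weight: 2 },
}));
```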


lua.assign(testName, bucketName)

const lua = new Lua();
lua.define({
  name: 'new-homepage',
  buckets: {
    variantA: { weight: 0.5 },
    variantB: { weight: 0.5 },
  }
});

// Assign buckets from all tests to the user...
lua.assign();

// or assign bucket from the specified test...
lua.assign('new-homepage');

// or specify the bucket from the specified test...
lua.assign('new-homepage', 'variantB');

// or remove the bucketing assignment from the specified test.
lua.assign('new-homepage', null);

Calling the assign method will assign a bucket for the provided tests to a user and persist them to the store. If a user has already been bucketed, they will not be rebucketed unless a bucketName is explicitly provided.

If no arguments are provided, all tests will have a bucket assigned to the user. If the first argument is a test name, a bucket for that test will be assigned to the user. If a bucketName is provided, the user is set to that specific bucket. If bucketName is null, the user's assignment for that test is removed.

Returns: null

Name | Type | Description | Default
:--- | :--- | :--- | :---
testName (optional) | String | The name of the test to assign a bucket to | null
bucketName (optional) | String | The name of the bucket to assign to a user | null


lua.definitions()

const lua = new Lua();
lua.define({
  name: 'new-homepage',
  buckets: {
    variantA: { weight: 0.5 },
    variantB: { weight: 0.5 },
  }
});

// Retrieve all of the provided tests
const tests = lua.definitions();

This provides the user with all of the tests available.

The returned information will be an array if multiple tests were defined; otherwise it will be the object for the single defined test. It mirrors exactly what was provided to the define method.

Returns: Object|Array


lua.assignments()

const lua = new Lua();
lua.define({
  name: 'new-homepage',
  buckets: {
    variantA: { weight: 1 },
  }
});

// Capture assignments
lua.assign();

// Retrieve all of the bucket assignments for the user
const buckets = lua.assignments();
// buckets['new-homepage'] === 'variantA'

This provides the user with all of the bucket assignments for the current user.

The returned information will be an object whose keys will be test names and values will be the current bucket assigned to the user.

// Example return
{
  'new-homepage': 'variantA',
  'some-test': 'some-bucket',
}

Returns: Object


lua.extendAssignments

Extending assignments can be a useful way to augment your Lua implementation with third party software.

const lua = new Lua();

// Create a function that will modify assignments before you call `assignments`
lua.extendAssignments =
  (assignments) => Object.assign(assignments, { foo: 'bar' })

// Retrieve all of the bucket assignments for the user
const buckets = lua.assignments();
// buckets.foo === 'bar'

A more practical example is integrating with a third-party A/B testing platform like Optimizely (pseudocode for brevity):

lua.extendAssignments = (assignments) => {
  if (window.optimizely) {
    for (const experiment of optimizely.experiments()) {
      assignments[experiment.name] = experiment.bucket;
    }
  }

  return assignments;
};

Returns: Object


Guide/FAQ

CSS Driven Tests

Test logic can be powered solely by CSS. Upon calling assign, if the script is running in the browser, a class per test is added to the body tag, combining the test name and bucket in BEM syntax.

<body class="new-homepage--variantA"> <!-- Could be new-homepage--variantB -->

.new-homepage--variantA {
  /* Write custom styles for the new homepage test */
}

Storing metadata associated with tests

Each bucket may carry additional metadata, whose values can be retrieved by combining the assignments and definitions:

const lua = new Lua();
lua.define({
  name: 'new-homepage',
  buckets: {
    variantA: { weight: 1, foo: 'bar' },
  }
});

lua.assign();

const defs = lua.definitions();
const buckets = lua.assignments();
const bucket = buckets['new-homepage'];
const bar = defs.buckets[bucket].foo; // "bar"

AI-Powered Personalization

Lua includes an optional AI personalization engine that uses OpenAI GPT models to make intelligent content decisions. The AI analyzes UTM parameters, referrer data, device type, and user visit history to select the best content variant or generate new personalized content.

Quick Setup

<!-- Include AI modules after the core script -->
<script src="utm-personalize.js" defer></script>
<script src="storage/weighted-history.js" defer></script>
<script src="prompts/personalization-prompts.js" defer></script>
<script src="ai-personalize.js" defer></script>
// Enable AI personalization
LuaUTMPersonalize.personalize({
    templates: {
        'gaming':      { headline: 'Level Up Your Setup', subheadline: '...', ctaLabel: 'Explore Gaming', ctaLink: '/gaming' },
        'professional': { headline: 'Work Smarter', subheadline: '...', ctaLabel: 'View Collection', ctaLink: '/pro' },
        'default':     { headline: 'Welcome', subheadline: '...', ctaLabel: 'Shop Now', ctaLink: '/shop' }
    },
    enableAI: true,
    aiConfig: {
        apiKey: 'sk-your-openai-key',   // Your OpenAI API key
        model: 'gpt-4o-mini',           // Default model (configurable)
        mode: 'select'                   // 'select' or 'generate'
    }
}).then(function(decision) {
    console.log('AI chose:', decision.intent, 'with confidence:', decision.aiResponse.confidence);
});

Two Modes

Select Mode: AI chooses the best variant from your predefined templates. Predictable, fast, brand-consistent.

Generate Mode: AI creates entirely new headline, subheadline, and CTA text based on user context and your brand guidelines.

Key Features

  • Default model: gpt-4o-mini (configurable to any OpenAI model)
  • Weighted history: Tracks returning users with exponential decay
  • Automatic caching: Caches AI decisions to minimize API calls
  • Graceful fallback: Falls back to standard UTM engine if AI fails
  • Proxy support: Use apiUrl instead of apiKey for production security
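The weighted-history idea (recent visits count more than old ones) can be sketched roughly as below. The decay factor, visit shape, and function name are assumptions for illustration, not Lua's actual internals:

```javascript
// Illustrative only: score past visit intents with exponential decay so
// recent visits outweigh older ones. With decayPerDay = 0.5, a visit's
// influence halves for each day of age.
function scoreIntents(visits, nowMs, decayPerDay = 0.5) {
  const msPerDay = 24 * 60 * 60 * 1000;
  const scores = {};
  for (const visit of visits) {
    const ageDays = (nowMs - visit.timestamp) / msPerDay;
    const weight = Math.pow(decayPerDay, ageDays);
    scores[visit.intent] = (scores[visit.intent] || 0) + weight;
  }
  return scores;
}

const now = Date.now();
const scores = scoreIntents([
  { intent: 'gaming', timestamp: now - 3 * 24 * 60 * 60 * 1000 }, // 3 days ago
  { intent: 'professional', timestamp: now },                     // just now
], now);
// 'professional' (weight 1) now outscores 'gaming' (weight 0.125)
```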

For comprehensive setup instructions, configuration reference, and security best practices, see the AI Personalization Guide.


Push to GitHub

To commit your changes and push to GitHub:

# 1. Stage all changes
git add .

# 2. Commit with a descriptive message
git commit -m "feat: your change description"

# 3. Push to your remote (e.g. origin master or main)
git push origin master
# OR, if your default branch is main:
# git push origin main

Before pushing, it’s a good idea to run the build and tests:

pnpm run build
pnpm test
git add .
git commit -m "your message"
git push origin master

To publish the demo to GitHub Pages (so others can try it online):

pnpm run pages
# OR: npm run pages

This pushes the master branch to the gh-pages branch; the demo will be available at https://<your-username>.github.io/lua_package/ (or your repo’s Pages URL).


License

MIT License

Copyright (c) 2026 Okey Amy

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.