
lineage-code-mini v0.1.4 · 551 downloads

Behavioral adaptation for AI agents with user profiles, prompt adaptation, and fitness tracking.

Lineage Code Mini

Behavioral adaptation layer for AI agents. It helps an agent learn how each user likes to be talked to.

Without it, every conversation starts cold — same tone, same length, same assumptions. With it, the agent adapts: response style, topic selection, timing awareness, and self-correction when its approach stops working.

Three concepts from the Lineage Engine, distilled into a zero-dependency TypeScript library for AI agents.

This package is designed for agent builders and OpenClaw users who want manual control over when adaptation runs. It provides the profile logic, prompt adaptation, and skill workflow. Your host agent or integration decides when to record interactions, compactify history, and apply the resulting context.

Links

  • GitHub: https://github.com/PabloTheThinker/lineage-code-mini
  • npm: https://www.npmjs.com/package/lineage-code-mini
  • ClawHub skill: https://clawhub.ai/pablothethinker/lineage-mini

Install

npm install lineage-code-mini

OpenClaw / ClawHub:

npx clawhub@latest install lineage-mini

ESM:

import { pipeline } from 'lineage-code-mini'

CommonJS:

const { pipeline } = require('lineage-code-mini')

Compatibility

The core library works in Node, Bun, browser bundles, and serverless runtimes. Deno should be possible but is not currently verified. The shipped OpenClaw / ClawHub skill requires Node.

Product Boundary

  • This is a manual-use library and skill, not an automatic conversation-loop hook.
  • OpenClaw users can install the skill and follow its documented workflow.
  • Other AI agent builders can install the npm package and wire it into their own agent loop.
  • The package does not self-attach to an agent. The user or builder chooses when to use it.

Quick Start

import { pipeline } from 'lineage-code-mini'

// Feed it the user's interaction history + your agent's base prompt
const { context, profile } = pipeline(userId, interactions, basePrompt)

// context.prompt is your adapted system message — use it with any LLM
const response = await llm.chat({ system: context.prompt, user: message })

// context.active_patterns tells you which behavioral frames activated
// context.fitness tells you how well your agent is serving this user

How It Works

1. Compactify

Compresses interaction history into a statistical profile. 500 conversations become 8 signals.

import { compactify, DEFAULT_CONFIG } from 'lineage-code-mini'

const profile = compactify(userId, interactions, DEFAULT_CONFIG)

// profile.acceptance_rate    → 0.74
// profile.preferred_style    → "direct"
// profile.strong_topics      → ["code", "deploy", "architecture"]
// profile.weak_topics        → ["small-talk", "planning"]
// profile.productive_hour    → 10
// profile.fitness            → 0.68

The agent doesn't replay history. It reads a profile: "This user prefers direct answers, engages most with code topics, ignores long explanations, and is most responsive at 10am."
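That one-sentence reading of a profile can be sketched as a small formatting helper. This is illustrative only, not a library export; the field names mirror the compactify example above.

```typescript
// Hypothetical helper: collapse profile signals into a one-line
// behavioral summary of the kind described above. Not part of the
// lineage-code-mini API.
interface ProfileSummary {
  preferred_style: string;
  strong_topics: string[];
  productive_hour: number;
}

function summarize(p: ProfileSummary): string {
  return (
    `Prefers ${p.preferred_style} answers; engages with ` +
    `${p.strong_topics.join(", ")}; most responsive at ${p.productive_hour}:00.`
  );
}
```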

2. Patterns

Eleven built-in cognitive frames activate based on the user's profile. Each one injects a behavioral hint into the agent's system prompt.

| Pattern | Activates When | What It Does |
|---|---|---|
| style_direct | User prefers short responses | "Lead with the answer. Skip preamble." |
| style_detailed | User engages with depth | "Include reasoning, examples, context." |
| style_casual | User responds to casual tone | "Be natural. Contractions are fine." |
| style_formal | User expects structure | "Maintain formal, precise language." |
| low_acceptance | <30% acceptance rate | "Keep it SIMPLE. Ask before explaining." |
| high_acceptance | >70% acceptance rate | "Trust is established. Be more expressive." |
| strong_topics | Known engagement topics | Lean toward topics user engages with |
| weak_topics | Known ignored topics | Avoid leading with these |
| productive_hour | User's best time of day | "They're receptive now. Go substantive." |
| off_hours | Outside active hours | "Keep it lighter and shorter." |
| fitness_alarm | Fitness < 0.35 | "Change your approach. This isn't working." |

Add custom patterns:

import { adapt } from 'lineage-code-mini'
import type { CognitivePattern } from 'lineage-code-mini'

const devMode: CognitivePattern = {
  name: "developer_user",
  description: "User is a developer — use code examples",
  condition: (p) => p.strong_topics.includes("code"),
  hint: () => "This user is technical. Use code examples instead of prose when possible.",
  priority: 6,
}

const context = adapt(basePrompt, profile, 3, [devMode])

3. Adapt

Takes the agent's base prompt and the user's profile. Returns an adapted prompt with the right behavioral frames injected.

import { adapt } from 'lineage-code-mini'

const context = adapt(basePrompt, profile)

// context.prompt includes:
// BEHAVIORAL ADAPTATION (learned from 47 interactions, 74% acceptance rate):
// - This user prefers SHORT, DIRECT responses. Lead with the answer.
// - Topics they engage with most: code, deploy, architecture.
// - Topics they tend to ignore: small-talk, planning.
// - They usually engage for 12 minutes. Size responses accordingly.

OpenClaw Integration

Live skill listing: https://clawhub.ai/pablothethinker/lineage-mini

The OpenClaw skill is intended to be installed and used explicitly by the host agent. It provides commands, storage layout, and adaptation logic, but it does not silently wire itself into every conversation.

Generate a section for your agent's SOUL.md or USER.md:

import { asSoulPatch } from 'lineage-code-mini'

const patch = asSoulPatch(profile)
// Append to SOUL.md or USER.md

Output:

## Behavioral Adaptation (Lineage Code Mini)

Based on 47 interactions (74% acceptance).

**Response style:** Keep responses SHORT and DIRECT. Lead with the answer.
**Engages with:** code, deploy, architecture, review
**Ignores:** small-talk, planning, weather

Inject into a skill's runtime context:

import { asSkillContext } from 'lineage-code-mini'

const ctx = asSkillContext(profile)
// Returns a serializable object for skill injection

Recording Interactions

Build interaction objects to feed back into compactify():

import { recordInteraction } from 'lineage-code-mini'

const interaction = recordInteraction(
  "msg-123",           // id
  "how do I deploy?",  // user input
  "Run: git push",     // agent output
  true,                // accepted (user engaged positively)
  {
    channel: "telegram",
    engagement_seconds: 45,
    tags: ["deploy", "help"],
  }
)

// Store these, then pass the array to compactify()

Fitness Score

The fitness score (0–1) tracks how well the agent is serving each user. It's a weighted blend of overall acceptance rate (40%) and the last 10 interactions (60%).

When fitness drops below 0.35, the fitness_alarm pattern fires and injects: "CRITICAL: Recent responses have not been well-received. Change your approach."

The agent self-corrects without anyone intervening.
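The blend described above can be sketched as follows. This is a hypothetical re-implementation for clarity; the library computes the score internally from stored interactions.

```typescript
// Sketch of the described fitness blend: 40% overall acceptance rate,
// 60% acceptance rate over the last 10 interactions. Hypothetical
// helper, not the library's actual implementation.
function fitnessScore(accepted: boolean[]): number {
  if (accepted.length === 0) return 0;
  const rate = (xs: boolean[]) => xs.filter(Boolean).length / xs.length;
  return 0.4 * rate(accepted) + 0.6 * rate(accepted.slice(-10));
}
```

A user who accepted the first 10 responses but ignored the last 10 scores 0.4 × 0.5 + 0.6 × 0 = 0.2, which is below the 0.35 alarm threshold.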

API Reference

Core Functions

| Function | Input | Output |
|---|---|---|
| compactify(userId, interactions, config) | Raw history | UserProfile |
| adapt(basePrompt, profile, minInteractions?, extraPatterns?) | Prompt + profile | AdaptationContext |
| pipeline(userId, interactions, basePrompt, config?) | Everything at once | { context, profile } |
| route(profile, extraPatterns?) | Profile | string[] (hints) |

OpenClaw Helpers

| Function | Input | Output |
|---|---|---|
| asSoulPatch(profile) | Profile | Markdown for SOUL.md |
| asSkillContext(profile) | Profile | Serializable context object |
| recordInteraction(id, input, output, accepted, options?) | Event data | Interaction |

Configuration

import { DEFAULT_CONFIG } from 'lineage-code-mini'

// DEFAULT_CONFIG:
// {
//   min_interactions: 3,        // personalization starts after 3 interactions
//   consolidation_window: 100,  // analyze last 100 interactions
//   fitness_alarm: 0.35,        // self-correct below this threshold
// }
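Overriding a threshold is then an object merge. A hedged sketch, assuming the config passed to compactify() has the shape shown in the DEFAULT_CONFIG comment above (the local defaults object below stands in for the real export):

```typescript
// Config shape inferred from the DEFAULT_CONFIG comment above; treat
// the field names as assumptions, not the library's type definitions.
interface LineageConfig {
  min_interactions: number;
  consolidation_window: number;
  fitness_alarm: number;
}

const defaults: LineageConfig = {
  min_interactions: 3,
  consolidation_window: 100,
  fitness_alarm: 0.35,
};

// A stricter variant: self-correct earlier, analyze a shorter window.
const strict: LineageConfig = {
  ...defaults,
  consolidation_window: 50,
  fitness_alarm: 0.5,
};
```

With the real package you would spread DEFAULT_CONFIG instead of the local defaults object and pass the result as the third argument to compactify().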

Design Principles

  • Zero dependencies. Pure TypeScript, with runtime support as described under Compatibility above.
  • Framework-agnostic. Not tied to any LLM provider, agent framework, or database.
  • Privacy-first. Profiles are computed locally. No data leaves your system.
  • Invisible to users. They don't configure anything. They just notice the agent gets better.

License

MIT — Vektra Technologies