
open-bottlenose

v0.1.0

Published

Deterministic context and memory governor for agent frameworks (SQLite + optional Chroma).

Readme

open-bottlenose

A deterministic context and token governor for AI agents.

Most agent frameworks have a hidden assumption:

If we send more context, the agent will behave better.

In practice, the opposite happens.

Agents do not usually fail because the model is weak. They fail because the prompt becomes unstable.

When the context grows, frameworks silently:

  • drop instructions
  • truncate messages
  • overflow system prompts
  • forget earlier facts
  • repeat failed tool calls

This is why agents work for 3 steps… and then collapse.

open-bottlenose prevents that.

It sits between your agent and your LLM and controls what the model is allowed to see.

Agent → Bottlenose → LLM → Agent

Your agent still thinks freely.

Your model only receives safe, bounded, intentional context.
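Conceptually, that middle position is just a wrapper: the agent calls a governed function instead of the raw model. A minimal sketch (the names `wrap` and `govern` are illustrative, not the package's actual API):

```typescript
type Msg = { role: string; content: string };
type LLM = (msgs: Msg[]) => string;

// wrap() returns an LLM-shaped function whose input always passes through
// a governor first; the agent never talks to the raw model directly.
function wrap(llm: LLM, govern: (msgs: Msg[]) => Msg[]): LLM {
  return (msgs) => llm(govern(msgs));
}
```

The agent code stays unchanged; only the function it holds a reference to is swapped.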


What problem this actually solves

If you have built an agent, you have likely seen at least one of these:

  • The agent forgets system instructions
  • It repeats the same broken action
  • Tool output floods the prompt
  • Retrieval dumps 20k tokens into a 16k model
  • It becomes random after long runs
  • Costs spike unpredictably
  • Fixes stop working after several turns

These are not reasoning failures.

They are token failures.

Modern frameworks rarely manage token budgets. They just keep appending messages until the model breaks.

open-bottlenose introduces a hard rule:

The model never receives uncontrolled context.


Token Control (the core feature)

Before every model call, Bottlenose computes a deterministic budget.

Example:

  Model context:    16,000 tokens

  System rules:      1,000 reserved
  Model response:    1,200 reserved
  Tool outputs:      2,000 reserved
  Memory:            2,500 reserved

  Available for user + retrieval: 9,300

Anything beyond that is intentionally excluded.

Not truncated randomly. Not silently removed.

Deliberately governed.

This alone fixes most unstable agent behavior.
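The arithmetic above can be sketched as a pure function — a hypothetical shape, not the package's real API, but it shows why the budget is deterministic: same window and reservations in, same number out.

```typescript
interface Reservations {
  systemRules: number;
  response: number;
  toolOutputs: number;
  memory: number;
}

// Deterministic budget: context window minus all fixed reservations,
// clamped at zero so over-reservation can never go negative.
function availableBudget(contextWindow: number, r: Reservations): number {
  const reserved = r.systemRules + r.response + r.toolOutputs + r.memory;
  return Math.max(0, contextWindow - reserved);
}

// The 16,000-token example from above:
const budget = availableBudget(16_000, {
  systemRules: 1_000,
  response: 1_200,
  toolOutputs: 2_000,
  memory: 2_500,
});
// budget === 9_300 tokens left for user input + retrieval
```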


Why this changes agents dramatically

Without token governance:

  • important rules fall out of context
  • retries multiply
  • tools dominate reasoning
  • memory corrupts the prompt
  • behavior becomes inconsistent

With token governance:

  • system rules persist
  • loops are reduced
  • memory stabilizes
  • tool usage becomes reliable
  • long-running agents remain predictable

Bottlenose is effectively a context operating system for agents.

It does not make the model smarter.

It prevents the model from being sabotaged by its own inputs.


Memory (secondary feature)

Memory only works if context is stable.

Most agent memory systems fail because:

more memory → larger prompt → truncation → memory disappears

Bottlenose solves this by allocating fixed context space for memory and filtering it before injection.

It stores operational knowledge, not conversations.

Examples of stored knowledge:

  • working commands
  • environment constraints
  • root causes
  • stable fixes
  • project rules

The 3-Layer Memory System

Layer 0 — Session Memory

Temporary scratch memory for the current task. Prevents repeating the same mistake within one run.

Layer 1 — Working Memory (SQLite)

Persistent operational facts.

Created automatically:

./bottlenose/memory.sqlite

This is the primary intelligence layer.

Layer 2 — Long-Term Memory (Optional)

Historical knowledge search.

Important rule:

Long-term memory never injects directly. It proposes candidates that Bottlenose compresses into safe context snippets.
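The three layers can be sketched like this. All names are assumptions for illustration, not the package's actual API, and a `Map` stands in for the SQLite store so the example is self-contained:

```typescript
interface MemoryRecord { key: string; fact: string; }

// Layer 0: scratch memory for the current run; never persisted.
class SessionMemory {
  private attempts = new Set<string>();
  markTried(action: string) { this.attempts.add(action); }
  alreadyTried(action: string) { return this.attempts.has(action); }
}

// Layer 1: persistent operational facts (SQLite-backed in the real
// package; an in-memory Map stands in here).
class WorkingMemory {
  private store = new Map<string, string>();
  remember(r: MemoryRecord) { this.store.set(r.key, r.fact); }
  recall(key: string) { return this.store.get(key); }
}

// Layer 2 never injects directly: it only proposes a bounded set of
// candidates, which the governor compresses before they reach the prompt.
function proposeCandidates(query: string, archive: MemoryRecord[]): MemoryRecord[] {
  return archive.filter((r) => r.fact.includes(query)).slice(0, 3);
}
```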


What Bottlenose actually does each turn

Preflight

Before the LLM:

  • selects relevant memory
  • filters retrieval
  • removes junk context
  • calculates token budgets
  • builds a stable prompt

Postflight

After the LLM:

  • extracts durable knowledge
  • prevents memory pollution
  • writes auditable memory records

The model never sees raw agent context again.
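The preflight/postflight loop above can be approximated in a few lines. This is a simplified sketch under stated assumptions: character counts stand in for token counts, and "durable knowledge" is marked with a hypothetical `FACT:` prefix — the real package's selection and extraction logic is more involved.

```typescript
type Message = { role: "system" | "user" | "tool"; content: string };

// Preflight: system rules always survive; remaining messages are admitted
// newest-first until the budget is spent. Anything else is deliberately
// excluded rather than silently truncated.
function preflight(messages: Message[], maxChars: number): Message[] {
  const system = messages.filter((m) => m.role === "system");
  let used = system.reduce((n, m) => n + m.content.length, 0);
  const rest: Message[] = [];
  for (const m of [...messages].reverse()) {
    if (m.role === "system") continue;
    if (used + m.content.length > maxChars) break; // budget exhausted
    rest.unshift(m); // keep chronological order
    used += m.content.length;
  }
  return [...system, ...rest];
}

// Postflight: extract only durable knowledge from the model reply so raw
// conversation never pollutes memory.
function postflight(reply: string): string[] {
  return reply.split("\n").filter((l) => l.startsWith("FACT:"));
}
```

Note the asymmetry: preflight decides what the model may see, postflight decides what the system may keep.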


Installation