@ronderins/intent-cache
A semantic intent caching library for JS/TS
IntentCache — Semantic Intent Caching Library
Version: 1.0.0
Author: ronderins
Overview
IntentCache is a lightweight, high-performance semantic cache designed for developers who need fast, fuzzy matching of intents.
Whether you’re building chatbots, AI agents, API gateways, or caching LLM results, IntentCache helps you store intents and retrieve the closest match, even if the query doesn’t exactly match the stored key.
Think of it as a “smart cache” that understands your intent, not just your exact strings.
Features
- ✅ Semantic similarity — fuzzy match queries to stored intents.
- ✅ Synonym support — fetch=get, info=profile, etc.
- ✅ Weighted tokens — important words like user or profile matter more.
- ✅ Debug mode — see the matched intent, score, and value.
- ✅ Bun + Node + ESM support — runs anywhere modern JS runs.
- ✅ TypeScript first — strongly typed with zero runtime overhead.
Installation
Node / npm:
npm install @ronderins/intent-cache
Or with Bun:
bun add @ronderins/intent-cache
Quick Example:
// Import the cache class (named export assumed, matching the examples below)
import { IntentCache } from "@ronderins/intent-cache"

// Create a cache with a lower threshold for fuzzy matches
const cache = new IntentCache({ threshold: 0.3 })
// Add intents
cache.set("get user profile", { name: "Mika" })
cache.set("retrieve user info", { name: "Mika", role: "admin" })
// Normal retrieval
console.log(cache.get("fetch user data"))
// Output: { name: "Mika" }
// Debug retrieval
console.log(cache.get("fetch user data", { debug: true }))
// Output:
// {
//   value: { name: "Mika" },
//   score: 0.83,
//   matchedIntent: "get user profile"
// }
How It Works:
Tokenization: Every intent is split into tokens ("get user profile" → ["get","user","profile"]).
Synonym normalization: Words like "fetch" and "retrieve" are normalized to "get" automatically.
Weighted similarity: Important tokens like "user" or "profile" get higher weight, so queries prioritize key information.
Threshold filtering: Only results above the configured threshold are returned. This prevents accidental matches.
Debug mode: Optionally returns { value, score, matchedIntent } so you can inspect why a match was selected. Perfect for logging, testing, and tuning your cache.
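The package’s internals aren’t published on this page, but the steps above can be sketched roughly as follows. This is an illustrative sketch only: the synonym table, token weights, and scoring formula are assumptions for the example, not the library’s actual implementation.

```typescript
// Illustrative sketch — synonym table, weights, and formula are assumptions.
const SYNONYMS: Record<string, string> = { fetch: "get", retrieve: "get", load: "get", info: "profile" };
const WEIGHTS: Record<string, number> = { user: 2, profile: 2 }; // unlisted tokens weigh 1

// Step 1: tokenization — split an intent into lowercase tokens
function tokenize(intent: string): string[] {
  return intent.toLowerCase().split(/\s+/).filter(Boolean);
}

// Step 2: synonym normalization ("fetch" -> "get", "info" -> "profile")
function normalize(tokens: string[]): string[] {
  return tokens.map((t) => SYNONYMS[t] ?? t);
}

// Step 3: weighted similarity — shared token weight over total query weight
function similarity(query: string, stored: string): number {
  const q = new Set(normalize(tokenize(query)));
  const s = new Set(normalize(tokenize(stored)));
  const weight = (t: string) => WEIGHTS[t] ?? 1;
  let shared = 0;
  let total = 0;
  for (const t of q) {
    total += weight(t);
    if (s.has(t)) shared += weight(t);
  }
  return total === 0 ? 0 : shared / total;
}

// Step 4: threshold filtering — return the best match only if it clears the bar
function bestMatch(query: string, intents: string[], threshold: number): string | undefined {
  let best: string | undefined;
  let bestScore = 0;
  for (const intent of intents) {
    const score = similarity(query, intent);
    if (score > bestScore) {
      bestScore = score;
      best = intent;
    }
  }
  return bestScore >= threshold ? best : undefined;
}
```

With these toy tables, `similarity("fetch user data", "get user profile")` normalizes `fetch` to `get`, matches `get` (weight 1) and `user` (weight 2), misses `data`, and scores 3/4 = 0.75 — above a 0.3 threshold, below a 0.8 one.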
Advanced Example:
// Named export assumed for the import
import { IntentCache } from "@ronderins/intent-cache"

const cache = new IntentCache({ threshold: 0.6 })
cache.set("get user profile", { name: "Mika" })
cache.set("retrieve user info", { name: "Mika", role: "admin" })
cache.set("load account data", { accountId: 1234 })
// Query with fuzzy matching
console.log(cache.get("fetch profile", { debug: true }))
// Output:
// {
//   value: { name: "Mika", role: "admin" },
//   score: 0.83,
//   matchedIntent: "retrieve user info"
// }
// Query that doesn’t match anything
console.log(cache.get("unknown intent", { debug: true }))
// Output: { value: undefined, score: 0, matchedIntent: undefined }

When to Use IntentCache
Chatbots / conversational AI
API request caching
Caching LLM / AI agent responses
Edge computing (Cloudflare Workers / Bun)
Note: It’s especially useful when exact string matching is too brittle and you want your cache to understand the meaning of what users are asking.
Developer Notes
Written in TypeScript, fully typed.
Compatible with Node.js and Bun.
Uses tokenization, synonyms, and weighted scoring for fast, semantic matching.
Thresholds can be tuned for strictness vs flexibility.
Debug mode helps you tune the cache for your specific application.
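One way to turn debug output into a concrete threshold (a hypothetical helper, not part of the package): run a set of representative queries with `{ debug: true }`, split the reported scores into correct and incorrect matches, and pick a cutoff between the two groups:

```typescript
// Hypothetical tuning helper: given debug scores from queries that matched
// correctly ("good") and ones that matched wrongly ("bad"), choose a
// threshold halfway between the worst offender and the weakest good match.
function pickThreshold(good: number[], bad: number[]): number {
  const maxBad = bad.length ? Math.max(...bad) : 0;
  const minGood = good.length ? Math.min(...good) : 1;
  return (maxBad + minGood) / 2;
}
```

For example, `pickThreshold([0.83, 0.9], [0.2, 0.4])` returns ≈ 0.615: strict enough to reject the 0.4 false match, loose enough to keep the 0.83 true one.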
Next Steps / Suggestions
- Add TTL (time-to-live) support for cache entries
- Extend synonyms dynamically
- Integrate with AI embeddings for more advanced semantic matching
- Add automatic persistence (optional JSON/DB storage)
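The TTL suggestion could be prototyped as a thin wrapper around any get/set cache. A minimal sketch under assumptions: IntentCache has no TTL option today, the `Cache` interface below is invented for illustration, and a plain `Map` stands in as the inner store (the injectable clock exists only to make the sketch testable).

```typescript
// Minimal TTL wrapper sketch — the Cache interface and clock injection are
// illustrative, not part of the intent-cache API.
interface Cache<V> {
  set(key: string, value: V): unknown;
  get(key: string): V | undefined;
}

class TTLCache<V> {
  private expiry = new Map<string, number>();

  constructor(
    private inner: Cache<V>,
    private ttlMs: number,
    private now: () => number = Date.now, // injectable clock for testing
  ) {}

  set(key: string, value: V): void {
    this.inner.set(key, value);
    this.expiry.set(key, this.now() + this.ttlMs);
  }

  get(key: string): V | undefined {
    const exp = this.expiry.get(key);
    if (exp !== undefined && this.now() > exp) {
      this.expiry.delete(key); // entry expired — treat as a miss
      return undefined;
    }
    return this.inner.get(key);
  }
}
```

Usage: `new TTLCache(new Map<string, number>(), 60_000)` keeps entries for one minute; swapping the `Map` for a real semantic cache would add expiry on top of fuzzy matching.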
Contributing
Contributions are welcome! Fork, test, and submit PRs. Feel free to add more synonyms, token weights, or optimizations.
