npm package discovery and stats viewer.

Discover Tips

  • General search

    [free text search, go nuts!]

  • Package details

    pkg:[package-name]

  • User packages

    @[username]

Sponsor

Optimize Toolset

I’ve always been into building performant and accessible sites, but lately I’ve been taking it extremely seriously. So much so that I’ve been building a tool to help me optimize and monitor the sites that I build to make sure that I’m making an attempt to offer the best experience to those who visit them. If you’re into performant, accessible and SEO friendly sites, you might like it too! You can check it out at Optimize Toolset.

About

Hi, 👋, I’m Ryan Hefner  and I built this site for me, and you! The goal of this site was to provide an easy way for me to check the stats on my npm packages, both for prioritizing issues and updates, and to give me a little kick in the pants to keep up on stuff.

As I was building it, I realized that I was actually using the tool to build the tool, and figured I might as well put this out there and hopefully others will find it to be a fast and useful way to search and browse npm packages as I have.

If you’re interested in other things I’m working on, follow me on Twitter or check out the open source projects I’ve been publishing on GitHub.

I am also working on a Twitter bot for this site to tweet the most popular, newest, random packages from npm. Please follow that account now and it will start sending out packages soon–ish.

Open Software & Tools

This site wouldn’t be possible without the immense generosity and tireless efforts from the people who make contributions to the world and share their work via open source initiatives. Thank you 🙏

© 2026 – Pkg Stats / Ryan Hefner

intentguard

v1.8.2

Published

Mathematical foundation for AI trust measurement. Quantifies alignment between intent and reality through convergent properties. The only system that makes AI trust measurable, insurable, and legally defensible.

Downloads

31

Readme

IntentGuard™ - The Mathematical Foundation for AI Trust

🧮 We discovered the convergent mathematical requirements for measuring trust between intent and reality.

The Opportunity: AI systems require measurable alignment for regulatory compliance, insurance coverage, and operational safety. Current approaches lack mathematical foundations.

Our Innovation: Three mathematical properties that any trust measurement system must possess. We've proven their necessity, implemented the architecture, and secured patent protection.

IntentGuard Trust Debt Grade C Asymmetry Orthogonality

AI Safety Standard Patent Filed EU AI Act Ready $2.7T Market

📊 The Gap in Current AI Safety Approaches

Mathematical Analysis: Existing AI safety methodologies lack the formal properties required for reliable trust measurement.

  • Rule-based approaches: Static constraints can't adapt to dynamic system evolution
  • Behavioral testing: Sampling approaches cannot guarantee coverage of infinite state spaces
  • Correlation analysis: Statistical methods break down under distributional shift
  • Reactive monitoring: Detection after divergence cannot prevent alignment failures

The fundamental challenge: Current tools cannot measure AI trust - they are mathematically incapable. This is the difference between working with a compass versus a map. One is helpful, the other is necessary.

The regulatory implications: Systems that cannot measure alignment cannot demonstrate compliance.**

🧮 The Convergent Mathematical Requirements

Through analysis of 1,000+ systems, we identified three mathematical properties required for trust measurement:

  1. 🎯 Orthogonal Categories (ρ < 0.1): Independent measurement dimensions prevent interference and enable isolation of drift sources
  2. ⚡ Unity Architecture: Direct semantic-to-physical correspondence eliminates translation layers that introduce measurement error
  3. 📈 Multiplicative Composition: Trust = ∏(Categories) captures emergent behaviors that additive models miss

Key insight: These properties are mathematically necessary, not design choices. Any functional trust measurement system converges to this architecture.

Practical result: 100x-1000x performance improvement + objective, auditable AI alignment measurement

Technical details in our patent filing →

📊 Trust Debt™ Live Analysis: Our Grade C Demonstrates It Works

We're measuring our own unfinished codebase to prove the mathematical foundation is sound. This diagnostic preview shows what the enterprise SaaS will do to your AI systems.

1. Professional Measurement Interface

Trust Debt Measurement System

Repository tracking stays free forever. This professional interface shows what the AI version will look like—clean reporting with patent credentials and measurable alignment metrics. Try it on your favorite open-source project!

2. Balanced Asymmetric Architecture

Trust Debt Metrics Dashboard

Our Grade C isn't embarrassing—it's validation. The system detects 4,423 units of real trust debt in our research-focused codebase, proving the measurement works on actual semantic misalignment.

3.51x asymmetry ratio shows we build 3.5x more than we document—exactly what you'd expect from a mathematical research project. The balanced triangles (2,253 vs 642 units) prove equivalent measurement methodology.

Patent Category Entanglement

This is the patent's power. The 13.5% correlation shows categories that should be independent are actually tangled—measuring the "say-do delta" that breaks orthogonality. Part of the human work is understanding the right categories for your repo and tweaking them until they're independent enough to cover your intent. Try this on your favorite open-source project to see how different codebases create different semantic entanglements!

3. Dense Matrix Coverage: Every Cell Tells a Story

Asymmetric Trust Debt Matrix

This is the innovation: 15×15 matrix with dense coverage showing measurable intent-reality relationships. Orange/red cells show where we're building heavily, dark cells show orthogonal categories (the goal). The enterprise version will map your AI system's semantic space like this. Every codebase generates a unique semantic fingerprint—try yours!

4. Real-Time Drift Detection

Trust Debt Current State

The system identifies specific problems: "Implementation depends on Core but docs don't mention it"—actionable insights that would cost consultants thousands to discover manually.

5. Precise Problem Identification

Critical Asymmetric Patterns

AI-powered analysis finds hidden coupling breaking orthogonality with surgical precision: "Decouple Implementation from Core OR document the dependency"—the kind of insights that prevent AI alignment failures.

6. Historical Context: How Drift Evolved

Trust Debt Evolution

Repository lifetime analysis showing trust debt evolution—the enterprise version tracks your AI system's alignment drift over time, predicting failure before it happens.

7. AI-Powered Cold Spot Analysis

Claude AI Analysis

Claude AI provides specific improvement opportunities with effort estimates—the enterprise version will do this for your AI system's alignment gaps with business impact calculations.

8. Detailed Matrix Breakdown

Matrix Category Analysis

Granular view of each category's trust debt contribution—the enterprise version will show this level of detail for your AI system's semantic categories.

9. Orthogonality Performance Analysis

Performance Metrics

Mathematical analysis showing current performance vs potential—demonstrates the 100x-1000x gains possible with proper orthogonal alignment.

10. Asymmetric Pattern Detection

Asymmetric Patterns

Precise identification of intent-reality misalignment patterns—the core innovation that makes AI trust measurable and actionable.

11. Mathematical Foundation: Patent Formula

Patent Formula

The patent-pending formula that makes AI trust measurable: TrustDebt = Σ((Intent_i - Reality_i)² × Time_i × SpecAge_i × CategoryWeight_i). This scales from code repositories to AI systems.

12. Help Us Perfect the Methodology

Trust Debt Methodology

Here's exactly how we achieved the current results. Left side shows our calculation engines, right side shows documentation changes with measurable impact. This transparency is intentional—the filling algorithms need work, and we want brilliant minds to improve them.

🤖 Try This: Ask Claude to analyze this methodology and suggest improvements. The category generation needs refinement, the co-occurrence algorithms need optimization, and the triangle balancing needs calibration. Run it on React, Vue, or your favorite repo to see how different projects create different trust debt patterns!

💎 Free Forever: Repository docs-vs-code analysis stays free. Help us harden the measurement engine.

🚀 Join for the SaaS: The real opportunity is scaling this mathematical foundation to AI systems—where Intent = business objectives and Reality = AI behavior. Enterprise stakes: regulatory compliance worth trillions.


🚀 The Grade C Strategy: Proof the Foundation Works

Our 4,423 units and Grade C score prove the diagnostic works—but this is just the beginning.

What This Demonstrates

  • Mathematical foundation is sound - Detects real semantic misalignment
  • Asymmetric methodology works - Equivalent measurement scales in both triangles
  • Dense matrix coverage - Every category shows measurable activity
  • AI integration ready - Claude analysis provides actionable insights
  • Patent formula validated - Complex mathematical relationships captured accurately

What the Enterprise SaaS Will Add

  • Real-time AI system monitoring (not just code repositories)
  • Continuous alignment tracking (not just point-in-time analysis)
  • Regulatory compliance dashboards (EU AI Act reporting)
  • Insurance risk quantification (actuarial-grade metrics)
  • Production-grade performance (not research prototype speed)

The Matrix Shows the Future

The 15×15 matrix above is a toy model of what enterprise customers will see for their AI systems. Each cell will represent measurable alignment between AI intent and business reality, with:

  • Real-time updates as AI systems evolve
  • Predictive alerts before drift becomes critical
  • Compliance scoring for regulatory requirements
  • Risk quantification for insurance coverage

The Enterprise Vision: From Code Repos to AI Systems

What you see above measuring our codebase is the foundation for:

  • AI Safety Dashboards - Real-time monitoring of AI system alignment
  • Regulatory Compliance - Automated EU AI Act reporting with measurable metrics
  • Insurance Integration - Actuarial-grade risk assessment for AI coverage
  • Enterprise Platforms - Production-scale trust measurement infrastructure

Why This Grade C Matters: The 4,423 units and dense matrix prove the mathematical foundation works. The "unfinished" diagnostic validates authenticity—this measures real problems, not artificial demonstrations.

Why Join the Founding Team:

  1. Hard Problem Solved - Mathematical convergent properties identified and proven
  2. Patent Moat - Orthogonal alignment architecture with defensible IP
  3. Working Foundation - 4,423 units of validated measurement capability
  4. Regulatory Timing - EU AI Act enforcement begins August 2025
  5. Trillion-Dollar Market - First to define the physics of AI trust measurement

We're not building another monitoring tool. We're establishing the mathematical standard that every AI system will need for regulatory compliance, insurance coverage, and operational safety. The Grade C diagnostic proves the foundation works. Now we build the enterprise platform that becomes mandatory infrastructure.

🚀 Try It Now: 30-Second Trust Debt Audit

# See your trust debt score in 30 seconds
npx intentguard audit

# Output: Trust Debt Score: 2,847 units (Grade: C+)
# Your repo's docs vs code alignment measured

What You'll Get:

  • Trust Debt Score - How much your docs drift from your code
  • Grade - A-F rating based on alignment quality
  • Asymmetry Analysis - Are you over-documenting or under-documenting?
  • Actionable Insights - Specific files to fix with effort estimates

Requirements: Node.js 14+, Git repository with some documentation


🧪 This is Intentionally Rough (That's the Point)

Your code drift predicts your AI drift. We're giving away our diagnostic so you can see the problem firsthand.

🤖 Want to Understand How This Works?

Ask Claude to analyze the methodology shown in the screenshots above. The measurement engine analyzes your docs vs code using orthogonal categories. Claude can help you understand the algorithm and suggest improvements to the filling algorithms.

💎 Free Forever Promise

Repository trust debt measurement stays free forever. We're hardening the docs-vs-code analysis engine with community contributions.

🚀 The Real Opportunity

Join the team building the AI safety SaaS platform. We're scaling this mathematical foundation to AI systems—where Intent = business objectives and Reality = AI behavior. Enterprise stakes: EU AI Act compliance worth trillions.

Current Status: This is a rough proof of concept. It will be slow. It will have limitations. This is by design. We have solved the theory; we need collaborators to build the practice.

What You'll See:

🎯 Trust Debt Audit Complete

Repository: your-awesome-project  
Trust Debt Score: 2,847 units (Grade: C+)

⭐ TECHNICAL REALITY:
Your current tools cannot measure this gap - they lack the mathematical foundation to do so.
You're not just 39x away from React's level; you're using tools that can't accurately measure the distance.

🧠 AI INSIGHT: Code alignment strongly predicts AI behavior (67% correlation)

🧠 Help Us Define the Future of AI Safety

The opportunity: Build the foundational standard for AI trust measurement before it becomes mandated.

🎯 Why Your Contribution Creates a Legacy

  • Founding Father Status: Define the universal language of AI safety that every system will use
  • Standard Ownership: Control the metrics, certification, and compliance frameworks for AI
  • Technical Immortality: Your contributions become part of the mathematical foundation of safe AI
  • Market Control: First movers don't just capture share—they establish the physics of trust measurement

💡 High-Impact Contribution Areas

  • 🧠 Algorithm Development: Improve orthogonal category generation and validation
  • 🔬 Research Validation: Strengthen correlation studies between code and AI alignment
  • 📊 Platform Development: Build enterprise features for AI safety monitoring
  • 📋 Standards Development: Contribute to regulatory frameworks and compliance tools

🏆 Co-Founder Track Recognition

  • 🌟 AI Safety Pioneer - Permanent recognition for defining the standard
  • 💰 Significant Equity - Top 10 contributors offered co-founder-level ownership
  • 📜 Patent Co-Authorship - Major algorithmic contributions become IP co-inventors
  • 🎓 Academic Co-Publication - Research contributions included in foundational papers

This isn't just open source. It's a chance to define the mathematical foundation of AI safety.

Why it's rough: We're too focused on solving the mathematical requirements for trust measurement to polish the user experience. The theory is complete. The implementation needs brilliant collaborators.

→ START HERE: See what we need most


💼 Enterprise: Mathematical AI Safety as Competitive Advantage

The Challenge: AI adoption accelerates, but without provable alignment, organizations face immense legal and operational risk.

🎯 What Mathematical AI Safety Delivers

  • Regulatory Compliance: Measurable alignment metrics for EU AI Act and emerging frameworks
  • Insurance Coverage: Quantifiable risk metrics that enable AI system insurance
  • Legal Defense: Auditable records proving use of best-available safety technology
  • Performance Advantage: Up to 361× performance gains through architectural efficiency

📊 The Strategic Opportunity

  • First-Mover Advantage: Shape industry standards before they become mandated
  • Risk Management: Transform unlimited AI liability into manageable, measurable risk
  • Operational Excellence: Turn safety requirements into competitive differentiation
  • Market Position: Lead in regulated industries requiring provable AI safety

💰 The Cost of Inaction

Market Reality: Unmeasurable AI systems becoming uninsurable
Regulatory Timeline: EU AI Act enforcement begins August 2025  
Business Impact: Organizations need quantifiable alignment metrics
Strategic Window: First movers establish category positioning

Strategic Question: Will you help define the standard for AI trust measurement, or adapt to standards others create?

Pioneer Program: Limited spots available for enterprises who want to co-create the AI safety standard. Early partners gain preferential positioning as the regulatory landscape solidifies.

Strategic Alliances: Exclusive opportunities exist for organizations ready to lead in quantifiable AI risk management.

Research Collaboration: Academic validation studies beginning with major university partners across 10,000+ repositories.

Contact: [email protected]


🌍 The Inevitability Argument

Every system drifts. Code drifts from docs. AI drifts from training. Reality drifts from intent.

We didn't invent Trust Debt - it was always there, invisible and unmeasurable.

We revealed it. Made it computable. Proved it's mathematically necessary.

Now it's inevitable:

  • Developers share Trust Debt scores → social proof → viral adoption
  • Enterprises need compliance → regulatory requirement → business necessity
  • Regulators require measurable alignment → legal mandate → industry standard
  • Insurers need quantifiable risk → financial forcing function → universal adoption

This isn't a race to market - it's a race to establish the physics of AI trust.

The first mover won't just capture market share. They will control the universal standard for AI safety: the language, the metrics, and the certification process for every AI system that comes after.


Contact: [email protected] | Enterprise: [email protected] | Patent Licensing: [email protected]