lcm-audit
LCM — Verifiable Decision Layer for AI Systems
AI decisions are invisible. LCM makes them provable.
Deterministic, replayable, and cryptographically sealed proof for every policy-bound AI decision.
Install
npm install lcm-audit
Quick Start
import { LCM } from "lcm-audit";
// Initialize the SDK with your API key.
LCM.init({ apiKey: "your-api-key" });

// Record a policy-bound decision and seal it into a proof artifact.
const input = { action: "refund", orderId: "ORD-938472", amount: 129.00, currency: "USD", reason: "defective_item" };
const decision = LCM.decide(input);
const sealed = await LCM.seal(decision);
console.log("\n🔍 Open Proof:");
console.log(sealed.traceUrl);
console.log(sealed.derivation_root);

// Verify the sealed proof, then replay the decision from its audit trail.
const verified = LCM.verify(sealed);
console.log(verified.valid);
const replayed = await LCM.replay(sealed.audit_id);
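The snippet records a refund decision, seals it, verifies the seal, and replays it from its audit_id; the printed traceUrl and derivation_root identify the proof artifact, which can be inspected in the explorer linked below.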
Get Pro Access
Restore trust in AI decisions.
Unlock full audit history and verifiable proof.
https://www.lcm3.com/apply.html
Why LCM
AI systems can act.
But when they approve a refund, cancel an order, authorize a payment, or make any other consequential decision:
Can you prove exactly why it happened?
If not:
- accountability becomes unclear
- audits become difficult
- disputes become expensive
- trust becomes fragile
LCM solves this. It turns decisions into proof artifacts.
How It Works
Decision → Seal → Verify → Replay → Detect
- Seal → cryptographic commitment
- Verify → integrity check
- Replay → decision reconstruction
- Detect → tamper detection
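To make "cryptographic commitment" concrete, here is a generic sketch (not lcm-audit's actual sealing scheme) that commits to a decision by hashing its serialized JSON; any later change to the payload produces a different digest:

import { createHash } from "node:crypto";

// Illustrative only: commit to a decision by hashing its serialized form.
const commit = (decision) =>
  createHash("sha256").update(JSON.stringify(decision)).digest("hex");

const original = { action: "refund", amount: 129.00 };
const sealedDigest = commit(original);

// Changing the amount changes the digest, so the commitment no longer matches.
const tampered = { ...original, amount: 1290.00 };
console.log(commit(tampered) === sealedDigest); // false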
Example
Original:
{ "action": "refund", "amount": 129.00 }
Modified:
{ "action": "refund", "amount": 1290.00 }
Result:
- Tampered
- Replay Failed
- Chain Invalid
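A minimal sketch of reproducing this with the SDK, assuming the sealed artifact is a plain object whose decision payload can be mutated in memory; the decision field below is an assumption for illustration, and only the methods shown in the Quick Start are taken from the package:

import { LCM } from "lcm-audit";

LCM.init({ apiKey: "your-api-key" });
const sealed = await LCM.seal(LCM.decide({ action: "refund", amount: 129.00 }));

// Hypothetical: the field holding the decision payload is assumed, not documented here.
const tampered = { ...sealed, decision: { action: "refund", amount: 1290.00 } };

console.log(LCM.verify(sealed).valid);   // expected: true
console.log(LCM.verify(tampered).valid); // expected: false (replay fails, chain invalid)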
See it fail in real time:
https://www.lcm3.com/explorer.html
Try LCM in 30 Seconds
See what happens when an AI decision is tampered with.
https://www.lcm3.com/sandbox.html
No setup required.
Inspect a Real Decision
View a full decision proof, chain integrity, and audit trace:
https://www.lcm3.com/explorer.html
Support
https://github.com/paibyun9/LCM-a2a-spec/issues
