LumenSyntax MCP Server
Truth verification framework for LLMs via the Model Context Protocol.
NPM: https://www.npmjs.com/package/lumensyntax-mcp
Migration Notice: This package was previously published as truthsyntax-mcp. Please update your installations.
Installation
npm install -g lumensyntax-mcp
Or use directly with npx:
npx lumensyntax-mcp
Claude Desktop Configuration
Add to your Claude Desktop config (~/.config/claude/claude_desktop_config.json):
{
"mcpServers": {
"lumensyntax": {
"command": "npx",
"args": ["lumensyntax-mcp"]
}
}
}
Or if installed globally:
{
"mcpServers": {
"lumensyntax": {
"command": "lumensyntax-mcp"
}
}
}
Tools
verify_claim
Verifies a factual claim using semantic entailment and linguistic analysis.
Parameters:
{
"claim": "Water boils at 100 degrees Celsius at sea level",
"sources": ["Optional array of source texts to verify against"]
}
Returns:
{
"decision": "VERIFIED",
"groundingScore": 0.95,
"uncertaintyScore": 0.05,
"coherenceScore": 0.9,
"entailmentScores": [0.98]
}
Decisions:
- VERIFIED - Claim is well-grounded
- UNVERIFIED - Claim lacks support
- NEEDS_REVIEW - Requires human judgment
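Outside Claude Desktop, the tool can be exercised from any MCP client. The sketch below is an unofficial, illustrative example that assumes the @modelcontextprotocol/sdk TypeScript client; the client code and the way the result is read are assumptions for demonstration and are not part of this package.
// Minimal MCP client sketch (illustrative; assumes @modelcontextprotocol/sdk is installed).
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the server over stdio, the same way Claude Desktop does.
const transport = new StdioClientTransport({ command: "npx", args: ["lumensyntax-mcp"] });
const client = new Client({ name: "lumensyntax-demo", version: "0.0.1" });
await client.connect(transport);

// Call verify_claim with the parameters documented above.
const result = await client.callTool({
  name: "verify_claim",
  arguments: { claim: "Water boils at 100 degrees Celsius at sea level" },
});
console.log(result.content); // decision, groundingScore, etc.

await client.close();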
verify_response
Analyzes a complete LLM response by extracting and verifying individual claims.
Parameters:
{
"response": "The full LLM response text to analyze",
"context": "Optional background context"
}
Returns: Array of claims, each with its individual verification status.
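As a rough guide, each entry can be expected to mirror the verify_claim output. The field names in this sketch are assumptions for illustration, not the package's exact schema:
// Illustrative result shape only; the actual field names may differ.
type VerifyResponseResult = Array<{
  claim: string;                                        // extracted claim text
  decision: "VERIFIED" | "UNVERIFIED" | "NEEDS_REVIEW"; // per-claim verdict
  groundingScore: number;                               // support for this claim
}>;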
detect_fallacy
Detects logical fallacies in arguments (formal and informal).
Parameters:
{
"argument": "Either you support this policy or you hate freedom."
}
Returns:
{
"valid": false,
"fallacies": [
{
"type": "FALSE_DILEMMA",
"confidence": 0.9,
"explanation": "Presents only two options when more exist",
"category": "INFORMAL"
}
],
"recommendation": "Consider restructuring to present more options."
}
Detected Fallacy Types:
- Ad Hominem
- Straw Man
- False Dilemma
- Slippery Slope
- Appeal to Authority
- Circular Reasoning
- Red Herring
- Hasty Generalization
- False Cause
- Appeal to Emotion
- Bandwagon
- Tu Quoque
test_hypothesis
Evaluates hypotheses for scientific rigor and falsifiability.
Parameters:
{
"hypothesis": "Homeopathy cures diseases through water memory",
"domain": "medicine"
}
Returns:
{
"hypothesis": "...",
"type": "CAUSAL",
"status": "FRINGE",
"falsifiable": true,
"falsificationCriteria": [
"Double-blind placebo-controlled trials showing no effect beyond placebo"
],
"testableExperiments": [
"Randomized controlled trial comparing homeopathic remedy to placebo"
],
"epistemicScore": 0.15,
"analysisNotes": "Contradicts established physics and chemistry..."
}
Epistemic Status:
- ESTABLISHED - Scientific consensus
- CONTESTED - Ongoing debate
- SPECULATIVE - Early stage
- FRINGE - Outside the mainstream
- UNFALSIFIABLE - Cannot be tested
seek_consensus
Verifies a claim against multiple sources to find agreement/disagreement.
Parameters:
{
"claim": "Climate change is primarily caused by human activity",
"sources": ["Source 1 text", "Source 2 text", "Source 3 text"]
}
Returns:
{
"claim": "...",
"consensus": "AGREEMENT",
"groundingScore": 0.95,
"contradictionsFound": false,
"sourceCount": 3,
"agreementRatio": 1.0
}
classify_disagreement
Categorizes disagreements into actionable types using the Logos Oracle framework.
Parameters:
{
"claim": "The universe had a beginning"
}
Returns:
{
"classification": "MYSTERY",
"explanation": "Fundamental question at the boundary of empirical knowledge"
}
Classifications:
- LOGICAL_ERROR - Can be resolved with correct reasoning
- MYSTERY - A fundamental unknown at the boundaries of knowledge
- GAP - Missing information that could be discovered
How It Works
LumenSyntax MCP combines multiple verification approaches:
┌─────────────────────────────────────────────────────────┐
│                   LUMENSYNTAX ENGINE                    │
├─────────────────────────────────────────────────────────┤
│                                                         │
│   ┌─────────────┐   ┌─────────────┐   ┌─────────────┐   │
│   │   DeBERTa   │   │   Fallacy   │   │ Hypothesis  │   │
│   │  NLI Model  │   │  Detector   │   │   Tester    │   │
│   └──────┬──────┘   └──────┬──────┘   └──────┬──────┘   │
│          │                 │                 │          │
│          └─────────────────┼─────────────────┘          │
│                            │                            │
│                            ▼                            │
│                 ┌─────────────────────┐                 │
│                 │      EVC Matrix     │                 │
│                 │  (Decision Logic)   │                 │
│                 └─────────────────────┘                 │
│                            │                            │
│           ┌────────────────┼────────────────┐           │
│           ▼                ▼                ▼           │
│       VERIFIED        UNVERIFIED       NEEDS_REVIEW     │
│                                                         │
└─────────────────────────────────────────────────────────┘
Components:
- DeBERTa NLI - Semantic entailment checking (Xenova/transformers)
- Pattern Matching - Fallacy detection via linguistic patterns
- Popperian Analysis - Hypothesis falsifiability testing
- Uncertainty Detection - Hedging language identification
- EVC Matrix - Decision logic for verification outcomes (see the sketch below)
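As an intuition for how the pieces combine, the sketch below shows one plausible way grounding, uncertainty, and coherence scores could map to the three decisions. The thresholds and rules are assumptions for illustration, not the package's actual EVC Matrix implementation.
// Illustrative decision logic only; thresholds are assumptions, not the real EVC Matrix.
type Decision = "VERIFIED" | "UNVERIFIED" | "NEEDS_REVIEW";

interface Scores {
  groundingScore: number;   // how well sources entail the claim
  uncertaintyScore: number; // amount of hedging language detected
  coherenceScore: number;   // internal consistency of the claim
}

function decide({ groundingScore, uncertaintyScore, coherenceScore }: Scores): Decision {
  if (groundingScore >= 0.8 && uncertaintyScore <= 0.2 && coherenceScore >= 0.7) {
    return "VERIFIED";    // strong support, little hedging, coherent
  }
  if (groundingScore <= 0.3) {
    return "UNVERIFIED";  // little or no support in the sources
  }
  return "NEEDS_REVIEW";  // borderline cases are escalated to a human
}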
Example Session
User: Can you verify this claim: "Vaccines cause autism"?
Claude: [Uses verify_claim]
LumenSyntax Result:
{
"decision": "UNVERIFIED",
"groundingScore": 0.02,
"analysis": "This claim contradicts extensive scientific research.
Multiple large-scale studies have found no link between
vaccines and autism. The original study claiming this
link was retracted and its author lost his medical license."
}
Use Cases
- Fact-checking - Verify claims in articles or conversations
- Argument analysis - Detect logical fallacies in debates
- Research review - Evaluate scientific hypotheses
- Content moderation - Flag potentially false information
- Education - Teach critical thinking skills
Related Projects
- TruthGit - Multi-validator truth verification CLI (Python)
pip install truthgit
- create-lumensyntax - Ecosystem installer
npx create-lumensyntax
License
MIT License - LumenSyntax 2025
Links
- NPM: https://www.npmjs.com/package/lumensyntax-mcp
- GitHub: https://github.com/lumensyntax-org
- Website: https://lumensyntax.dev
