

@josheverett/bullshit-detector

AI-powered fact-checking and bullshit detection for Node.js applications. A generic package for any project requiring LLM-based fact verification.

Overview

This package provides an efficient way to detect misinformation in text using OpenAI's language models. It extracts multiple factual claims from the input text and evaluates them all in a single LLM call. It is well suited to real-time applications, content moderation, or any workflow that needs automated fact-checking.

Features

  • Multi-Statement Detection: Extracts and evaluates ALL factual claims in text, not just one
  • Single Function API: Simple detectBullshit(input, config?) function
  • Dual Input Support: Accepts either plain strings or OpenAI-formatted message arrays
  • Single LLM Call: Extracts facts and evaluates them in one efficient operation
  • Configurable: Customize OpenAI model, temperature, and token limits
  • Structured Output: Returns consistent JSON arrays with bullshit levels, confidence scores, and reasoning
  • TypeScript Support: Full TypeScript types and interfaces
  • Generic Design: Works with any Node.js project, not tied to specific use cases

Installation

npm install @josheverett/bullshit-detector

Setup

Set your OpenAI API key as an environment variable:

export OPENAI_API_KEY="your-openai-api-key"
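The key is read from the environment when the detector runs. If you would rather fail fast at startup than on the first API call, a minimal guard could look like this (illustrative only, `assertApiKey` is not part of the package):

```typescript
// Illustrative guard (not part of the package): fail fast when the key is
// missing instead of erroring later on the first detectBullshit call.
function assertApiKey(env: Record<string, string | undefined>): string {
  const key = env.OPENAI_API_KEY;
  if (!key) {
    throw new Error('OPENAI_API_KEY environment variable is not set');
  }
  return key;
}

// Usage at application startup:
// assertApiKey(process.env);
```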

Quick Start

Basic String Analysis

import { detectBullshit } from '@josheverett/bullshit-detector';

const result = await detectBullshit("The Earth has 27 billion people and the moon is made of cheese.");

console.log(result);
// [
//   {
//     transcript: "The Earth has 27 billion people and the moon is made of cheese.",
//     claim: "The Earth has 27 billion people",
//     summary: "Claims about Earth's population and moon's composition",
//     bullshitLevel: 5,
//     confidence: 5,
//     reasoning: "The actual population is approximately 8 billion, not 27 billion",
//     truth: "The Earth has approximately 8 billion people"
//   },
//   {
//     transcript: "The Earth has 27 billion people and the moon is made of cheese.",
//     claim: "The moon is made of cheese",
//     summary: "Claims about Earth's population and moon's composition",
//     bullshitLevel: 5,
//     confidence: 5,
//     reasoning: "The moon is composed primarily of rock and dust, not cheese",
//     truth: "The moon is a rocky celestial body composed mainly of silicate minerals"
//   }
// ]

With Configuration Options

import { detectBullshit, BullshitDetectionConfig } from '@josheverett/bullshit-detector';

const config: BullshitDetectionConfig = {
  model: 'gpt-4.1-2025-04-14', // Use the latest model
  temperature: 1,            // Model default temperature
  maxTokens: 2000           // Allow longer responses
};

const result = await detectBullshit("Some complex text to analyze", config);

OpenAI Message Array Analysis

import { detectBullshit } from '@josheverett/bullshit-detector';

const messages = [
  { role: 'user', content: 'I read that vaccines contain microchips for tracking' },
  { role: 'assistant', content: 'That is not accurate. Can you tell me more about where you heard this?' },
  { role: 'user', content: 'Well, I saw it on social media. Also, did you know the moon landing was faked?' }
];

const results = await detectBullshit(messages);
// Analyzes the most recent user message for all factual claims

API Reference

detectBullshit(input, config?)

Main function for bullshit detection.

Parameters:

  • input: string | OpenAIMessage[] - Either a text string or array of OpenAI-formatted messages
  • config?: BullshitDetectionConfig - Optional configuration object

Returns: Promise<BullshitDetectionResult[]> - Array of detection results, one per factual claim

BullshitDetectionResult

interface BullshitDetectionResult {
  transcript: string;      // The input text that was analyzed
  claim: string;          // The specific factual statement evaluated
  summary: string;        // A concise summary of the input
  bullshitLevel: number;  // 0-5 scale (0 = no bullshit, 5 = maximum bullshit)
  confidence: number;     // 0-5 scale confidence in the evaluation
  reasoning: string;      // Explanation of why this level was assigned
  truth: string;         // The accurate facts or corrected information
}
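The two 0–5 scales compose naturally into an overall verdict. A hypothetical helper that buckets a result by its `bullshitLevel` might look like this (the thresholds here are illustrative choices, not defined by the package):

```typescript
// Hypothetical helper: map the 0-5 bullshitLevel scale to a verdict.
// Thresholds are illustrative, not part of the package's API.
type Verdict = 'plausible' | 'questionable' | 'bullshit';

function verdictFor(bullshitLevel: number): Verdict {
  if (bullshitLevel <= 1) return 'plausible';
  if (bullshitLevel <= 3) return 'questionable';
  return 'bullshit';
}
```

Combining this with the `confidence` field (e.g. only acting on verdicts with confidence >= 4) is a reasonable way to reduce false positives.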

BullshitDetectionConfig

interface BullshitDetectionConfig {
  model?: string;         // OpenAI model to use (default: 'gpt-4.1-2025-04-14')
  temperature?: number;   // Temperature for LLM calls (default: 0)
  maxTokens?: number;    // Maximum tokens in response (default: 1500)
}

OpenAIMessage

interface OpenAIMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}
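As noted in the Quick Start, when given a message array the package analyzes the most recent user message. Selecting that message can be sketched as follows (this is an illustration of the behavior, not the package's internal code):

```typescript
interface OpenAIMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

// Walk backwards through the conversation to find the newest user message.
function latestUserMessage(messages: OpenAIMessage[]): string | undefined {
  for (let i = messages.length - 1; i >= 0; i--) {
    if (messages[i].role === 'user') return messages[i].content;
  }
  return undefined;
}
```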

Legacy Class-Based API

For applications that prefer a class-based approach, the original interface is still available:

import { BullshitDetector } from '@josheverett/bullshit-detector';

const detector = new BullshitDetector();
const evaluations = await detector.analyzeTranscript("Some text to analyze");
// Returns StatementEvaluation[] (similar to BullshitDetectionResult but without transcript/summary)

Error Handling

The function throws descriptive errors for common issues:

try {
  const results = await detectBullshit("Some text");
} catch (error) {
  // In TypeScript, a caught error is `unknown`; narrow it before reading .message
  const message = error instanceof Error ? error.message : String(error);
  if (message.includes('OPENAI_API_KEY')) {
    console.error('Please set your OpenAI API key');
  } else if (message.includes('No factual claims found')) {
    console.log('Input contains no verifiable factual statements');
  } else {
    console.error('Detection failed:', message);
  }
}

Common Use Cases

Real-time Applications

Run fact-checking inline as input arrives, for example in a chat or transcription pipeline:

// In a real-time pipeline
const transcriptResults = await detectBullshit(userInput);
if (transcriptResults.some(r => r.bullshitLevel > 3)) {
  // Handle misinformation appropriately
}

Content Moderation

Analyze user-generated content for factual accuracy:

const contentResults = await detectBullshit(userPost);
contentResults.forEach(result => {
  if (result.bullshitLevel >= 4 && result.confidence >= 4) {
    flagForReview(result);
  }
});

Educational Tools

Help students identify misinformation in texts:

const analysisResults = await detectBullshit(studentEssay);
// Show corrections and reasoning to help learning
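Turning results into student-facing feedback only needs the `claim`, `reasoning`, and `truth` fields from each result. A sketch of such a formatter (your own code, not provided by the package):

```typescript
// Illustrative formatter: turn detection results into feedback lines.
// Only the fields needed for feedback are typed here; a full result
// (BullshitDetectionResult) carries additional fields.
interface FeedbackInput {
  claim: string;
  reasoning: string;
  truth: string;
}

function renderFeedback(results: FeedbackInput[]): string[] {
  return results.map(
    (r) => `Claim: "${r.claim}". Why flagged: ${r.reasoning}. Correction: ${r.truth}`
  );
}
```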

License

MIT