
o-reasonable

v0.0.7


O-Reasonable 🧠

A lightweight reasoning agent designed to mimic logical planning and problem-solving capabilities using cost-effective OpenAI models. It generates step-by-step plans, executes them sequentially, and synthesizes a final answer.

[demo]

Features

  • 🎯 Dynamic model selection with sensible defaults
  • 📋 Structured step-by-step planning
  • 🔄 Sequential execution of reasoning steps
  • 🎨 Clean and informative console output
  • ⚡ Built with TypeScript and Vite

Installation

npm install o-reasonable

Configuration

You'll need to set up your OpenAI API key as an environment variable:

export OPENAI_API_KEY=your-api-key-here

Or create a .env file in your project root:

OPENAI_API_KEY=your-api-key-here
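If the key is missing, the failure will otherwise surface only when the first API call is made. A minimal sketch of a fail-fast guard you could add before calling the agent (the `requireEnv` helper is hypothetical, not part of the package):

```typescript
// Hypothetical helper: throw immediately if a required environment variable is unset.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value || value.trim() === "") {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage (uncomment once OPENAI_API_KEY is set in your environment):
// const apiKey = requireEnv("OPENAI_API_KEY");
```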

Usage

import { runAgent } from 'o-reasonable';

// Basic usage with the default model
const result = await runAgent("What would be the impact of implementing a four-day work week?");

// Using a custom model with logs explicitly disabled
const customResult = await runAgent("Analyze the pros and cons of remote work", {
  model: "gpt-4o-mini",
  enableLogs: false
});

// The result contains the steps and the final answer
console.log(result.steps);         // Array of step results
console.log(result.finalQuestion); // The final question asked
console.log(result.finalAnswer);   // The final synthesized answer

Configuration Options

The runAgent function accepts a configuration object with the following options:

interface OReasonableConfig {
  model?: string;         // OpenAI model to use (default: "gpt-4o-mini")
  apiKey?: string;        // Optional API key override
  baseURL?: string;       // Custom base URL for OpenAI-compatible APIs
  enableLogs?: boolean;   // Enable/disable console logs (default: false)
  enableReflection?: boolean; // Enable step validation and reflection (default: true)
  minConfidence?: number; // Minimum confidence threshold for steps (default: 0.6)
  maxRetries?: number;    // Maximum retries for low-confidence steps (default: 1)
}

Enhanced Reasoning Features

O-Reasonable includes advanced reasoning capabilities that improve the quality and reliability of the generated responses:

🔍 Step Validation & Reflection

Each reasoning step is automatically validated for:

  • Confidence: How certain the AI is about the step's accuracy
  • Relevance: How well the step contributes to solving the original task
  • Logical soundness: Whether the reasoning is well-founded

🧠 Smart Context Management

  • Context Summarization: Automatically summarizes previous steps to maintain focus
  • Conversation Threading: Builds coherent conversation history between steps
  • Relevance Filtering: Keeps only the most important information for subsequent steps

🔄 Adaptive Execution

  • Quality Thresholds: Automatically retries steps that don't meet confidence requirements
  • Enhanced Planning: Uses proven reasoning frameworks (analytical, comparative, causal, creative)
  • Self-Evaluation: Reflects on the overall reasoning process quality

📊 Confidence Tracking

  • Individual step confidence scores
  • Overall reasoning confidence
  • Quality metrics for each step

// Enhanced reasoning with validation
const result = await runAgent("Analyze the impact of remote work on productivity", {
  enableReflection: true,    // Enable step validation and reflection
  minConfidence: 0.7,        // Require high confidence (0.0-1.0)
  maxRetries: 2,             // Retry low-confidence steps up to 2 times
  enableLogs: true
});

// Access enhanced results
console.log(result.steps);              // Array of StepResult objects with confidence scores
console.log(result.overallConfidence);  // Overall reasoning confidence (0.0-1.0)
console.log(result.reflections);        // Self-evaluation of reasoning quality

Step Result Structure

interface StepResult {
  content: string;      // The step's reasoning content
  confidence: number;   // Confidence score (0.0-1.0)
  relevance: number;    // Relevance to original task (0.0-1.0)
  isValid: boolean;     // Whether the step passed validation
  reasoning: string;    // Explanation of the validation
}
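Because every step carries its own validation fields, you can post-process the results, for example to flag the steps worth a second look. A small sketch using only the fields documented above (the `weakSteps` helper is illustrative, not part of the package):

```typescript
// Mirrors the documented StepResult shape.
interface StepResult {
  content: string;      // The step's reasoning content
  confidence: number;   // Confidence score (0.0-1.0)
  relevance: number;    // Relevance to original task (0.0-1.0)
  isValid: boolean;     // Whether the step passed validation
  reasoning: string;    // Explanation of the validation
}

// Return the steps worth reviewing: invalid, or below the confidence threshold.
function weakSteps(steps: StepResult[], minConfidence = 0.6): StepResult[] {
  return steps.filter((s) => !s.isValid || s.confidence < minConfidence);
}
```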

Using OpenAI-Compatible APIs

O-Reasonable supports any OpenAI-compatible API by specifying a custom baseURL. Here are some examples:

Local Models (Ollama, LM Studio, etc.)

import { runAgent } from 'o-reasonable';

// Using Ollama with a local model
const ollamaResult = await runAgent("Explain quantum computing", {
  baseURL: "http://localhost:11434/v1",
  model: "llama2", // or any model available in Ollama
  apiKey: "ollama" // Ollama accepts any non-empty key
});

// Using LM Studio
const lmStudioResult = await runAgent("What are the benefits of TypeScript?", {
  baseURL: "http://localhost:1234/v1",
  model: "local-model",
  apiKey: "lm-studio"
});

Azure OpenAI

const result = await runAgent("Analyze market trends", {
  baseURL: "https://your-resource.openai.azure.com",
  model: "gpt-35-turbo", // Azure deployment name
  apiKey: process.env.AZURE_OPENAI_API_KEY
});

Other OpenAI-Compatible Providers

// Example with any OpenAI-compatible service
const result = await runAgent("Create a project plan", {
  baseURL: "https://api.your-provider.com/v1",
  model: "your-model-name",
  apiKey: "your-api-key"
});

Return Type

The runAgent function returns a promise that resolves to an enhanced AgentResult:

interface AgentResult {
  steps: StepResult[];      // Enhanced step results with confidence scores
  finalQuestion: string;    // The final question that was asked
  finalAnswer: string;      // The synthesized final answer
  overallConfidence: number; // Overall confidence in the reasoning (0.0-1.0)
  reflections: string[];    // Self-evaluation and reflection insights
}
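As an illustration, here is a small helper (not part of the package) that condenses an `AgentResult` into a one-line summary suitable for logging. The shapes are simplified to the fields the sketch actually uses:

```typescript
// Simplified shapes mirroring the documented AgentResult, reduced to what the sketch needs.
interface AgentResult {
  steps: { confidence: number; isValid: boolean }[];
  finalQuestion: string;
  finalAnswer: string;
  overallConfidence: number;
  reflections: string[];
}

// Hypothetical helper: summarize a result in one line, e.g. for logging.
function summarize(result: AgentResult): string {
  const valid = result.steps.filter((s) => s.isValid).length;
  return `${valid}/${result.steps.length} steps valid, ` +
    `overall confidence ${result.overallConfidence.toFixed(2)}`;
}
```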

Examples

Quick Start

# Try the quick example
node quickstart.js

Comprehensive Examples

# Run all examples (JavaScript)
node example.js

# Run TypeScript examples (requires ts-node)
npx ts-node example.ts

# Try different configurations
node examples/configurations.js

Basic Usage

import { runAgent } from 'o-reasonable';

const result = await runAgent("What are the key factors in choosing a programming language?");
console.log(result.finalAnswer);
console.log(`Confidence: ${result.overallConfidence.toFixed(2)}`);

Advanced Reasoning Configuration

// High-quality reasoning with strict validation
const result = await runAgent("Design a sustainable urban transportation system", {
  enableReflection: true,
  minConfidence: 0.8,      // Require very high confidence
  maxRetries: 3,           // Allow more retries for quality
  enableLogs: true,
  model: "gpt-4"
});

// Analyze step quality
result.steps.forEach((step, i) => {
  console.log(`Step ${i+1}: Confidence ${step.confidence.toFixed(2)}, Relevance ${step.relevance.toFixed(2)}`);
});

Debugging and Analysis

const result = await runAgent("Solve this complex business problem", {
  enableLogs: true,        // See detailed reasoning process
  enableReflection: true   // Get self-evaluation insights
});

// Review the reasoning process
console.log("Reflections on reasoning quality:");
result.reflections.forEach(reflection => console.log(reflection));

Development

To set up the development environment:

git clone https://github.com/chihebnabil/o-reasonable.git
cd o-reasonable
npm install
npm run dev

Building

npm run build

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

MIT

Credits

Created with ❤️ by Chiheb Nabil