Hyntx

Hyntx is a CLI tool that analyzes your Claude Code prompts and helps you become a better prompt engineer through retrospective analysis and actionable feedback.

🚧 NOT READY FOR USE: This project is under active development. The published npm package does not produce output yet. Check back for updates.

What is Hyntx?

Hyntx reads your Claude Code conversation logs and uses AI to detect common prompt engineering anti-patterns. It provides you with:

  • Pattern detection: Identifies recurring issues in your prompts (missing context, vague instructions, etc.)
  • Actionable suggestions: Specific recommendations with concrete "Before/After" rewrites
  • Privacy-first: Automatically redacts secrets and defaults to local AI (Ollama)
  • Zero configuration: Interactive setup on first run with auto-save to shell config

Think of it as a retrospective code review for your prompts.

Features

  • Offline-first analysis with local Ollama (privacy-friendly, cost-free)
  • Multi-provider support: Ollama (local), Anthropic Claude, Google Gemini with automatic fallback
  • Before/After rewrites: Concrete examples showing how to improve your prompts
  • Automatic secret redaction: API keys, emails, tokens, credentials
  • Flexible date filtering: Analyze today, yesterday, specific dates, or date ranges
  • Project filtering: Focus on specific Claude Code projects
  • Multiple output formats: Beautiful terminal output or markdown reports
  • Watch mode: Real-time monitoring and analysis of prompts as you work
  • Smart reminders: Oh-my-zsh style periodic reminders (configurable)
  • Auto-configuration: Saves settings to your shell config automatically
  • Dry-run mode: Preview what will be analyzed before sending to AI

Installation

NPM (Global)

npm install -g hyntx

NPX (No installation)

npx hyntx

PNPM

pnpm add -g hyntx

Quick Start

Run Hyntx with a single command:

hyntx

On first run, Hyntx will guide you through an interactive setup:

  1. Select one or more AI providers (Ollama recommended for privacy)
  2. Configure models and API keys for selected providers
  3. Set reminder preferences
  4. Auto-save configuration to your shell (or get manual instructions)

That's it! Hyntx will analyze today's prompts and show you improvement suggestions with concrete "Before/After" examples.

Usage

Basic Commands

# Analyze today's prompts
hyntx

# Analyze yesterday
hyntx --date yesterday

# Analyze a specific date
hyntx --date 2025-01-20

# Analyze a date range
hyntx --from 2025-01-15 --to 2025-01-20

# Filter by project name
hyntx --project my-awesome-app

# Save report to file
hyntx --output report.md

# Preview without sending to AI
hyntx --dry-run

# Check reminder status
hyntx --check-reminder

# Watch mode - real-time analysis
hyntx --watch

# Watch specific project only
hyntx --watch --project my-app

# Analysis modes - control speed vs accuracy trade-off
hyntx --analysis-mode batch      # Fast (default): ~300-400ms/prompt
hyntx --analysis-mode individual # Accurate: ~1000-1500ms/prompt
hyntx -m individual              # Short form

Combining Options

# Analyze last week for a specific project
hyntx --from 2025-01-15 --to 2025-01-22 --project backend-api

# Generate markdown report for yesterday
hyntx --date yesterday --output yesterday-analysis.md

# Deep analysis with individual mode for critical project
hyntx -m individual --project production-api --date today

# Fast batch analysis across date range
hyntx --from 2025-01-15 --to 2025-01-20 --analysis-mode batch -o report.md

# Watch mode with individual analysis (slower but detailed)
hyntx --watch -m individual --project critical-app

Configuration

Analysis Modes

Hyntx offers two analysis modes to balance speed and accuracy based on your needs:

Batch Mode (Default)

  • Speed: ~300-400ms per prompt
  • Best for: Daily analysis, quick feedback, large prompt batches
  • Accuracy: Good categorization for most use cases
  • When to use: Regular check-ins, monitoring prompt quality over time

hyntx                          # Uses batch mode by default
hyntx --analysis-mode batch    # Explicit batch mode

Individual Mode

  • Speed: ~1000-1500ms per prompt
  • Best for: Deep analysis, quality-focused reviews, important prompts
  • Accuracy: Better categorization and more nuanced pattern detection
  • When to use: Learning sessions, preparing critical prompts, detailed audits

hyntx --analysis-mode individual  # Use individual mode
hyntx -m individual               # Short form

Quick Mode Comparison

| Mode       | Speed/Prompt | Use Case                   | Accuracy | When to Use                                |
| ---------- | ------------ | -------------------------- | -------- | ------------------------------------------ |
| Batch      | ~300-400ms   | Daily analysis, monitoring | Good     | Quick feedback, large datasets             |
| Individual | ~1-1.5s      | Deep analysis, learning    | Better   | Quality-focused reviews, critical prompts  |

Speedup: Batch mode is 3-4x faster than individual mode.

Recommendation: Use batch mode (default) for daily analysis to get fast feedback. Switch to individual mode when:

  • You need detailed, nuanced feedback on each prompt
  • You're learning prompt engineering patterns
  • Analyzing high-stakes or complex prompts
  • Conducting quality audits or teaching sessions

Performance Note: Numbers based on llama3.2 on CPU. Actual speed varies by hardware, model size, and prompt complexity.

Detailed Guide: See the Analysis Modes documentation for a comprehensive comparison, examples, and decision guidelines.

Rules Configuration

Hyntx allows you to customize which analysis rules are enabled and their severity levels through a .hyntxrc.json file in your project root.

Available Pattern IDs

  • vague - Detects vague requests lacking specificity
  • no-context - Detects missing background information
  • too-broad - Detects overly broad requests that should be broken down
  • no-goal - Detects prompts without a clear outcome
  • imperative - Detects commands without explanation

Configuration Options

For each pattern, you can:

  • Disable: Set enabled: false to skip detection
  • Override severity: Set severity to "low", "medium", or "high"

Example Configuration

Create .hyntxrc.json in your project root:

{
  "rules": {
    "imperative": {
      "enabled": false
    },
    "vague": {
      "severity": "high"
    },
    "no-context": {
      "severity": "high"
    },
    "too-broad": {
      "severity": "medium"
    }
  }
}

What Happens When Patterns Are Disabled

  • Filtered out: Disabled patterns are completely excluded from analysis results
  • No detection: The AI will not look for those specific issues
  • Updated stats: Pattern counts and frequency calculations exclude disabled patterns
  • Warning: If all patterns are disabled, you'll see a warning that no analysis will occur

How Severity Overrides Work

  • Changed priority: Patterns are sorted by severity (high → medium → low), then by frequency
  • Updated display: The reporter shows severity badges based on your configuration
  • No effect on detection: Severity only affects sorting and display, not whether the pattern is detected

Configuration Warnings

Hyntx will warn you about:

  • Invalid pattern IDs: If you specify a pattern ID that doesn't exist
  • All patterns disabled: If your configuration disables every pattern

These warnings appear immediately when the configuration is loaded.

Environment Variables

Hyntx uses environment variables for configuration. The interactive setup can auto-save these to your shell config (~/.zshrc, ~/.bashrc).

Multi-Provider Configuration

Configure one or more providers in priority order. Hyntx will try each provider in order and fall back to the next if unavailable.

# Single provider (Ollama only)
export HYNTX_SERVICES=ollama
export HYNTX_OLLAMA_MODEL=llama3.2

# Multi-provider with fallback (tries Ollama first, then Anthropic)
export HYNTX_SERVICES=ollama,anthropic
export HYNTX_OLLAMA_MODEL=llama3.2
export HYNTX_ANTHROPIC_KEY=sk-ant-your-key-here

# Cloud-first with local fallback
export HYNTX_SERVICES=anthropic,ollama
export HYNTX_ANTHROPIC_KEY=sk-ant-your-key-here
export HYNTX_OLLAMA_MODEL=llama3.2

Provider-Specific Variables

Ollama:

| Variable           | Default                | Description       |
| ------------------ | ---------------------- | ----------------- |
| HYNTX_OLLAMA_MODEL | llama3.2               | Model to use      |
| HYNTX_OLLAMA_HOST  | http://localhost:11434 | Ollama server URL |

Anthropic:

| Variable              | Default                 | Description        |
| --------------------- | ----------------------- | ------------------ |
| HYNTX_ANTHROPIC_MODEL | claude-3-5-haiku-latest | Model to use       |
| HYNTX_ANTHROPIC_KEY   | -                       | API key (required) |

Google:

| Variable           | Default              | Description        |
| ------------------ | -------------------- | ------------------ |
| HYNTX_GOOGLE_MODEL | gemini-2.0-flash-exp | Model to use       |
| HYNTX_GOOGLE_KEY   | -                    | API key (required) |

Reminder Settings

# Set reminder frequency (7d, 14d, 30d, or never)
export HYNTX_REMINDER=7d

Complete Example

# Add to ~/.zshrc or ~/.bashrc (or let Hyntx auto-save it)
export HYNTX_SERVICES=ollama,anthropic
export HYNTX_OLLAMA_MODEL=llama3.2
export HYNTX_ANTHROPIC_KEY=sk-ant-your-key-here
export HYNTX_REMINDER=14d

# Optional: Enable periodic reminders
hyntx --check-reminder 2>/dev/null

Then reload your shell:

source ~/.zshrc  # or source ~/.bashrc

AI Provider Setup

Ollama (Recommended)

Ollama runs AI models locally for privacy and cost savings.

  1. Install Ollama: ollama.ai

  2. Pull a model:

    ollama pull llama3.2

  3. Verify it's running:

    ollama list

  4. Run Hyntx (it will auto-configure on first run):

    hyntx

Anthropic Claude

  1. Get API key from console.anthropic.com

  2. Run Hyntx and select Anthropic during setup, or set manually:

    export HYNTX_SERVICES=anthropic
    export HYNTX_ANTHROPIC_KEY=sk-ant-your-key-here

Google Gemini

  1. Get API key from ai.google.dev

  2. Run Hyntx and select Google during setup, or set manually:

    export HYNTX_SERVICES=google
    export HYNTX_GOOGLE_KEY=your-google-api-key

Using Multiple Providers

Configure multiple providers for automatic fallback:

# If Ollama is down, automatically try Anthropic
export HYNTX_SERVICES=ollama,anthropic
export HYNTX_OLLAMA_MODEL=llama3.2
export HYNTX_ANTHROPIC_KEY=sk-ant-your-key-here

When running, Hyntx will show fallback behavior:

⚠️  ollama unavailable, trying anthropic...
✅ anthropic connected

Example Output

📊 Hyntx - 2025-01-20
──────────────────────────────────────────────────

📈 Statistics
   Prompts: 15
   Projects: my-app, backend-api
   Score: 6.5/10

⚠️  Patterns (3)

🔴 Missing Context (60%)
   • "Fix the bug in auth"
   • "Update the component"
   💡 Include specific error messages, framework versions, and file paths

   Before:
   ❌ "Fix the bug in auth"
   After:
   ✅ "Fix authentication bug in src/auth/login.ts where users get
      'Invalid token' error. Using Next.js 14.1.0 with next-auth 4.24.5."

🟡 Vague Instructions (40%)
   • "Make it better"
   • "Improve this"
   💡 Define specific success criteria and expected outcomes

   Before:
   ❌ "Make it better"
   After:
   ✅ "Optimize the database query to reduce response time from 500ms
      to under 100ms. Focus on adding proper indexes."

──────────────────────────────────────────────────
💎 Top Suggestion
   "Add error messages and stack traces to debugging requests for
    10x faster resolution."
──────────────────────────────────────────────────

MCP Integration

Hyntx can run as a Model Context Protocol (MCP) server, enabling real-time prompt analysis directly within MCP-compatible clients like Claude Code.

Quick Setup

Add hyntx to your Claude Code MCP configuration:

macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
Linux: ~/.config/Claude/claude_desktop_config.json

{
  "mcpServers": {
    "hyntx": {
      "command": "hyntx",
      "args": ["--mcp-server"]
    }
  }
}

After editing, restart Claude Code. The hyntx tools will be available in your conversations.

Prerequisites

  • Hyntx installed globally: npm install -g hyntx
  • AI provider configured: Set up Ollama (recommended) or cloud providers via environment variables

If using Ollama (recommended for privacy):

# Ensure Ollama is running
ollama serve

# Pull a model if needed
ollama pull llama3.2

# Set environment variables (add to ~/.zshrc or ~/.bashrc)
export HYNTX_SERVICES=ollama
export HYNTX_OLLAMA_MODEL=llama3.2

Available MCP Tools

Hyntx exposes three tools through the MCP interface:

analyze-prompt

Analyze a prompt to detect anti-patterns, issues, and get improvement suggestions.

Input Schema:

| Parameter | Type   | Required | Description                                           |
| --------- | ------ | -------- | ----------------------------------------------------- |
| prompt    | string | Yes      | The prompt text to analyze                            |
| date      | string | No       | Date context in ISO format. Defaults to current date. |

Example Output:

{
  "patterns": [
    {
      "id": "no-context",
      "name": "Missing Context",
      "severity": "high",
      "frequency": "100%",
      "suggestion": "Include specific error messages and file paths",
      "examples": ["Fix the bug in auth"]
    }
  ],
  "stats": {
    "promptCount": 1,
    "overallScore": 4.5
  },
  "topSuggestion": "Add error messages and stack traces for faster resolution"
}

suggest-improvements

Get concrete before/after rewrites showing how to improve a prompt.

Input Schema:

| Parameter | Type   | Required | Description                                           |
| --------- | ------ | -------- | ----------------------------------------------------- |
| prompt    | string | Yes      | The prompt text to analyze for improvements           |
| date      | string | No       | Date context in ISO format. Defaults to current date. |

Example Output:

{
  "improvements": [
    {
      "issue": "Missing Context",
      "before": "Fix the bug in auth",
      "after": "Fix authentication bug in src/auth/login.ts where users get 'Invalid token' error. Using Next.js 14.1.0 with next-auth 4.24.5.",
      "suggestion": "Include specific error messages, framework versions, and file paths"
    }
  ],
  "summary": "Found 1 improvement(s)",
  "topSuggestion": "Add error messages and stack traces for faster resolution"
}

check-context

Verify if a prompt has sufficient context for effective AI interaction.

Input Schema:

| Parameter | Type   | Required | Description                                           |
| --------- | ------ | -------- | ----------------------------------------------------- |
| prompt    | string | Yes      | The prompt text to check for context                  |
| date      | string | No       | Date context in ISO format. Defaults to current date. |

Example Output:

{
  "hasSufficientContext": false,
  "score": 4.5,
  "issues": ["Missing Context", "Vague Instructions"],
  "suggestion": "Include specific error messages and file paths",
  "details": "Prompt lacks sufficient context for effective AI interaction"
}

Usage Examples

Once configured, you can use these tools in your Claude Code conversations:

Analyze a prompt before sending:

Use the analyze-prompt tool to check: "Fix the login bug"

Get improvement suggestions:

Use suggest-improvements on: "Make the API faster"

Check if your prompt has enough context:

Use check-context to verify: "Update the component to handle errors"

MCP Server Troubleshooting

"Server failed to start"

  1. Verify hyntx is installed globally:

    which hyntx
    # Should output: /usr/local/bin/hyntx or similar

  2. Test manual startup:

    hyntx --mcp-server
    # Should output: MCP server running on stdio

  3. Check environment variables are set (if using cloud providers):

    echo $HYNTX_SERVICES
    echo $HYNTX_ANTHROPIC_KEY  # if using Anthropic

"Analysis failed: Provider not available"

  1. If using Ollama, ensure it's running:

    ollama list
    # If no output, start Ollama:
    ollama serve

  2. If using cloud providers, verify API keys are set:

    # Check if keys are configured
    env | grep HYNTX_

"Tools not appearing in Claude Code"

  1. Restart Claude Code completely after config changes

  2. Verify the config file path is correct for your OS

  3. Check JSON syntax in the config file:

    # macOS
    cat ~/Library/Application\ Support/Claude/claude_desktop_config.json | jq .

"Slow responses"

  • Local Ollama models are fastest but require GPU for best performance
  • Consider using a faster model: export HYNTX_OLLAMA_MODEL=llama3.2:1b
  • Cloud providers (Anthropic, Google) offer faster responses but require API keys

Privacy & Security

Hyntx takes your privacy seriously:

  • Local-first: Defaults to Ollama for offline analysis
  • Automatic redaction: Removes API keys, credentials, emails, tokens before analysis
  • Read-only: Never modifies your Claude Code logs
  • No telemetry: Hyntx doesn't send usage data anywhere

What Gets Redacted?

  • OpenAI/Anthropic API keys (sk-*, claude-*)
  • AWS credentials (AKIA*, secret keys)
  • Bearer tokens
  • HTTP credentials in URLs
  • Email addresses
  • Private keys (PEM format)
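
To see the redaction step in isolation, here is a minimal TypeScript sketch using the sanitizePrompts export documented in the Programmatic API section below. The sample secrets are made up, and the exact placeholder text used for redacted values is not specified here:

import { sanitizePrompts } from 'hyntx';

// Prompts containing values that should never reach an AI provider
const { prompts, totalRedacted } = sanitizePrompts([
  'Call the API with key sk-ant-xxxx and email me at dev@example.com',
  'Refactor the login flow in src/auth/login.ts',
]);

console.log(totalRedacted); // number of secrets removed
console.log(prompts[0]);    // same text with the key and email redacted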

How It Works

  1. Read logs: Parses Claude Code conversation logs from ~/.claude/projects/
  2. Extract prompts: Filters user messages from conversations
  3. Sanitize: Redacts sensitive information automatically
  4. Analyze: Sends sanitized prompts to AI provider for pattern detection
  5. Report: Displays findings with examples and suggestions

Requirements

  • Node.js: 22.0.0 or higher
  • Claude Code: Installed, with at least one conversation logged
  • AI Provider: At least one of the following:
    • Ollama (recommended for privacy and cost savings)
    • Anthropic Claude API key
    • Google Gemini API key

Ollama Model Requirements

For local analysis with Ollama, you need to have a compatible model installed. See docs/MINIMUM_VIABLE_MODEL.md for detailed recommendations and performance benchmarks.

Quick picks:

| Use Case        | Model       | Parameters | Disk Size | Speed (CPU)    | Quality   |
| --------------- | ----------- | ---------- | --------- | -------------- | --------- |
| Daily use       | llama3.2    | 2-3B       | ~2GB      | ~2-5s/prompt   | Good      |
| Production      | mistral:7b  | 7B         | ~4GB      | ~5-10s/prompt  | Better    |
| Maximum quality | qwen2.5:14b | 14B        | ~9GB      | ~15-30s/prompt | Excellent |

Installation:

# Install recommended model (llama3.2)
ollama pull llama3.2

# Or choose a different model
ollama pull mistral:7b
ollama pull qwen2.5:14b

For a complete model comparison, compatibility info, and performance notes, see the Model Requirements documentation.

Troubleshooting

"No Claude Code logs found"

Make sure you've used Claude Code at least once. Logs are stored in:

~/.claude/projects/<project-hash>/logs.jsonl

"Ollama connection failed"

  1. Check Ollama is running: ollama list
  2. Start Ollama: ollama serve
  3. Verify the host: echo $HYNTX_OLLAMA_HOST (default: http://localhost:11434)

"No prompts found for date range"

  • Check the date format: YYYY-MM-DD
  • Verify you used Claude Code on those dates
  • Try --dry-run to see what logs are being read

Programmatic API

Hyntx can also be used as a library in your Node.js applications for custom integrations, CI/CD pipelines, or building tooling on top of the analysis engine.

Installation

npm install hyntx
# or
pnpm add hyntx

Basic Usage

import {
  analyzePrompts,
  sanitizePrompts,
  readLogs,
  createProvider,
  getEnvConfig,
  type AnalysisResult,
  type ExtractedPrompt,
} from 'hyntx';

// Read Claude Code logs for a specific date
const { prompts } = await readLogs({ date: 'today' });

// Sanitize prompts to remove secrets
const { prompts: sanitizedTexts } = sanitizePrompts(
  prompts.map((p: ExtractedPrompt) => p.content),
);

// Get environment configuration
const config = getEnvConfig();

// Create an AI provider
const provider = await createProvider('ollama', config);

// Analyze the prompts
const result: AnalysisResult = await analyzePrompts({
  provider,
  prompts: sanitizedTexts,
  date: '2025-12-26',
});

// Use the results
console.log(`Overall score: ${result.stats.overallScore}/10`);
console.log(`Patterns detected: ${result.patterns.length}`);

result.patterns.forEach((pattern) => {
  console.log(`- ${pattern.name}: ${pattern.severity}`);
  console.log(`  Suggestion: ${pattern.suggestion}`);
});

Advanced Examples

CI/CD Integration - Fail builds when prompt quality drops below threshold:

import { analyzePrompts, readLogs, createProvider, getEnvConfig } from 'hyntx';

const config = getEnvConfig();
const provider = await createProvider('ollama', config);
const { prompts } = await readLogs({ date: 'today' });

const result = await analyzePrompts({
  provider,
  prompts: prompts.map((p) => p.content),
  date: new Date().toISOString().split('T')[0],
});

// Fail CI if quality score is too low
const QUALITY_THRESHOLD = 7.0;
if (result.stats.overallScore < QUALITY_THRESHOLD) {
  console.error(
    `Quality score ${result.stats.overallScore} below threshold ${QUALITY_THRESHOLD}`,
  );
  process.exit(1);
}

Custom Analysis - Analyze specific prompts without reading logs:

import { analyzePrompts, createProvider, getEnvConfig } from 'hyntx';

const config = getEnvConfig();
const provider = await createProvider('anthropic', config);

const customPrompts = [
  'Fix the bug',
  'Make it better',
  'Refactor the authentication module to use JWT tokens instead of sessions',
];

const result = await analyzePrompts({
  provider,
  prompts: customPrompts,
  date: '2025-12-26',
  context: {
    role: 'developer',
    techStack: ['TypeScript', 'React', 'Node.js'],
  },
});

console.log(result.patterns);

History Management - Track analysis over time:

import {
  analyzePrompts,
  saveAnalysisResult,
  loadAnalysisResult,
  compareResults,
  type HistoryMetadata,
} from 'hyntx';

// Run analysis
const result = await analyzePrompts({
  /* ... */
});

// Save to history
const metadata: HistoryMetadata = {
  date: '2025-12-26',
  promptCount: result.stats.promptCount,
  score: result.stats.overallScore,
  projectFilter: undefined,
  provider: 'ollama',
};
await saveAnalysisResult(result, metadata);

// Load previous analysis
const previousResult = await loadAnalysisResult('2025-12-19');

// Compare results
const comparison = await compareResults('2025-12-19', '2025-12-26');
console.log(
  `Score change: ${comparison.scoreChange > 0 ? '+' : ''}${comparison.scoreChange}`,
);

API Reference

Core Functions

  • analyzePrompts(options: AnalysisOptions): Promise<AnalysisResult> - Analyze prompts and detect anti-patterns
  • readLogs(options?: ReadLogsOptions): Promise<LogReadResult> - Read Claude Code conversation logs
  • sanitize(text: string): SanitizeResult - Remove secrets from a single text
  • sanitizePrompts(prompts: string[]): { prompts: string[]; totalRedacted: number } - Remove secrets from multiple prompts

Provider Functions

  • createProvider(type: ProviderType, config: EnvConfig): Promise<AnalysisProvider> - Create an AI provider instance
  • getAvailableProvider(config: EnvConfig, onFallback?: Function): Promise<AnalysisProvider> - Get first available provider with fallback
  • getAllProviders(services: string[], config: EnvConfig): AnalysisProvider[] - Get all configured providers
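
As a minimal sketch of the fallback flow using these functions (the arguments passed to onFallback are only declared as an optional Function above, so they are left untyped here):

import { getAvailableProvider, getEnvConfig } from 'hyntx';

const config = getEnvConfig();

// Walks HYNTX_SERVICES in order and returns the first reachable provider
const provider = await getAvailableProvider(config, (...args: unknown[]) => {
  console.warn('Provider unavailable, falling back...', ...args);
});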

History Functions

  • saveAnalysisResult(result: AnalysisResult, metadata: HistoryMetadata): Promise<void> - Save analysis to history
  • loadAnalysisResult(date: string): Promise<HistoryEntry | null> - Load analysis from history
  • listAvailableDates(): Promise<string[]> - Get list of dates with saved analyses
  • compareResults(beforeDate: string, afterDate: string): Promise<ComparisonResult> - Compare two analyses

Utility Functions

  • getEnvConfig(): EnvConfig - Get environment configuration
  • claudeProjectsExist(): boolean - Check if Claude projects directory exists
  • parseDate(dateStr: string): Date - Parse date string to Date object
  • groupByDay(prompts: ExtractedPrompt[]): DayGroup[] - Group prompts by day
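
A short sketch combining these helpers (the shape of DayGroup is not documented here, so only its length is used):

import { claudeProjectsExist, groupByDay, parseDate, readLogs } from 'hyntx';

if (!claudeProjectsExist()) {
  throw new Error('No Claude Code projects directory found');
}

const { prompts } = await readLogs({ date: 'today' });

// Bucket the extracted prompts by calendar day
const days = groupByDay(prompts);
console.log(`Prompts span ${days.length} day(s)`);

// parseDate turns a YYYY-MM-DD string into a plain Date
const start = parseDate('2025-01-15');
console.log(start.toISOString());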

Cache Functions

  • generateCacheKey(config: CacheKeyConfig): string - Generate cache key for analysis
  • getCachedResult(cacheKey: string): Promise<AnalysisResult | null> - Get cached result
  • setCachedResult(cacheKey: string, result: AnalysisResult, ttlMinutes?: number): Promise<void> - Cache analysis result
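
A sketch of wrapping analysis in the cache; the fields of CacheKeyConfig are an assumption here, so check the exported type for the real shape:

import {
  analyzePrompts,
  generateCacheKey,
  getCachedResult,
  setCachedResult,
  type AnalysisResult,
  type CacheKeyConfig,
} from 'hyntx';

// Assumed CacheKeyConfig fields; adjust to match the exported type
const key = generateCacheKey({
  date: '2025-12-26',
  provider: 'ollama',
} as CacheKeyConfig);

// Reuse a previous analysis when one is cached
let result: AnalysisResult | null = await getCachedResult(key);
if (!result) {
  result = await analyzePrompts({
    /* provider, prompts, date (as in Basic Usage above) */
  });
  await setCachedResult(key, result, 60); // optional TTL in minutes
}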

TypeScript Support

Hyntx is written in TypeScript and provides full type definitions. All types are exported:

import type {
  AnalysisResult,
  AnalysisPattern,
  AnalysisStats,
  ExtractedPrompt,
  ProviderType,
  EnvConfig,
  HistoryEntry,
  ComparisonResult,
} from 'hyntx';

See the TypeScript definitions for complete API documentation.

Development

Setup

# Clone the repository
git clone https://github.com/jmlweb/hyntx.git
cd hyntx

# Install dependencies
pnpm install

# Run in development mode
pnpm dev

# Build
pnpm build

# Test the CLI
pnpm start

Project Structure

hyntx/
├── src/
│   ├── index.ts              # Library entry point (re-exports api/)
│   ├── cli.ts                # CLI entry point
│   ├── api/
│   │   └── index.ts          # Public API surface
│   ├── core/                 # Core business logic
│   │   ├── setup.ts         # Interactive setup (multi-provider)
│   │   ├── reminder.ts      # Reminder system
│   │   ├── log-reader.ts    # Log parsing
│   │   ├── schema-validator.ts # Log schema validation
│   │   ├── sanitizer.ts     # Secret redaction
│   │   ├── analyzer.ts      # Analysis orchestration + batching
│   │   ├── reporter.ts      # Output formatting (Before/After)
│   │   ├── watcher.ts       # Real-time log file monitoring
│   │   └── history.ts       # Analysis history management
│   ├── providers/            # AI providers
│   │   ├── base.ts          # Interface & prompts
│   │   ├── ollama.ts        # Ollama integration
│   │   ├── anthropic.ts     # Claude integration
│   │   ├── google.ts        # Gemini integration
│   │   └── index.ts         # Provider factory with fallback
│   ├── utils/               # Utility functions
│   │   ├── env.ts           # Environment config
│   │   ├── shell-config.ts  # Shell auto-configuration
│   │   ├── paths.ts         # System path constants
│   │   ├── logger-base.ts   # Base logger (no CLI deps)
│   │   ├── logger.ts        # CLI logger (with chalk)
│   │   └── terminal.ts      # Terminal utilities
│   └── types/
│       └── index.ts         # TypeScript type definitions
├── docs/
│   └── SPECS.md             # Technical specifications
└── package.json

Contributing

Contributions are welcome! Please:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes using Conventional Commits
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

Roadmap

For the detailed development roadmap, planned features, and implementation status, see GitHub Issues and GitHub Projects.

License

MIT License - see LICENSE file for details.

Acknowledgments

  • Built for Claude Code users
  • Inspired by retrospective practices in Agile development
  • Privacy-first approach inspired by the local-first software movement


Made with ❤️ for better prompt engineering