@nikhilayeturi23/rltool · v1.1.0 · 594 downloads

Production-ready React hook for RL-based optimization - works with any backend API (Cloudflare Workers, Next.js, Express)

Readme

@nikhilayeturi23/rltool

A ready-to-use React hook for AI-powered text optimization using reinforcement learning. No backend setup required!

What is this?

This package provides a React hook (useRLTool) that instantly adds Q-learning based optimization to your React app. Perfect for:

  • 📧 Email tone conversion - Casual → Professional
  • 📝 Content generation - Marketing copy, blog posts
  • 🔧 Data normalization - Clean and standardize data
  • 💬 Customer support - Response templates
  • 📊 Text transformation - Any text optimization task

Zero backend setup required: the hook uses our hosted RL optimization API, powered by Cloudflare Workers and OpenAI.
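
If you want to run your own backend instead (the description mentions Cloudflare Workers, Next.js and Express), the "No apiEndpoint needed!" comment in Example 1 suggests the hook accepts an optional apiEndpoint. The option name and URL below are assumptions for illustration; verify them against the package's exported types:

import { useRLTool } from '@nikhilayeturi23/rltool';

function Optimizer() {
  // With no options, requests go to the hosted API.
  // The apiEndpoint option is an assumption based on the "No apiEndpoint needed!"
  // comment in Example 1; the URL is purely illustrative.
  const { optimize, loading, result } = useRLTool({
    apiEndpoint: 'https://your-api.example.com/optimize',
  });

  return null; // see the Quick Start below for a full component
}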

📚 Documentation

  • README.md (this file) - Complete API reference
  • CHANGELOG.md - Version history

For more examples, visit the GitHub repository.

Installation

npm install @nikhilayeturi23/rltool

Quick Start - 2 Minutes!

1. Install the package

npm install @nikhilayeturi23/rltool

2. Use in your React component

import { useRLTool } from '@nikhilayeturi23/rltool';

function EmailOptimizer() {
  const { optimize, loading, result } = useRLTool();

  const handleOptimize = async () => {
    await optimize({
      userQuery: "hey send me that file yo",
      objective: {
        goal: "Make professional and polite",
        constraints: {
          mustInclude: ["please", "thank you"],
          mustAvoid: ["yo", "hey", "slang"],
          tone: "professional"
        }
      },
      useCase: "email"
    });
  };

  return (
    <div>
      <button onClick={handleOptimize} disabled={loading}>
        {loading ? "Optimizing..." : "Make Professional"}
      </button>

      {result && (
        <div>
          <p><strong>Original:</strong> hey send me that file yo</p>
          <p><strong>Optimized:</strong> {result.optimizedOutput}</p>
          <small>Iterations: {result.iterations} | Reward: {result.finalReward}</small>
        </div>
      )}
    </div>
  );
}

API Reference

Hook: useRLTool(options)
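
A formal reference table is not included here, so the summary below is pieced together from how the hook is used in the examples in this README; the exported TypeScript types (see below) are the authoritative source.

// Sketch of the hook's surface as used in this README - not an authoritative definition.
const { optimize, loading, result } = useRLTool();

// optimize(input) - awaitable; runs the RL optimization against the API and populates `result`.
//                   `input` is an OptimizationInput: { userQuery, objective, useCase }.
// loading         - boolean, true while a request is in flight.
// result          - the latest RLOptimizationResult; the examples read
//                   result.optimizedOutput, result.iterations and result.finalReward.
// options         - the examples call useRLTool() with no arguments (hosted API).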

Real-World Examples

Example 1: Email Tone Optimizer

import { useState } from 'react';
import { useRLTool } from '@nikhilayeturi23/rltool';

function EmailOptimizer() {
  const [input, setInput] = useState("");
  
  // No apiEndpoint needed!
  const { optimize, loading, result } = useRLTool();

  const makeProf = () => optimize({
    userQuery: input,
    objective: {
      goal: "Make it more optimized",
      constraints: {
        mustInclude: ["greeting", "signature"],
        mustAvoid: ["slang", "abbreviations"],
        tone: "professional"
      }
    },
    useCase: "text"
  });

  return (
    <div>
      <textarea value={input} onChange={(e) => setInput(e.target.value)} />
      <button onClick={makeProf} disabled={loading}>
        Make Professional
      </button>
      {result && <p>{result.optimizedOutput}</p>}
    </div>
  );
}

Example 2: Data Normalization

// Inside a React component - hooks can only be called from components or other hooks
const { optimize } = useRLTool(); // Uses hosted API

const normalizeData = async (rawData: string) => {
  await optimize({
    userQuery: rawData,
    objective: {
      intent: "Normalize user data for consistency",
      domain: "data",
      constraints: {
        mustInclude: ["consistent casing"],
        mustAvoid: ["duplicate entries", "mixed types"],
        tone: "technical"
      }
    },
    useCase: "data"
  });
};

Use Cases

This hook works for any text or data optimization task:

  • Email Optimization: Casual → Professional, Angry → Polite
  • Content Generation: Blog posts, product descriptions, social media
  • Data Normalization: Clean inconsistent data, format standardization
  • Code Documentation: Generate comments, API docs
  • Code Optimization: Refactoring suggestions, performance improvements
  • Query Optimization: SQL, GraphQL, search queries
  • Configuration Tuning: Finding optimal settings/parameters
  • Marketing Copy: Headlines, CTAs, landing pages
  • Customer Support: Response templates, help articles
  • Creative Writing: Story generation, dialogue improvement
  • Translation & Localization: Language translation with style constraints, adapting content for audiences

TypeScript Types

All types are exported from the main package:

import { 
  RLOptimizationResult, 
  RLProgressLog, 
  RLObjective,
  OptimizationInput 
} from '@nikhilayeturi23/rltool';
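
Their exact definitions ship with the package; the shapes below are only inferred from how the examples in this README use them, so treat them as a sketch rather than the real declarations:

// Rough shapes inferred from the examples above - check the package's type
// declarations for the authoritative definitions; anything not shown in the
// examples is a guess.
interface RLObjective {
  goal?: string;          // used in the Quick Start and Example 1
  intent?: string;        // used in Example 2
  domain?: string;        // used in Example 2
  constraints?: {
    mustInclude?: string[];
    mustAvoid?: string[];
    tone?: string;
  };
}

interface OptimizationInput {
  userQuery: string;      // the text to optimize
  objective: RLObjective;
  useCase: string;        // e.g. "email", "text", "data"
}

interface RLOptimizationResult {
  optimizedOutput: string; // the optimized text
  iterations: number;      // optimization steps that ran
  finalReward: number;     // reward score of the final output
}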

Q-Learning Behind the Scenes

The hosted API uses these Q-learning parameters:

  • Learning rate (α): 0.1 - How much new information overrides old
  • Discount factor (γ): 0.95 - Importance of future rewards
  • Epsilon: 0.2 (decays to 0.01) - Exploration vs exploitation
  • Max iterations: 6 - Maximum optimization attempts
  • Convergence threshold: 70 - Reward score to stop early
  • Actions: refine_clarity, adjust_tone, add_details, simplify, restructure
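
These values describe a standard tabular Q-learning loop: ε-greedy action selection, a Bellman-style update, and an early stop once the reward reaches the convergence threshold. The sketch below is illustrative only; it is not the hosted API's source, and applyAndScore is a hypothetical stand-in for the LLM call that applies an action and scores the result:

// Illustrative tabular Q-learning loop using the parameters above - not the hosted API's code.
type Action = 'refine_clarity' | 'adjust_tone' | 'add_details' | 'simplify' | 'restructure';
const ACTIONS: Action[] = ['refine_clarity', 'adjust_tone', 'add_details', 'simplify', 'restructure'];

const ALPHA = 0.1;                 // learning rate
const GAMMA = 0.95;                // discount factor
const MAX_ITERATIONS = 6;
const CONVERGENCE_THRESHOLD = 70;  // stop early once the reward reaches this score
let epsilon = 0.2;                 // exploration rate, decays toward 0.01

const qTable = new Map<string, Map<Action, number>>(); // state -> action -> Q value

function qValues(state: string): Map<Action, number> {
  let q = qTable.get(state);
  if (!q) {
    q = new Map<Action, number>();
    for (const a of ACTIONS) q.set(a, 0);
    qTable.set(state, q);
  }
  return q;
}

function chooseAction(state: string): Action {
  // Epsilon-greedy: explore with probability epsilon, otherwise take the best known action.
  if (Math.random() < epsilon) return ACTIONS[Math.floor(Math.random() * ACTIONS.length)];
  let best: Action = ACTIONS[0];
  for (const [a, q] of qValues(state)) if (q > qValues(state).get(best)!) best = a;
  return best;
}

function updateQ(state: string, action: Action, reward: number, nextState: string): void {
  // Q(s,a) <- Q(s,a) + alpha * (reward + gamma * max_a' Q(s',a') - Q(s,a))
  const q = qValues(state);
  const maxNext = Math.max(...Array.from(qValues(nextState).values()));
  q.set(action, q.get(action)! + ALPHA * (reward + GAMMA * maxNext - q.get(action)!));
}

async function optimizeOnce(
  input: string,
  applyAndScore: (text: string, action: Action) => Promise<{ next: string; reward: number }>,
): Promise<string> {
  let state = input;
  for (let i = 0; i < MAX_ITERATIONS; i++) {
    const action = chooseAction(state);
    const { next, reward } = await applyAndScore(state, action);
    updateQ(state, action, reward, next);
    epsilon = Math.max(0.01, epsilon * 0.7); // decay exploration toward 0.01
    state = next;
    if (reward >= CONVERGENCE_THRESHOLD) break;
  }
  return state;
}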

Contributing

Found a bug or want to contribute? Open an issue or PR on GitHub.

Links