@nikhilayeturi23/rltool
A production-ready React hook for AI-powered text optimization using reinforcement learning. No backend setup is required, and it works with any backend API (Cloudflare Workers, Next.js, Express) if you prefer to run your own.
What is this?
This package provides a React hook (useRLTool) that instantly adds Q-learning based optimization to your React app. Perfect for:
- 📧 Email tone conversion - Casual → Professional
- 📝 Content generation - Marketing copy, blog posts
- 🔧 Data normalization - Clean and standardize data
- 💬 Customer support - Response templates
- 📊 Text transformation - Any text optimization task
Zero backend setup required - uses our hosted RL optimization API powered by Cloudflare Workers and OpenAI
📚 Documentation
- README.md (this file) - Complete API reference
- CHANGELOG.md - Version history
For more examples, visit the GitHub repository.
Installation
npm install @nikhilayeturi23/rltool
Quick Start - 2 Minutes!
1. Install the package
npm install @nikhilayeturi23/rltool
2. Use in your React component
import { useRLTool } from '@nikhilayeturi23/rltool';

function EmailOptimizer() {
  const { optimize, loading, result } = useRLTool();

  const handleOptimize = async () => {
    await optimize({
      userQuery: "hey send me that file yo",
      objective: {
        goal: "Make professional and polite",
        constraints: {
          mustInclude: ["please", "thank you"],
          mustAvoid: ["yo", "hey", "slang"],
          tone: "professional"
        }
      },
      useCase: "email"
    });
  };

  return (
    <div>
      <button onClick={handleOptimize} disabled={loading}>
        {loading ? "Optimizing..." : "Make Professional"}
      </button>
      {result && (
        <div>
          <p><strong>Original:</strong> hey send me that file yo</p>
          <p><strong>Optimized:</strong> {result.optimizedOutput}</p>
          <small>Iterations: {result.iterations} | Reward: {result.finalReward}</small>
        </div>
      )}
    </div>
  );
}
API Reference
Hook: useRLTool(options)
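No formal signature is spelled out in this README, so the following TypeScript sketch is inferred from the examples below. Treat every field as an assumption rather than the package's authoritative definition: the optional apiEndpoint override is only hinted at by the "No apiEndpoint needed!" comment in Example 1, and RLProgressLog is not sketched because nothing in this README shows its shape.

// Inferred shapes - assumptions based on this README's examples.
interface RLObjective {
  goal: string;                 // e.g. "Make professional and polite"
  constraints?: {
    mustInclude?: string[];     // phrases the output should contain
    mustAvoid?: string[];       // phrases the output should avoid
    tone?: string;              // e.g. "professional", "technical"
  };
}

interface OptimizationInput {
  userQuery: string;            // the raw text to optimize
  objective: RLObjective;       // what "better" means for this task
  useCase: string;              // e.g. "email", "text", "data"
}

interface RLOptimizationResult {
  optimizedOutput: string;      // the optimized text
  iterations: number;           // optimization attempts used
  finalReward: number;          // reward score of the final output
}

// Assumed hook shape (the apiEndpoint override is a guess from Example 1):
// useRLTool(options?: { apiEndpoint?: string })
// returns { optimize, loading, result } where
//   optimize(input: OptimizationInput): Promise<void>
//   loading: boolean
//   result: RLOptimizationResult | null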
Real-World Examples
Example 1: Email Tone Optimizer
import { useState } from 'react';
import { useRLTool } from '@nikhilayeturi23/rltool';

function EmailOptimizer() {
  const [input, setInput] = useState("");
  // No apiEndpoint needed!
  const { optimize, loading, result } = useRLTool();

  const makeProf = () => optimize({
    userQuery: input,
    objective: {
      goal: "Make the email professional and polite",
      constraints: {
        mustInclude: ["greeting", "signature"],
        mustAvoid: ["slang", "abbreviations"],
        tone: "professional"
      }
    },
    useCase: "text"
  });

  return (
    <div>
      <textarea value={input} onChange={(e) => setInput(e.target.value)} />
      <button onClick={makeProf} disabled={loading}>
        Make Professional
      </button>
      {result && <p>{result.optimizedOutput}</p>}
    </div>
  );
}
Example 2: Data Normalization
const { optimize } = useRLTool(); // Uses hosted API

const normalizeData = async (rawData: string) => {
  await optimize({
    userQuery: rawData,
    objective: {
      goal: "Normalize user data for consistency",
      constraints: {
        mustInclude: ["consistent casing"],
        mustAvoid: ["duplicate entries", "mixed types"],
        tone: "technical"
      }
    },
    useCase: "data"
  });
};
Use Cases
This hook works for any text optimization task:
- ✅ Email Optimization: Casual → Professional, Angry → Polite
- ✅ Content Generation: Blog posts, product descriptions, social media
- ✅ Data Normalization: Clean inconsistent data, format standardization
- ✅ Code Documentation: Generate comments, API docs
- ✅ Code Optimization: Refactoring suggestions, performance improvements
- ✅ Marketing Copy: Headlines, CTAs, landing pages
- ✅ Customer Support: Response templates, help articles
- ✅ Query Optimization: SQL, GraphQL, search queries
- ✅ Configuration Tuning: Finding optimal settings/parameters
- ✅ Creative Writing: Story generation, dialogue improvement
- ✅ Translation & Localization: Adapt content for audiences with style constraints
TypeScript Types
All types are exported from the main package:
import {
RLOptimizationResult,
RLProgressLog,
RLObjective,
OptimizationInput
} from '@nikhilayeturi23/rltool';
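A brief hedged example of the types in use, assuming the shapes sketched under API Reference (which are themselves inferred from this README's examples):

// Annotating a helper with the exported types (shapes assumed per the API sketch).
import { OptimizationInput, RLOptimizationResult } from '@nikhilayeturi23/rltool';

function logOptimization(input: OptimizationInput, result: RLOptimizationResult): void {
  console.log(`Original:   ${input.userQuery}`);
  console.log(`Optimized:  ${result.optimizedOutput}`);
  console.log(`Iterations: ${result.iterations} | Reward: ${result.finalReward}`);
}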
Q-Learning Behind the Scenes
The hosted API uses these Q-learning parameters (a minimal update sketch follows the list):
- Learning rate (α): 0.1 - How much new information overrides old
- Discount factor (γ): 0.95 - Importance of future rewards
- Epsilon: 0.2 (decays to 0.01) - Exploration vs exploitation
- Max iterations: 6 - Maximum optimization attempts
- Convergence threshold: 70 - Stops early once the reward score reaches this value
- Actions: refine_clarity, adjust_tone, add_details, simplify, restructure
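To make the parameters concrete, here is a minimal Q-learning sketch in TypeScript using the values above. It is illustrative only: the hosted API's actual state encoding, reward source, and epsilon decay rate are not published, so those parts are assumptions.

// Illustrative Q-learning step with the documented parameters.
// State keys, reward plumbing, and the decay rate are assumptions,
// not the hosted API's internals.
type Action = "refine_clarity" | "adjust_tone" | "add_details" | "simplify" | "restructure";
const ACTIONS: Action[] = ["refine_clarity", "adjust_tone", "add_details", "simplify", "restructure"];

const ALPHA = 0.1;         // learning rate (α)
const GAMMA = 0.95;        // discount factor (γ)
let epsilon = 0.2;         // exploration rate, decays toward 0.01
const EPSILON_MIN = 0.01;
const EPSILON_DECAY = 0.9; // assumed decay rate, not documented

const qTable = new Map<string, number>(); // keyed by `${state}|${action}`

// ε-greedy selection: explore with probability ε, otherwise exploit.
function chooseAction(state: string): Action {
  if (Math.random() < epsilon) {
    return ACTIONS[Math.floor(Math.random() * ACTIONS.length)];
  }
  return ACTIONS.reduce((best, a) =>
    (qTable.get(`${state}|${a}`) ?? 0) > (qTable.get(`${state}|${best}`) ?? 0) ? a : best
  );
}

// Q(s,a) ← Q(s,a) + α [ r + γ · max_a' Q(s',a') − Q(s,a) ]
function updateQ(state: string, action: Action, reward: number, nextState: string): void {
  const key = `${state}|${action}`;
  const current = qTable.get(key) ?? 0;
  const maxNext = Math.max(...ACTIONS.map((a) => qTable.get(`${nextState}|${a}`) ?? 0));
  qTable.set(key, current + ALPHA * (reward + GAMMA * maxNext - current));
  epsilon = Math.max(EPSILON_MIN, epsilon * EPSILON_DECAY); // decay exploration
}

With the documented settings, such a loop would stop as soon as a reward of 70 or more is observed, or after 6 iterations, whichever comes first.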
Contributing
Found a bug or want to contribute? Open an issue or PR on GitHub.
