
otherone-agent

v0.2.0

Published

A lightweight, extensible AI Agent framework built with Node.js and TypeScript


This project is dedicated to my dearest. She loves sunflowers 🌻

🎯 Vision

otherone-agent is not just another AI framework. It's a paradigm shift in how developers build intelligent agents.

We believe AI agent development should be:

  • Simple - 8 lines to production
  • Powerful - Enterprise-grade features out of the box
  • Extensible - Plugin architecture for unlimited possibilities
  • Efficient - Intelligent context management can cut token costs by up to 80%

The Problem

Current AI frameworks force you to choose between simplicity and power. You either get a toy example that doesn't scale, or a complex enterprise solution that takes weeks to understand.

The Solution

otherone-agent gives you both. Start with 8 lines of code, scale to millions of users.

📦 Installation

npm install otherone-agent

🚀 Quick Start

💡 AI Quick Development Tip: You can send this prompt to an AI assistant for rapid development:

"Read this link: https://github.com/wuyoujae/otherone-agent, please help me quickly develop a conversational agent with webui using otherone-agent"

Basic Usage

import { veloca } from 'otherone-agent';

// Create a new conversation
const sessionId = veloca.CreateNewSession();

// First turn
await veloca.InvokeAgent(
    { sessionId, contextLoadType: 'localfile', contextWindow: 128000 },
    {
        provider: 'openai',
        apiKey: process.env.OPENAI_API_KEY,
        baseUrl: 'https://api.openai.com/v1',
        model: 'gpt-4o-mini',
        userPrompt: 'What is 2+2?',
        stream: true
    }
);

// Second turn - automatically loads history
const response = await veloca.InvokeAgent(
    { sessionId, contextLoadType: 'localfile', contextWindow: 128000 },
    {
        provider: 'openai',
        apiKey: process.env.OPENAI_API_KEY,
        baseUrl: 'https://api.openai.com/v1',
        model: 'gpt-4o-mini',
        userPrompt: 'Multiply that by 3',
        stream: true
    }
);

console.log(response.content); // "12"

Usage Example

With Tools

const tools = [{
    type: 'function',
    function: {
        name: 'get_weather',
        description: 'Get current weather',
        parameters: {
            type: 'object',
            properties: {
                location: { type: 'string' }
            }
        }
    }
}];

const tools_realize = {
    get_weather: async (location: string) => {
        return `Weather in ${location}: Sunny, 72°F`;
    }
};

const response = await veloca.InvokeAgent(
    { sessionId, contextLoadType: 'localfile', contextWindow: 128000 },
    {
        provider: 'openai',
        apiKey: process.env.OPENAI_API_KEY,
        baseUrl: 'https://api.openai.com/v1',
        model: 'gpt-4o-mini',
        userPrompt: 'What is the weather in San Francisco?',
        tools,
        tools_realize,
        stream: true
    }
);

That's it. You now have:

  • ✅ Multi-turn conversation memory
  • ✅ Automatic context management
  • ✅ Streaming responses
  • ✅ Tool calling support
  • ✅ Intelligent context compression
  • ✅ Production-ready persistence

📚 Advanced Features

Context Compression

Veloca automatically compresses conversation history when approaching token limits:

const response = await veloca.InvokeAgent(
    {
        sessionId,
        contextLoadType: 'localfile',
        contextWindow: 128000,
        thresholdPercentage: 0.8  // Compress at 80% capacity
    },
    {
        provider: 'openai',
        apiKey: process.env.OPENAI_API_KEY,
        baseUrl: 'https://api.openai.com/v1',
        model: 'gpt-4o-mini',
        userPrompt: 'Continue our conversation...',
        // Compression LLM config (optional)
        compact_llm_model: 'gpt-4o-mini',
        compact_llm_temperature: 0.3,
        stream: true
    }
);

Custom Storage

// Read session data
const sessionData = veloca.ReadSessionData(sessionId);

// Get all sessions
const allSessions = veloca.GetAllSessions();

// Manual entry writing
veloca.WriteEntry({
    storageType: 'localfile',
    sessionId,
    role: 'user',
    content: 'Custom message'
});

🔥 Core Features

🧠 Smart Context Management

  • Automatic Compression: Summarizes conversation history when approaching token limits
  • Token Estimation: Built-in token counting to help you stay within limits
  • Configurable Thresholds: Set when compression should trigger (default 80%)
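The trigger condition above can be sketched as a simple comparison. This is an illustrative sketch, not the library's internal code: it assumes compression fires once an estimated token count reaches contextWindow * thresholdPercentage, and the `estimatedTokens` input is a hypothetical value you would obtain from the framework's token counting.

```typescript
// Sketch: when compression triggers, assuming the framework compares an
// estimated token count against contextWindow * thresholdPercentage.
// `estimatedTokens` is a hypothetical input, not a library call.
function shouldCompress(
    estimatedTokens: number,
    contextWindow: number,
    thresholdPercentage = 0.8
): boolean {
    return estimatedTokens >= contextWindow * thresholdPercentage;
}

console.log(shouldCompress(100_000, 128_000)); // 100k < 102.4k  -> false
console.log(shouldCompress(110_000, 128_000)); // 110k >= 102.4k -> true
```

With the default 0.8 threshold and a 128k window, compression would kick in at 102,400 estimated tokens.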

🔄 Multi-Provider Ready

  • OpenAI: Full support with streaming
  • Anthropic: Coming soon
  • Custom APIs: Extensible architecture for your own LLM
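One practical consequence of the provider fields shown in the examples above (provider, apiKey, baseUrl, model) is that you can factor them into a single config object, so swapping providers becomes a one-line change. The sketch below assumes an OpenAI-compatible endpoint as in the README; the `ProviderConfig` interface is my own naming, not part of the library.

```typescript
// Sketch: bundling the provider fields from the InvokeAgent examples into
// one reusable object. `ProviderConfig` is a hypothetical local type.
interface ProviderConfig {
    provider: string;
    apiKey: string | undefined;
    baseUrl: string;
    model: string;
}

const openai: ProviderConfig = {
    provider: 'openai',
    apiKey: process.env.OPENAI_API_KEY,
    baseUrl: 'https://api.openai.com/v1',
    model: 'gpt-4o-mini',
};

// Spread into the second InvokeAgent argument alongside userPrompt, etc.
const request = { ...openai, userPrompt: 'Hello!', stream: true };
console.log(request.model); // "gpt-4o-mini"
```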

🛠️ Simple Tool Calling

  • Easy Definition: Define your tools, we handle the execution loop
  • Type Safe: Full TypeScript support for better DX
  • Error Handling: Built-in retry and error management

💾 Zero-Config Storage

  • Local File: JSON-based storage, no setup required
  • Session Management: UUID-based conversation tracking
  • History Tracking: Complete audit trail of interactions
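Since the local store is JSON-based, a stored entry is plain data that round-trips through JSON. The shape below is only a plausible sketch inferred from the WriteEntry fields shown earlier (role, content); the timestamp field and the exact on-disk layout are assumptions, not the library's documented format.

```typescript
// Sketch only: a plausible shape for one stored entry, inferred from the
// WriteEntry example. `timestamp` is assumed; the real store may differ.
interface SessionEntry {
    role: 'user' | 'assistant' | 'system' | 'tool';
    content: string;
    timestamp?: string; // assumed field
}

const entry: SessionEntry = {
    role: 'user',
    content: 'Custom message',
    timestamp: new Date().toISOString(),
};

// JSON-based local storage means entries survive serialization unchanged:
const roundTripped: SessionEntry = JSON.parse(JSON.stringify(entry));
console.log(roundTripped.content); // "Custom message"
```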

🏗️ Why otherone-agent?

Lightweight: No heavy dependencies, just the essentials you need.

Developer-Friendly: Sensible defaults mean you can start with minimal configuration.

Modular: Use only what you need - token estimation, context management, or the full agent loop.

Transparent: Simple, readable code. No magic, no surprises.

✨ Features

  • 🚀 Support for streaming and non-streaming responses
  • 🔧 Automatic tool loop processing
  • 💾 Flexible context management and compression
  • 📦 Modular design, easy to extend
  • 🔌 Support for multiple AI providers (OpenAI, Anthropic, Fetch)

🎯 Roadmap

✅ Completed

  • Core agent loop
  • OpenAI integration
  • Context management
  • Tool calling
  • Local file storage
  • Streaming support

🚧 In Progress

  • MCP server integration
  • Skills system
  • Web UI

📋 Planned

  • More provider support (Anthropic Claude, etc.)
  • Database storage adapter
  • Advanced caching strategies
  • Plugin marketplace
  • ...and more!

📄 License

MIT