
@axon-ai/cli v1.0.3

CLI tool for Axon - Monitor LangChain agents in real-time

Agent Trace CLI

A command-line tool for monitoring LangChain agents in real time with the AXON dashboard.

Installation

Global Installation (Recommended)

npm install -g @axon-ai/cli

Local Installation

npm install @axon-ai/cli
npx axon-ai --help

Quick Start

  1. Initialize Axon in your project:

    axon-ai init --project my-ai-project
  2. Start the dashboard:

    axon-ai start
  3. Add tracing to your LangChain agents:

    import { createTracer } from '@axon-ai/langchain-tracer';
    import { ChatOpenAI } from '@langchain/openai';

    const tracer = createTracer({
      projectName: 'my-ai-project'
    });

    const model = new ChatOpenAI({
      modelName: 'gpt-3.5-turbo',
      callbacks: [tracer] // Add the tracer
    });
  4. Run your agents and watch them in real time (see the example below).
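For example, invoking the traced model should make a trace appear in the dashboard. This is a minimal sketch, assuming the `model` from step 3 and an `OPENAI_API_KEY` in your environment:

// Invoke the traced model once; the tracer callback reports the call to AXON.
const response = await model.invoke('Summarize what AXON does in one sentence.');
console.log(response.content);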

Commands

axon-ai init

Initialize AXON in your current project.

axon-ai init [options]

Options:

  • --project <name> - Project name (default: "default")
  • --auto-start - Automatically start dashboard after initialization

Example:

axon-ai init --project my-ai-app --auto-start

axon-ai start

Start the AXON dashboard and enable tracing.

axon-ai start [options]

Options:

  • -p, --port <port> - Backend server port (default: 3000)
  • -d, --dashboard-port <port> - Dashboard port (default: 5173)
  • --no-open - Don't automatically open dashboard in browser
  • --project <name> - Project name for organizing traces

Example:

axon-ai start --port 3001 --dashboard-port 5174

axon-ai status

Check the status of AXON services.

axon-ai status

Shows:

  • Project information
  • Backend server status
  • Dashboard status
  • Quick action suggestions

axon-ai stop

Stop all AXON services.

axon-ai stop

axon-ai version

Show version information.

axon-ai version

Integration with LangChain

Basic Integration

import { createTracer } from '@axon-ai/langchain-tracer';
import { ChatOpenAI } from '@langchain/openai';

// Create tracer
const tracer = createTracer({
  projectName: 'my-project',
  endpoint: 'http://localhost:3000'
});

// Add to your model
const model = new ChatOpenAI({
  modelName: 'gpt-3.5-turbo',
  callbacks: [tracer]
});

Agent Integration

import { AgentExecutor, createOpenAIFunctionsAgent } from 'langchain/agents';

const agent = await createOpenAIFunctionsAgent({
  llm: model,
  tools: [searchTool, calculatorTool],
  prompt: agentPrompt
});

const agentExecutor = new AgentExecutor({
  agent,
  tools: [searchTool, calculatorTool],
  callbacks: [tracer] // Add tracer to executor too
});
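To generate traces from the executor above, invoke it with an input. A minimal sketch, assuming the agent prompt defines an `input` variable (and the `agent_scratchpad` placeholder that OpenAI functions agents require):

// Run the agent; the tracer reports each LLM call and tool invocation.
const result = await agentExecutor.invoke({
  input: 'What is 23 * 17? Use the calculator tool.'
});
console.log(result.output);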

Chain Integration

import { LLMChain } from 'langchain/chains';

const chain = new LLMChain({
  llm: model,
  prompt: myPrompt,
  callbacks: [tracer]
});
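Running the chain works the same way. A sketch, assuming `myPrompt` takes a single, hypothetical `topic` input variable:

// Each call to the chain is traced via the callbacks configured above.
const result = await chain.invoke({ topic: 'observability for LLM agents' });
console.log(result.text);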

Configuration

After running axon-ai init, a .axon-ai/config.json file is created:

{
  "project": "my-project",
  "version": "1.0.0",
  "initialized": "2024-01-15T10:30:00.000Z",
  "backend": {
    "port": 3000,
    "host": "localhost"
  },
  "dashboard": {
    "port": 5173,
    "host": "localhost"
  }
}
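The file is plain JSON, so your own scripts can read it, for example to point the tracer at the configured backend instead of hard-coding a URL. This is a sketch, assuming the config shape shown above and the `endpoint` option from the Basic Integration example:

import { readFileSync } from 'node:fs';
import { createTracer } from '@axon-ai/langchain-tracer';

// Build the tracer endpoint from the generated config.
const config = JSON.parse(readFileSync('.axon-ai/config.json', 'utf8'));

const tracer = createTracer({
  projectName: config.project,
  endpoint: `http://${config.backend.host}:${config.backend.port}`
});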

Troubleshooting

Port Already in Use

If you get a "port already in use" error:

# Check what's using the port
lsof -i :3000

# Kill the process
kill -9 <PID>

# Or use different ports
axon-ai start --port 3001 --dashboard-port 5174

Services Not Starting

  1. Check if ports are available:

    axon-ai status
  2. Stop all services and restart:

    axon-ai stop
    axon-ai start
  3. Check logs in the terminal where you started the services

Dashboard Not Opening

If the dashboard doesn't open automatically:

  1. Check the status: axon-ai status
  2. Manually open: http://localhost:5173 (or your configured port)
  3. Make sure the backend is running on the correct port

Development

Building from Source

git clone https://github.com/yourusername/axon-ai.git
cd axon-ai
npm install
npm run build:cli

Running in Development

cd packages/cli
npm run dev

License

MIT License - see LICENSE for details.