Agent Trace CLI
A command-line tool for monitoring LangChain agents in real-time with the AXON dashboard.
Installation
Global Installation (Recommended)
npm install -g @axon-ai/cli
Local Installation
npm install @axon-ai/cli
npx axon-ai --help
Quick Start
Initialize Axon in your project:
axon-ai init --project my-ai-project
Start the dashboard:
axon-ai start
Add tracing to your LangChain agents:
import { createTracer } from '@axon-ai/langchain-tracer';
import { ChatOpenAI } from '@langchain/openai';
const tracer = createTracer({
  projectName: 'my-ai-project'
});
const model = new ChatOpenAI({
  modelName: 'gpt-3.5-turbo',
  callbacks: [tracer] // Add the tracer
});
Run your agents and watch them in real-time!
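Continuing the snippet above, a single traced call is usually enough to confirm the setup (a minimal sketch; the prompt text is only illustrative):
// One traced call; the run should show up in the dashboard shortly after it completes
const reply = await model.invoke('Say hello in one sentence.');
console.log(reply.content);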
Commands
axon-ai init
Initialize AXON in your current project.
axon-ai init [options]
Options:
- --project <name> - Project name (default: "default")
- --auto-start - Automatically start dashboard after initialization
Example:
axon-ai init --project my-ai-app --auto-start
axon-ai start
Start the AXON dashboard and enable tracing.
axon-ai start [options]
Options:
- -p, --port <port> - Backend server port (default: 3000)
- -d, --dashboard-port <port> - Dashboard port (default: 5173)
- --no-open - Don't automatically open dashboard in browser
- --project <name> - Project name for organizing traces
Example:
axon-ai start --port 3001 --dashboard-port 5174
axon-ai status
Check the status of AXON services.
axon-ai status
Shows:
- Project information
- Backend server status
- Dashboard status
- Quick action suggestions
axon-ai stop
Stop all AXON services.
axon-ai stop
axon-ai version
Show version information.
axon-ai version
Integration with LangChain
Basic Integration
import { createTracer } from '@axon-ai/langchain-tracer';
import { ChatOpenAI } from '@langchain/openai';
// Create tracer
const tracer = createTracer({
  projectName: 'my-project',
  endpoint: 'http://localhost:3000'
});
// Add to your model
const model = new ChatOpenAI({
  modelName: 'gpt-3.5-turbo',
  callbacks: [tracer]
});
Agent Integration
import { AgentExecutor, createOpenAIFunctionsAgent } from 'langchain/agents';
const agent = await createOpenAIFunctionsAgent({
  llm: model,
  tools: [searchTool, calculatorTool],
  prompt: agentPrompt
});
const agentExecutor = new AgentExecutor({
  agent,
  tools: [searchTool, calculatorTool],
  callbacks: [tracer] // Add tracer to executor too
});
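Running the executor with the tracer attached records the LLM steps and tool calls of the run. A short sketch, continuing the snippet above (the input text is illustrative and your tools/prompt will differ):
// Invoke the traced agent; intermediate steps are reported to the AXON backend
const result = await agentExecutor.invoke({
  input: 'What is 42 * 17? Use the calculator tool.'
});
console.log(result.output);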
Chain Integration
import { LLMChain } from 'langchain/chains';
const chain = new LLMChain({
  llm: model,
  prompt: myPrompt,
  callbacks: [tracer]
});
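Invoking the chain works the same way; each run is reported through the callback. A sketch, continuing the snippet above (the input key must match your prompt template's variable):
// "topic" is only an example input variable; LLMChain's default output key is "text"
const result = await chain.invoke({ topic: 'observability for LLM agents' });
console.log(result.text);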
Configuration
After running axon-ai init, a .axon-ai/config.json file is created:
{
  "project": "my-project",
  "version": "1.0.0",
  "initialized": "2024-01-15T10:30:00.000Z",
  "backend": {
    "port": 3000,
    "host": "localhost"
  },
  "dashboard": {
    "port": 5173,
    "host": "localhost"
  }
}
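If the backend runs on a non-default host or port, the tracer's endpoint option (shown in the integration example above) has to match. One way to keep them in sync is to read the generated config at startup; this is only a sketch under those assumptions, not part of the published API:
import { readFileSync } from 'node:fs';
import { createTracer } from '@axon-ai/langchain-tracer';
// Read the config written by axon-ai init and point the tracer at the configured backend
const config = JSON.parse(readFileSync('.axon-ai/config.json', 'utf8'));
const tracer = createTracer({
  projectName: config.project,
  endpoint: `http://${config.backend.host}:${config.backend.port}`
});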
Troubleshooting
Port Already in Use
If you get a "port already in use" error:
# Check what's using the port
lsof -i :3000
# Kill the process
kill -9 <PID>
# Or use different ports
axon-ai start --port 3001 --dashboard-port 5174
Services Not Starting
Check if ports are available:
axon-ai status
Stop all services and restart:
axon-ai stop
axon-ai start
Check logs in the terminal where you started the services
Dashboard Not Opening
If the dashboard doesn't open automatically:
- Check the status: axon-ai status
- Manually open http://localhost:5173 (or your configured port)
- Make sure the backend is running on the correct port
Development
Building from Source
git clone https://github.com/yourusername/langchain-tracer/Axon.git
cd axon-ai
npm install
npm run build:cli
Running in Development
cd packages/cli
npm run dev
License
MIT License - see LICENSE for details.
