ai-usage-metrics-mcp
v1.0.0
MCP server for tracking AI usage metrics and structured logs
AI Usage Metrics MCP Server
Track AI usage metrics and structured logs across all your applications
Quick Start • One-Click Install • Documentation • Contributing
A Model Context Protocol (MCP) server for tracking AI usage metrics and structured logs across your applications. Monitor model calls, analyze usage patterns, track costs, and debug AI interactions with a clean, extensible architecture.
Table of Contents
- Overview
- Features
- Quick Start
- One-Click Install
- Manual Installation
- Usage
- API Reference
- Data Model
- Architecture
- Extending the Server
- Development
- Testing
- License
Overview
This MCP server provides a centralized way to track and analyze AI model usage across your applications. Whether you're building chatbots, RAG systems, or autonomous agents, this server helps you:
- Track every model call with full context (inputs, outputs, metadata)
- Analyze usage patterns across projects, environments, and users
- Monitor costs via token counting and aggregation
- Debug interactions by replaying sessions and conversations
- Ensure safety by logging safety check results
The server implements the Model Context Protocol specification, making it compatible with Claude, Cursor, Windsurf, and any MCP-enabled AI assistant or agent framework.
Features
| Feature | Description |
|---------|-------------|
| Comprehensive Logging | Log model calls with full message history, token counts, latency, and custom metrics |
| Session Tracking | Group related calls into sessions for conversation replay and analysis |
| Multi-Environment | Track usage across dev, staging, and production environments |
| Flexible Filtering | Search and filter by project, environment, user, model, and date range |
| Real-time Aggregation | Get instant metrics on call counts, token usage, and latency |
| Safety Monitoring | Track safety check results (passed, flagged, blocked) |
| RAG Support | Log retrieved context with source attribution |
| Extensible Storage | Clean interface for swapping storage backends (in-memory, PostgreSQL, etc.) |
🚀 Quick Start
# Clone and install
git clone https://github.com/your-org/ai-usage-metrics-mcp.git
cd ai-usage-metrics-mcp
npm install && npm run build
Then add to your AI platform using the one-click configurations below.
📦 One-Click Install
Choose your AI platform and copy the configuration. Replace /path/to/ai-usage-metrics-mcp with your actual installation path.
Claude Desktop
Config file (macOS): ~/Library/Application Support/Claude/claude_desktop_config.json
{
"mcpServers": {
"ai-usage-metrics": {
"command": "node",
"args": ["/path/to/ai-usage-metrics-mcp/dist/index.js"]
}
}
}
One-liner setup:
# Create config directory if needed
mkdir -p ~/Library/Application\ Support/Claude
# Add MCP server (creates or updates config)
# Unquoted EOF so $HOME expands at write time
cat > ~/Library/Application\ Support/Claude/claude_desktop_config.json << EOF
{
"mcpServers": {
"ai-usage-metrics": {
"command": "node",
"args": ["$HOME/ai-usage-metrics-mcp/dist/index.js"]
}
}
}
EOF
On Windows, the config file is %APPDATA%\Claude\claude_desktop_config.json:
{
"mcpServers": {
"ai-usage-metrics": {
"command": "node",
"args": ["C:\\path\\to\\ai-usage-metrics-mcp\\dist\\index.js"]
}
}
}
PowerShell one-liner:
# Create config directory if needed
New-Item -ItemType Directory -Force -Path "$env:APPDATA\Claude"
# Create config file
@'
{
"mcpServers": {
"ai-usage-metrics": {
"command": "node",
"args": ["C:\\ai-usage-metrics-mcp\\dist\\index.js"]
}
}
}
'@ | Out-File -FilePath "$env:APPDATA\Claude\claude_desktop_config.json" -Encoding UTF8
On Linux, the config file is ~/.config/Claude/claude_desktop_config.json:
{
"mcpServers": {
"ai-usage-metrics": {
"command": "node",
"args": ["/path/to/ai-usage-metrics-mcp/dist/index.js"]
}
}
}
One-liner setup:
mkdir -p ~/.config/Claude
cat > ~/.config/Claude/claude_desktop_config.json << EOF
{
"mcpServers": {
"ai-usage-metrics": {
"command": "node",
"args": ["$HOME/ai-usage-metrics-mcp/dist/index.js"]
}
}
}
EOF
Claude Code CLI
Config file: ~/.claude/settings.json
{
"mcpServers": {
"ai-usage-metrics": {
"command": "node",
"args": ["/path/to/ai-usage-metrics-mcp/dist/index.js"]
}
}
}
One-liner setup:
mkdir -p ~/.claude
cat > ~/.claude/settings.json << EOF
{
"mcpServers": {
"ai-usage-metrics": {
"command": "node",
"args": ["$HOME/ai-usage-metrics-mcp/dist/index.js"]
}
}
}
EOF
Or use the Claude Code command:
claude mcp add ai-usage-metrics node /path/to/ai-usage-metrics-mcp/dist/index.js
Cursor
Config file: ~/.cursor/mcp.json
{
"mcpServers": {
"ai-usage-metrics": {
"command": "node",
"args": ["/path/to/ai-usage-metrics-mcp/dist/index.js"]
}
}
}
One-liner setup (macOS/Linux):
mkdir -p ~/.cursor
cat > ~/.cursor/mcp.json << EOF
{
"mcpServers": {
"ai-usage-metrics": {
"command": "node",
"args": ["$HOME/ai-usage-metrics-mcp/dist/index.js"]
}
}
}
EOF
Config file: .cursor/mcp.json (in your project root)
{
"mcpServers": {
"ai-usage-metrics": {
"command": "node",
"args": ["/path/to/ai-usage-metrics-mcp/dist/index.js"]
}
}
}
- Open Cursor Settings (Cmd+, / Ctrl+,)
- Search for "MCP"
- Click "Edit in settings.json"
- Add the server configuration:
{
"mcp.servers": {
"ai-usage-metrics": {
"command": "node",
"args": ["/path/to/ai-usage-metrics-mcp/dist/index.js"]
}
}
}
Windsurf
Config file: ~/.codeium/windsurf/mcp_config.json
{
"mcpServers": {
"ai-usage-metrics": {
"command": "node",
"args": ["/path/to/ai-usage-metrics-mcp/dist/index.js"]
}
}
}
One-liner setup (macOS/Linux):
mkdir -p ~/.codeium/windsurf
cat > ~/.codeium/windsurf/mcp_config.json << EOF
{
"mcpServers": {
"ai-usage-metrics": {
"command": "node",
"args": ["$HOME/ai-usage-metrics-mcp/dist/index.js"]
}
}
}
EOF
- Open Windsurf
- Navigate to Cascade → Settings (hammer icon)
- Click "Add Server" or "Configure"
- Add the server with:
  - Name: ai-usage-metrics
  - Command: node
  - Args: /path/to/ai-usage-metrics-mcp/dist/index.js
VS Code + Continue
Config file: ~/.continue/config.json
{
"experimental": {
"modelContextProtocolServers": [
{
"transport": {
"type": "stdio",
"command": "node",
"args": ["/path/to/ai-usage-metrics-mcp/dist/index.js"]
}
}
]
}
}
One-liner setup:
mkdir -p ~/.continue
# If config.json doesn't exist, create it
cat > ~/.continue/config.json << EOF
{
"experimental": {
"modelContextProtocolServers": [
{
"transport": {
"type": "stdio",
"command": "node",
"args": ["$HOME/ai-usage-metrics-mcp/dist/index.js"]
}
}
]
}
}
EOF
Or create .continue/config.json in your project root:
{
"experimental": {
"modelContextProtocolServers": [
{
"transport": {
"type": "stdio",
"command": "node",
"args": ["./node_modules/ai-usage-metrics-mcp/dist/index.js"]
}
}
]
}
}
Cline
Config file: VS Code Settings (settings.json)
{
"cline.mcpServers": {
"ai-usage-metrics": {
"command": "node",
"args": ["/path/to/ai-usage-metrics-mcp/dist/index.js"],
"disabled": false
}
}
}
- Open VS Code
- Click the Cline icon in the sidebar
- Click the MCP Servers icon (server stack)
- Click "Add MCP Server"
- Select "Local (stdio)"
- Enter the configuration:
  - Name: ai-usage-metrics
  - Command: node /path/to/ai-usage-metrics-mcp/dist/index.js
Zed
Config file: ~/.config/zed/settings.json
{
"context_servers": {
"ai-usage-metrics": {
"command": {
"path": "node",
"args": ["/path/to/ai-usage-metrics-mcp/dist/index.js"]
}
}
}
}
One-liner setup:
# Note: Merge with existing settings if you have other configurations
cat > ~/.config/zed/settings.json << EOF
{
"context_servers": {
"ai-usage-metrics": {
"command": {
"path": "node",
"args": ["$HOME/ai-usage-metrics-mcp/dist/index.js"]
}
}
}
}
EOF
Other MCP-Compatible Platforms
For any MCP-compatible platform, use these standard connection details:
| Setting | Value |
|---------|-------|
| Transport | stdio |
| Command | node |
| Arguments | /path/to/ai-usage-metrics-mcp/dist/index.js |
| Server Name | ai-usage-metrics |
Generic MCP Configuration:
{
"name": "ai-usage-metrics",
"transport": "stdio",
"command": "node",
"args": ["/path/to/ai-usage-metrics-mcp/dist/index.js"]
}
Manual Installation
Prerequisites
- Node.js 18.0.0 or higher
- npm, pnpm, or yarn
From Source
# Clone the repository
git clone https://github.com/your-org/ai-usage-metrics-mcp.git
cd ai-usage-metrics-mcp
# Install dependencies
npm install
# Build the project
npm run build
# Verify installation
npm start
Package Manager Scripts
| Script | Description |
|--------|-------------|
| npm run build | Compile TypeScript to JavaScript |
| npm start | Run the compiled server |
| npm run dev | Watch mode for development |
| npm test | Run the test suite |
| npm run test:coverage | Run tests with coverage report |
Verifying Installation
After installation, verify the server works:
# Test the server starts correctly
echo '{"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {"capabilities": {}}}' | node dist/index.js
You should see a JSON response with server capabilities.
Usage
Logging Model Calls
After each AI model invocation in your application, log the call:
// Using MCP client
await mcpClient.callTool("log_model_call", {
project: "my-chatbot",
environment: "prod",
sessionId: "session-abc-123",
modelName: "claude-3-opus",
inputMessages: [
{ role: "system", content: "You are a helpful assistant." },
{ role: "user", content: "What is the capital of France?" }
],
outputMessages: [
{ role: "assistant", content: "The capital of France is Paris." }
],
tokensIn: 45,
tokensOut: 12,
latencyMs: 234
});
Response:
{
"id": "550e8400-e29b-41d4-a716-446655440000",
"stored": true
}
Searching Calls
Find specific calls with flexible filtering:
// Search by project and date range
const calls = await mcpClient.callTool("search_model_calls", {
project: "my-chatbot",
environment: "prod",
from: "2024-01-01T00:00:00Z",
to: "2024-01-31T23:59:59Z",
limit: 100
});
// Search by user
const userCalls = await mcpClient.callTool("search_model_calls", {
userId: "user-12345",
modelName: "gpt-4"
});
Session Management
Track conversations by grouping calls into sessions:
// List all sessions for a project
const sessions = await mcpClient.callTool("list_sessions", {
project: "my-chatbot",
environment: "prod"
});
// Get all calls in a specific session
const sessionCalls = await mcpClient.callTool("get_session_calls", {
sessionId: "session-abc-123"
});
Session Summary Response:
{
"sessionId": "session-abc-123",
"project": "my-chatbot",
"environment": "prod",
"firstCallAt": "2024-01-15T10:30:00Z",
"lastCallAt": "2024-01-15T10:45:00Z",
"callCount": 8,
"totalTokensIn": 1250,
"totalTokensOut": 890,
"avgLatencyMs": 245
}
Aggregate Metrics
Get high-level usage statistics:
// Get metrics for a project
const metrics = await mcpClient.callTool("get_aggregate_metrics", {
project: "my-chatbot",
environment: "prod",
from: "2024-01-01T00:00:00Z",
to: "2024-01-31T23:59:59Z"
});
Response:
{
"callCount": 15420,
"totalTokensIn": 2450000,
"totalTokensOut": 1890000,
"avgLatencyMs": 312
}
📚 API Reference
Tools
log_model_call
Log a model call for tracking.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| project | string | Yes | Project identifier |
| environment | string | No | Environment (default: "dev") |
| userId | string | No | User identifier |
| sessionId | string | No | Session identifier for grouping calls |
| modelName | string | Yes | Model name (e.g., "gpt-4", "claude-3-opus") |
| modelVersion | string | No | Model version |
| promptType | string | No | Type: "chat", "rag", "tool", "agent" |
| inputMessages | array | Yes | Input messages [{role, content}] |
| outputMessages | array | Yes | Output messages [{role, content}] |
| retrievedContext | array | No | RAG context [{source, docId, hash?}] |
| latencyMs | number | No | Call latency in milliseconds |
| tokensIn | number | No | Input token count |
| tokensOut | number | No | Output token count |
| safety | object | No | Safety result {status, details?} |
| metrics | object | No | Custom metrics key-value pairs |
| traceId | string | No | Distributed tracing ID |
| requestId | string | No | Provider request ID |
Returns: { id: string, stored: boolean }
search_model_calls
Search logged calls with filters.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| project | string | No | Filter by project |
| environment | string | No | Filter by environment |
| userId | string | No | Filter by user |
| modelName | string | No | Filter by model |
| from | string | No | Start date (ISO format) |
| to | string | No | End date (ISO format) |
| limit | number | No | Max results (default: 50, max: 100) |
Returns: Array of ModelCallLog objects (message content truncated for safety)
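The filter semantics in the table above can be sketched as a pure function over stored calls. This is an illustrative sketch, not the server's actual implementation; `CallRecord`, `SearchFilters`, and `applySearchFilters` are hypothetical names, and only the fields needed for filtering are modeled:

```typescript
// Hypothetical sketch of search_model_calls filtering: every filter is
// optional, unset filters match everything, and results are capped.
interface CallRecord {
  timestamp: string; // ISO-8601, so string comparison matches chronology
  project: string;
  environment: string;
  userId?: string;
  modelName: string;
}

interface SearchFilters {
  project?: string;
  environment?: string;
  userId?: string;
  modelName?: string;
  from?: string;  // inclusive start (ISO)
  to?: string;    // inclusive end (ISO)
  limit?: number; // default 50, capped at 100 per the table above
}

function applySearchFilters(calls: CallRecord[], f: SearchFilters): CallRecord[] {
  const limit = Math.min(f.limit ?? 50, 100);
  return calls
    .filter((c) =>
      (f.project === undefined || c.project === f.project) &&
      (f.environment === undefined || c.environment === f.environment) &&
      (f.userId === undefined || c.userId === f.userId) &&
      (f.modelName === undefined || c.modelName === f.modelName) &&
      (f.from === undefined || c.timestamp >= f.from) &&
      (f.to === undefined || c.timestamp <= f.to)
    )
    .slice(0, limit);
}
```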
list_sessions
List session summaries.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| project | string | No | Filter by project |
| environment | string | No | Filter by environment |
| limit | number | No | Max results (default: 50, max: 100) |
Returns: Array of SessionSummary objects
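The SessionSummary fields (shown under Data Model) can be derived from raw call logs with a simple fold. A minimal sketch under stated assumptions: the `LoggedCall` shape and `summarizeSession` helper are hypothetical, and timestamps are ISO-8601 strings so lexicographic order matches chronological order:

```typescript
// Hypothetical derivation of a session summary from its call logs.
interface LoggedCall {
  sessionId: string;
  project: string;
  environment: string;
  timestamp: string; // ISO-8601
  tokensIn?: number;
  tokensOut?: number;
  latencyMs?: number;
}

interface SessionSummary {
  sessionId: string;
  project: string;
  environment: string;
  firstCallAt: string;
  lastCallAt: string;
  callCount: number;
  totalTokensIn: number;
  totalTokensOut: number;
  avgLatencyMs?: number; // undefined when no call reported latency
}

function summarizeSession(calls: LoggedCall[]): SessionSummary {
  if (calls.length === 0) throw new Error("session has no calls");
  // Sort chronologically; ISO timestamps compare correctly as strings.
  const sorted = [...calls].sort((a, b) => a.timestamp.localeCompare(b.timestamp));
  const latencies = sorted
    .map((c) => c.latencyMs)
    .filter((l): l is number => l !== undefined);
  return {
    sessionId: sorted[0].sessionId,
    project: sorted[0].project,
    environment: sorted[0].environment,
    firstCallAt: sorted[0].timestamp,
    lastCallAt: sorted[sorted.length - 1].timestamp,
    callCount: sorted.length,
    totalTokensIn: sorted.reduce((sum, c) => sum + (c.tokensIn ?? 0), 0),
    totalTokensOut: sorted.reduce((sum, c) => sum + (c.tokensOut ?? 0), 0),
    avgLatencyMs: latencies.length
      ? latencies.reduce((a, b) => a + b, 0) / latencies.length
      : undefined,
  };
}
```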
get_session_calls
Get all calls for a session.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| sessionId | string | Yes | Session identifier |
Returns: Array of ModelCallLog objects (chronological order)
get_aggregate_metrics
Get aggregate metrics across calls.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| project | string | No | Filter by project |
| environment | string | No | Filter by environment |
| from | string | No | Start date (ISO format) |
| to | string | No | End date (ISO format) |
Returns: { callCount, totalTokensIn, totalTokensOut, avgLatencyMs? }
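Since the server returns raw token totals rather than dollar amounts, cost monitoring is a small computation on top of this response. A hedged sketch: `estimateCostUSD` is a hypothetical helper, and the per-million-token rates in the test are placeholders, not real provider pricing:

```typescript
// Hypothetical cost estimate layered on get_aggregate_metrics output.
interface AggregateMetrics {
  callCount: number;
  totalTokensIn: number;
  totalTokensOut: number;
  avgLatencyMs?: number;
}

interface PricePerMillion {
  input: number;  // USD per 1M input tokens (placeholder rate)
  output: number; // USD per 1M output tokens (placeholder rate)
}

function estimateCostUSD(m: AggregateMetrics, p: PricePerMillion): number {
  return (
    (m.totalTokensIn / 1_000_000) * p.input +
    (m.totalTokensOut / 1_000_000) * p.output
  );
}
```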
Resources
Resources provide read-only access via URI patterns:
| URI Pattern | Description |
|-------------|-------------|
| ai-usage://calls/{id} | Get a specific call by ID |
| ai-usage://sessions/{sessionId} | Get session summary and all calls |
| ai-usage://metrics/aggregate?project=...&environment=... | Get aggregate metrics |
Example resource access:
// Get a specific call
const call = await mcpClient.readResource("ai-usage://calls/550e8400-e29b-41d4-a716-446655440000");
// Get session details
const session = await mcpClient.readResource("ai-usage://sessions/session-abc-123");
// Get filtered metrics
const metrics = await mcpClient.readResource(
"ai-usage://metrics/aggregate?project=my-chatbot&environment=prod"
);
Data Model
ModelCallLog
interface ModelCallLog {
id: string; // Unique identifier (UUID)
timestamp: string; // ISO timestamp
project: string; // Project identifier
environment: string; // "dev" | "staging" | "prod" | custom
userId?: string; // Optional user identifier
sessionId?: string; // Optional session identifier
modelName: string; // Model name
modelVersion?: string; // Model version
promptType?: string; // "chat" | "rag" | "tool" | "agent" | custom
inputMessages: Message[]; // Input messages
outputMessages: OutputMessage[]; // Output messages
retrievedContext?: RetrievedContext[]; // RAG context
latencyMs?: number; // Latency in milliseconds
tokensIn?: number; // Input tokens
tokensOut?: number; // Output tokens
safety?: SafetyResult; // Safety check result
metrics?: Record<string, number | string>; // Custom metrics
traceId?: string; // Distributed tracing ID
requestId?: string; // Provider request ID
}
SessionSummary
interface SessionSummary {
sessionId: string;
project: string;
environment: string;
firstCallAt: string; // ISO timestamp
lastCallAt: string; // ISO timestamp
callCount: number;
totalTokensIn: number;
totalTokensOut: number;
avgLatencyMs?: number;
}
Message Types
interface Message {
role: "system" | "user" | "assistant" | "tool";
content: string;
}
interface OutputMessage {
role: "assistant" | "tool";
content: string;
}
interface RetrievedContext {
source: string;
docId: string;
hash?: string;
}
interface SafetyResult {
status: "passed" | "flagged" | "blocked";
details?: string;
}
Architecture
┌─────────────────────────────────────────────────────────────┐
│ MCP Server │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ src/index.ts │ │
│ │ Server wiring & handlers │ │
│ └─────────────────────────────────────────────────────┘ │
│ │ │
│ ┌──────────────────┼──────────────────┐ │
│ ▼ ▼ ▼ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Tools │ │ Resources │ │ Schema │ │
│ │ src/tools/* │ │src/resources│ │ src/schema │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
│ │ │ │ │
│ └──────────────────┼──────────────────┘ │
│ ▼ │
│ ┌─────────────────────────┐ │
│ │ MetricsStore │ │
│ │ Interface │ │
│ │ src/store.ts │ │
│ └─────────────────────────┘ │
│ │ │
│ ┌─────────────┴─────────────┐ │
│ ▼ ▼ │
│ ┌──────────────────┐ ┌──────────────────┐ │
│ │ InMemoryStore │ │ PostgresStore │ │
│ │ (included) │ │ (extend) │ │
│ └──────────────────┘ └──────────────────┘ │
└─────────────────────────────────────────────────────────────┘
Project Structure
ai-usage-metrics-mcp/
├── src/
│ ├── index.ts # MCP server entry point
│ ├── schema.ts # TypeScript type definitions
│ ├── store.ts # Storage interface & in-memory implementation
│ ├── tools/
│ │ ├── index.ts # Tool exports
│ │ ├── log-model-call.ts
│ │ ├── search-model-calls.ts
│ │ ├── list-sessions.ts
│ │ ├── get-session-calls.ts
│ │ └── get-aggregate-metrics.ts
│ └── resources/
│ └── index.ts # Resource handlers
├── tests/
│ ├── fixtures.ts # Test data factories
│ ├── store.test.ts
│ ├── resources.test.ts
│ ├── integration.test.ts
│ └── tools/
│ └── *.test.ts
├── package.json
├── tsconfig.json
└── vitest.config.ts
Extending the Server
Adding PostgreSQL Support
The MetricsStore interface makes it straightforward to add database support:
// src/postgres-store.ts
import { Pool } from 'pg';
import { MetricsStore, ModelCallLog, SessionSummary, ... } from './schema.js';
export class PostgresMetricsStore implements MetricsStore {
private pool: Pool;
constructor(connectionString: string) {
this.pool = new Pool({ connectionString });
}
async logCall(input: LogCallInput): Promise<ModelCallLog> {
const id = crypto.randomUUID();
const timestamp = new Date().toISOString();
await this.pool.query(
`INSERT INTO model_calls (id, timestamp, project, ...) VALUES ($1, $2, $3, ...)`,
[id, timestamp, input.project, ...]
);
return { id, timestamp, ...input };
}
async getCall(id: string): Promise<ModelCallLog | null> {
const result = await this.pool.query(
'SELECT * FROM model_calls WHERE id = $1',
[id]
);
return result.rows[0] || null;
}
// Implement remaining methods...
}
Database Schema (PostgreSQL)
CREATE TABLE model_calls (
id UUID PRIMARY KEY,
timestamp TIMESTAMPTZ NOT NULL,
project VARCHAR(255) NOT NULL,
environment VARCHAR(50) NOT NULL,
user_id VARCHAR(255),
session_id VARCHAR(255),
model_name VARCHAR(255) NOT NULL,
model_version VARCHAR(50),
prompt_type VARCHAR(50),
input_messages JSONB NOT NULL,
output_messages JSONB NOT NULL,
retrieved_context JSONB,
latency_ms INTEGER,
tokens_in INTEGER,
tokens_out INTEGER,
safety JSONB,
metrics JSONB,
trace_id VARCHAR(255),
request_id VARCHAR(255),
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- Indexes for common queries
CREATE INDEX idx_model_calls_project ON model_calls(project);
CREATE INDEX idx_model_calls_environment ON model_calls(environment);
CREATE INDEX idx_model_calls_session_id ON model_calls(session_id);
CREATE INDEX idx_model_calls_user_id ON model_calls(user_id);
CREATE INDEX idx_model_calls_timestamp ON model_calls(timestamp);
CREATE INDEX idx_model_calls_model_name ON model_calls(model_name);
Adding Custom Metrics
Log custom metrics with any model call:
await mcpClient.callTool("log_model_call", {
project: "my-app",
modelName: "gpt-4",
inputMessages: [...],
outputMessages: [...],
metrics: {
// Custom numeric metrics
confidence_score: 0.95,
relevance_score: 0.87,
response_quality: 4.5,
// Custom string metrics
intent_category: "information_query",
sentiment: "neutral",
language: "en"
}
});
Development
Prerequisites
- Node.js 18+
- npm or pnpm
Setup
# Install dependencies
npm install
# Build
npm run build
# Run in development mode (watch)
npm run dev
Code Style
The project uses TypeScript with strict mode enabled. Key conventions:
- All types defined in src/schema.ts
- Tool implementations in separate files under src/tools/
- Input validation using Zod schemas
- Async/await for all asynchronous operations
Testing
The project includes a comprehensive test suite with 220+ tests:
# Run all tests
npm test
# Run tests in watch mode
npm run test:watch
# Run with coverage report
npm run test:coverage
Test Coverage
| Category | Coverage |
|----------|----------|
| Statements | 97%+ |
| Branches | 98%+ |
| Functions | 95%+ |
| Lines | 97%+ |
Test Structure
- Unit Tests: Individual components (store, tools, resources)
- Integration Tests: End-to-end workflows
- Edge Cases: Error handling, boundary conditions, concurrent operations
Troubleshooting
Common Issues
Server not connecting:
- Verify the path in your MCP client configuration is absolute
- Check that the server is built (npm run build)
- Ensure Node.js 18+ is installed
Data not persisting:
- The default in-memory store loses data on restart
- Implement a database-backed store for persistence
High memory usage:
- The in-memory store grows unbounded
- Implement pagination or data expiration for production use
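The expiration strategy suggested above can be as simple as a capacity-capped buffer that evicts the oldest call on overflow. A minimal sketch, assuming a hypothetical `BoundedCallBuffer` that is not part of this codebase:

```typescript
// Hypothetical bounded buffer: keeps at most `capacity` calls,
// dropping the oldest when a new call would exceed the cap.
class BoundedCallBuffer<T> {
  private calls: T[] = [];

  constructor(private capacity: number) {}

  add(call: T): void {
    this.calls.push(call);
    if (this.calls.length > this.capacity) {
      this.calls.shift(); // evict oldest entry
    }
  }

  size(): number {
    return this.calls.length;
  }

  all(): readonly T[] {
    return this.calls;
  }
}
```

A real store would likely evict by age or persist overflow to disk instead, but the cap keeps memory bounded under sustained load.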
Debug Mode
Enable debug logging by setting the DEBUG environment variable:
DEBUG=mcp:* node dist/index.js
Platform-Specific Issues
Claude Desktop:
- Ensure the config file is valid JSON (no trailing commas)
- Use absolute paths, not relative paths
- Restart Claude Desktop after config changes
- Check the Claude Desktop logs for errors
Cursor:
- Verify the config file location (~/.cursor/mcp.json)
- Check Cursor's MCP status in the command palette
- Ensure the server process can be executed by Cursor
Windsurf:
- Open Windsurf's Cascade settings
- Verify the MCP server is listed and enabled
- Check for any error messages in the Cascade panel
License
MIT License - see LICENSE file for details.
Contributing
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Add tests for new functionality
- Ensure all tests pass
- Submit a pull request
Support
- Issues: Report bugs and request features via GitHub Issues
- Documentation: See the MCP specification
- Discussions: Join the conversation on GitHub Discussions
Made with ❤️ for the AI developer community
