inference-activity-axios

Axios interceptors for tracking inference activities on Heroku AI. This package helps you monitor and log API calls to inference endpoints while automatically redacting sensitive information from requests and responses.

Features

  • Automatically tracks request/response times
  • Logs API call activities to a specified endpoint
  • Redacts sensitive information from:
    • Chat completion messages
    • Embedding inputs
    • Image generation prompts
  • Handles errors gracefully
  • Zero configuration needed beyond environment variables
  • Support for streaming responses

Installation

npm install inference-activity-axios

Usage

const axios = require('axios');
const { applyInterceptors } = require('inference-activity-axios');

// Create your axios instance
const api = axios.create({
    baseURL: process.env.INFERENCE_URL,
    headers: {
        'Authorization': `Bearer ${process.env.INFERENCE_KEY}`,
        'Content-Type': 'application/json'
    }
});

// Apply the interceptors to start tracking
applyInterceptors(api);

Environment Variables

The package requires the following environment variables:

  • For the Heroku Inference API:
    • INFERENCE_URL: Base URL for the inference API
    • INFERENCE_KEY: API key for authentication
    • INFERENCE_MODEL_ID: Model ID to use for inference
  • For activity logging:
    • INFERENCE_ACTIVITY_URL: URL where activity logs will be sent
    • INFERENCE_ACTIVITY_KEY: API key for authentication with the activity logging service

If your app runs on Heroku, you can pull the values from its config vars with the Heroku CLI:

# For inference API
export INFERENCE_URL=$(heroku config:get -a $APP_NAME INFERENCE_URL)
export INFERENCE_KEY=$(heroku config:get -a $APP_NAME INFERENCE_KEY)
export INFERENCE_MODEL_ID=$(heroku config:get -a $APP_NAME INFERENCE_MODEL_ID)

# For activity logging
export INFERENCE_ACTIVITY_URL=$(heroku config:get -a $APP_NAME INFERENCE_ACTIVITY_URL)
export INFERENCE_ACTIVITY_KEY=$(heroku config:get -a $APP_NAME INFERENCE_ACTIVITY_KEY)
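
The three inference variables are needed for any request; the two activity variables only enable logging (see the next section). If you want to fail fast on misconfiguration, a small check like the one below (purely illustrative, not part of the package) can run before applying the interceptors:

// Illustrative sanity check, not part of the package: confirm the
// inference variables are present before applying the interceptors.
const required = ['INFERENCE_URL', 'INFERENCE_KEY', 'INFERENCE_MODEL_ID'];
const missing = required.filter((name) => !process.env[name]);

if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(', ')}`);
}

// Activity logging is optional: it is only enabled when both variables are set.
if (!process.env.INFERENCE_ACTIVITY_URL || !process.env.INFERENCE_ACTIVITY_KEY) {
    console.warn('Activity logging variables not set; API calls will not be logged.');
}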

Activity Logging

When activity logging is enabled (by setting INFERENCE_ACTIVITY_URL and INFERENCE_ACTIVITY_KEY), the following information is logged for each API call:

{
  timestamp: Date.now(),
  response_time: duration,      // Request duration in milliseconds
  status_code: response.status, // HTTP status code
  status_message: statusText,   // HTTP status message
  request: {
    method: 'POST',
    url: '/v1/chat/completions',
    params: {},
    body: {                     // Sensitive data is redacted
      model: 'gpt-3.5-turbo',
      messages: '[REDACTED]',
      temperature: 0.5
    }
  },
  response: {                   // Sensitive data is redacted
    headers: {...},
    data: {
      choices: [{
        message: {
          content: '[REDACTED]'
        }
      }]
    }
  }
}
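
Conceptually, each entry like the one above is sent to INFERENCE_ACTIVITY_URL. The sketch below is one plausible way that delivery could look, assuming a plain HTTP POST authenticated with INFERENCE_ACTIVITY_KEY as a bearer token; the package handles this internally, and its actual request shape may differ.

// Illustrative only: how an activity entry could be delivered.
// The endpoint semantics and auth scheme here are assumptions, not documented API.
const axios = require('axios');

async function sendActivityLog(entry) {
    try {
        await axios.post(process.env.INFERENCE_ACTIVITY_URL, entry, {
            headers: {
                'Authorization': `Bearer ${process.env.INFERENCE_ACTIVITY_KEY}`,
                'Content-Type': 'application/json'
            }
        });
    } catch (error) {
        // A logging failure should never break the original API call.
        console.error('Failed to send activity log:', error.message);
    }
}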

Example: Chat Completion

const axios = require('axios');
const { applyInterceptors } = require('inference-activity-axios');

const api = axios.create({
    baseURL: process.env.INFERENCE_URL,
    headers: {
        'Authorization': `Bearer ${process.env.INFERENCE_KEY}`,
        'Content-Type': 'application/json'
    }
});

// Apply the interceptors to start tracking
applyInterceptors(api);

const payload = {
    model: process.env.INFERENCE_MODEL_ID,
    messages: [
        { role: "user", content: "Hello!" },
        { role: "assistant", content: "Hi there! How can I assist you today?" },
        { role: "user", content: "Why is Heroku so cool?" }
    ],
    temperature: 0.5,
    max_tokens: 100,
    stream: false
};

async function generateChatCompletion(payload) {
    try {
        const response = await api.post('/v1/chat/completions', payload);
        console.log("Chat Completion:", response.data.choices[0].message.content);
    } catch (error) {
        console.error("Error generating chat completion:", error.message);
    }
}

generateChatCompletion(payload);

Example: Chat Completion with Streaming

const axios = require('axios');
const { applyInterceptors } = require('inference-activity-axios');

const api = axios.create({
    baseURL: process.env.INFERENCE_URL,
    headers: {
        'Authorization': `Bearer ${process.env.INFERENCE_KEY}`,
        'Content-Type': 'application/json'
    }
});

// Apply the interceptors to start tracking
applyInterceptors(api);

const payload = {
    model: process.env.INFERENCE_MODEL_ID,
    messages: [
        { role: "user", content: "Hello!" },
        { role: "assistant", content: "Hi there! How can I assist you today?" },
        { role: "user", content: "Why is Heroku so cool?" }
    ],
    temperature: 0.5,
    max_tokens: 100,
    stream: true
};

async function generateChatCompletion(payload) {
    try {
        const response = await api.post('/v1/chat/completions', payload, { responseType: 'stream' });
        response.data.on('data', chunk => {
            process.stdout.write(chunk);
        });
    } catch (error) {
        console.error("Error generating chat completion:", error.message);
    }
}

generateChatCompletion(payload);

Redaction Rules

The package automatically redacts sensitive information based on the endpoint being called (a rough sketch of the idea follows the list):

  • Chat Completions (/v1/chat/completions):
    • Request: message contents
    • Response: generated message content
  • Embeddings (/v1/embeddings):
    • Request: input text
    • Response: embedding vectors
  • Image Generation (/v1/images/generations):
    • Request: prompt and negative_prompt
    • Response: b64_json and revised_prompt
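
In practice these rules amount to replacing the listed fields with a placeholder based on the request path. The sketch below illustrates the idea for request bodies; it is not the package's implementation, and the embeddings field name (input) is an assumption based on the common API shape.

// Illustrative sketch of path-based request redaction; not the package's code.
function redactRequestBody(url, body) {
    const redacted = { ...body };
    if (url.includes('/v1/chat/completions')) {
        redacted.messages = '[REDACTED]';
    } else if (url.includes('/v1/embeddings')) {
        redacted.input = '[REDACTED]';            // field name assumed
    } else if (url.includes('/v1/images/generations')) {
        if ('prompt' in redacted) redacted.prompt = '[REDACTED]';
        if ('negative_prompt' in redacted) redacted.negative_prompt = '[REDACTED]';
    }
    return redacted;
}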

License

MIT