MLflow Typescript SDK - Vercel AI SDK
Seamlessly integrate MLflow Tracing with Vercel AI SDK to automatically trace your LLM calls via the AI SDK.
| Package       | Description                                         |
| ------------- | --------------------------------------------------- |
| mlflow-vercel | Auto-instrumentation integration for Vercel AI SDK. |
Installation
```bash
npm install mlflow-vercel
```

The package declares the mlflow-tracing and ai packages as peer dependencies. Depending on your package manager, you may need to install these two packages separately.
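For example, with npm you can install the two peers explicitly alongside the integration:

```bash
npm install mlflow-vercel mlflow-tracing ai
```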
Quickstart
Start an MLflow Tracking Server. If you have a local Python environment, you can run the following commands:

```bash
pip install mlflow
mlflow server --backend-store-uri sqlite:///mlruns.db --port 5000
```

If you don't have a local Python environment, MLflow also supports Docker deployment and managed services. See the Self-Hosting Guide to get started.
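For example, a containerized server could look roughly like the following. This is a sketch that assumes the official ghcr.io/mlflow/mlflow image; consult the Self-Hosting Guide for the supported setup.

```bash
# Sketch: run the tracking server in Docker instead of a local Python
# environment. Verify the image tag and flags against the Self-Hosting Guide.
docker run -p 5000:5000 ghcr.io/mlflow/mlflow \
  mlflow server --host 0.0.0.0 --port 5000
```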
Instantiate the MLflow SDK in your application:

```typescript
import * as mlflow from 'mlflow-tracing';

mlflow.init({
  trackingUri: 'http://localhost:5000',
  experimentId: '<experiment-id>'
});
```
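In a deployed application you will likely want these values to come from the environment rather than hard-coded strings. A minimal sketch using only the init call above; the variable names are illustrative conventions, not something the SDK is assumed to read automatically:

```typescript
import * as mlflow from 'mlflow-tracing';

// Illustrative: pull connection details from environment variables so the
// same code runs locally and in deployment.
mlflow.init({
  trackingUri: process.env.MLFLOW_TRACKING_URI ?? 'http://localhost:5000',
  experimentId: process.env.MLFLOW_EXPERIMENT_ID ?? '<experiment-id>'
});
```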
Call the AI SDK as usual. Importantly, set isEnabled: true in the experimental_telemetry option to enable tracing:

```typescript
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-4o-mini'),
  prompt: 'What is mlflow?',
  // IMPORTANT: tracing is off unless telemetry is explicitly enabled
  experimental_telemetry: { isEnabled: true }
});
```

You can then view the resulting traces in the MLflow UI.
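Once tracing works end to end, the AI SDK's experimental_telemetry object also accepts optional labels that make traces easier to find. A sketch is below; functionId and metadata are standard Vercel AI SDK telemetry fields, but exactly how they surface in the MLflow UI is an assumption here.

```typescript
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-4o-mini'),
  prompt: 'Summarize MLflow Tracing in one sentence.',
  experimental_telemetry: {
    isEnabled: true,
    functionId: 'summarize-mlflow', // names the traced operation (AI SDK field)
    metadata: { userTier: 'free' }  // arbitrary key-value labels (AI SDK field)
  }
});

console.log(result.text);
```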

Documentation 📘
Official documentation for the MLflow Typescript SDK can be found here.
License
This project is licensed under the Apache License 2.0.
