@databricks/ai-sdk-provider

v0.4.1

Databricks provider for the Vercel AI SDK.

Features

  • 🚀 Support for all Databricks endpoint types:
    • Responses (agent/v1/responses) - Foundation model and agent responses API (docs)
    • Chat Completions (llm/v1/chat) - Foundation model chat completions API (docs)
    • Chat Agent (agent/v2/chat) - Legacy Databricks chat agent API (docs)
  • 🔄 Stream and non-stream (generate) support for all endpoint types
  • 🛠️ Tool calling and agent support
  • 🔐 Flexible authentication (bring your own tokens/headers)
  • 🎯 Full TypeScript support

Installation

npm install @databricks/ai-sdk-provider

Peer Dependencies

This package requires the following peer dependencies:

npm install @ai-sdk/provider @ai-sdk/provider-utils

To use the provider with AI SDK functions like generateText or streamText, also install:

npm install ai

Quick Start

import { createDatabricksProvider } from '@databricks/ai-sdk-provider'
import { generateText } from 'ai'

// Create provider with your workspace URL and authentication
const provider = createDatabricksProvider({
  baseURL: 'https://your-workspace.databricks.com/serving-endpoints',
  headers: {
    Authorization: `Bearer ${token}`,
  },
})

// Use the Responses endpoint
const model = provider.responses('your-agent-endpoint')

const result = await generateText({
  model,
  prompt: 'Hello, how are you?',
})

console.log(result.text)

Authentication

The provider requires you to pass authentication headers:

const provider = createDatabricksProvider({
  baseURL: 'https://your-workspace.databricks.com/serving-endpoints',
  headers: {
    Authorization: `Bearer ${token}`,
  },
})
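In practice the token often comes from an environment variable. The helper below is a hypothetical convenience, not part of this package's API; the variable name `DATABRICKS_TOKEN` is an assumption:

```typescript
// Hypothetical helper: build the headers object that
// createDatabricksProvider expects from a raw token string.
// The env var name DATABRICKS_TOKEN is an assumption, not part of this package.
function buildAuthHeaders(token: string | undefined): Record<string, string> {
  if (!token) {
    throw new Error('Missing Databricks token (e.g. set DATABRICKS_TOKEN)')
  }
  return { Authorization: `Bearer ${token}` }
}

// Usage:
// const provider = createDatabricksProvider({
//   baseURL: 'https://your-workspace.databricks.com/serving-endpoints',
//   headers: buildAuthHeaders(process.env.DATABRICKS_TOKEN),
// })
```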

API Reference

Main Export

createDatabricksProvider(settings)

Creates a Databricks provider instance.

Parameters:

  • settings.baseURL (string, required): Base URL for the Databricks API calls
  • settings.headers (object, optional): Custom headers to include in requests
  • settings.provider (string, optional): Provider name (defaults to "databricks")
  • settings.fetch (function, optional): Custom fetch implementation
  • settings.formatUrl (function, optional): Optional function to format the URL
  • settings.useRemoteToolCalling (boolean, optional): Enable remote tool calling mode (defaults to false). See Remote Tool Calling below.

Returns: DatabricksProvider with three model creation methods:

  • responses(modelId: string): Create a Responses model
  • chatCompletions(modelId: string): Create a Chat Completions model
  • chatAgent(modelId: string): Create a Chat Agent model
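The `formatUrl` and `fetch` settings allow customizing how requests are made. As a sketch only (the exact `formatUrl` signature is an assumption here: a function from the computed URL string to the URL actually requested):

```typescript
// Sketch: assumed formatUrl shape - rewrite the computed URL before the call.
// The proxy hostname below is a hypothetical example.
const formatUrl = (url: string): string =>
  url.replace('https://your-workspace.databricks.com', 'https://proxy.internal')

// A custom fetch that logs each outgoing request, then delegates
// to the global fetch implementation.
const loggingFetch: typeof fetch = async (input, init) => {
  console.log('Databricks request:', String(input))
  return fetch(input, init)
}

// These would be passed to createDatabricksProvider as:
// createDatabricksProvider({ baseURL, headers, formatUrl, fetch: loggingFetch })
```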

Remote Tool Calling

The useRemoteToolCalling option controls how tool calls from Databricks agents are handled. When enabled, tool calls are marked as dynamic: true and providerExecuted: true, which tells the AI SDK that:

  1. Dynamic: The tools are not pre-registered - the agent decides which tools to call at runtime
  2. Provider-executed: The tools are executed remotely by Databricks, not by your application
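These flags appear on the stream parts themselves. As a toy illustration only (the part shapes below are simplified stand-ins for the AI SDK's `fullStream` parts, not this provider's exact types):

```typescript
// Simplified stand-in for AI SDK stream parts (illustration only).
type StreamPart =
  | { type: 'tool-call'; toolName: string; dynamic: boolean; providerExecuted: boolean }
  | { type: 'text-delta'; text: string }

// With useRemoteToolCalling: true, tool-call parts arrive marked as
// dynamic and provider-executed, so the application only observes them
// rather than running the tools itself.
function remoteToolCalls(parts: StreamPart[]): string[] {
  return parts
    .filter((p): p is Extract<StreamPart, { type: 'tool-call' }> => p.type === 'tool-call')
    .filter((p) => p.dynamic && p.providerExecuted)
    .map((p) => p.toolName)
}
```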

When to use useRemoteToolCalling: true

Enable this option when your Databricks agent handles tool execution internally:

  • Databricks Agents with built-in tools: Agents that use tools like Python execution, SQL queries, or other Databricks-managed tools
  • Agents on Apps: When deploying agents that manage their own tool execution
  • MCP (Model Context Protocol) integrations: When tools are executed via MCP servers managed by Databricks
const provider = createDatabricksProvider({
  baseURL: 'https://your-workspace.databricks.com/serving-endpoints',
  headers: { Authorization: `Bearer ${token}` },
  useRemoteToolCalling: true, // Enable for Databricks-managed tool execution
})

When NOT to use useRemoteToolCalling

Keep this option disabled (the default) when:

  • You define and execute tools locally: Your application registers tools with the AI SDK and handles their execution
  • Standard chat completions: You're using the Chat Completions endpoint without agent features
  • Hybrid scenarios: You want to intercept tool calls and handle some locally
// Default behavior - you handle tool execution
import { generateText } from 'ai'
import { z } from 'zod'
import { createDatabricksProvider } from '@databricks/ai-sdk-provider'

const provider = createDatabricksProvider({
  baseURL: 'https://your-workspace.databricks.com/serving-endpoints',
  headers: { Authorization: `Bearer ${token}` },
  // useRemoteToolCalling defaults to false
})

const result = await generateText({
  model: provider.chatCompletions('my-model'),
  prompt: 'What is the weather?',
  tools: {
    getWeather: {
      description: 'Get weather for a location',
      parameters: z.object({ location: z.string() }),
      execute: async ({ location }) => {
        // Your local tool execution (fetchWeather is your own function)
        return fetchWeather(location)
      },
    },
  },
})

Example: Remote tool calling with Databricks agents

import { streamText, convertToModelMessages } from 'ai'
import { createDatabricksProvider } from '@databricks/ai-sdk-provider'

const provider = createDatabricksProvider({
  baseURL: 'https://your-workspace.databricks.com/serving-endpoints',
  headers: { Authorization: `Bearer ${token}` },
  useRemoteToolCalling: true,
})

const model = provider.responses('my-agent-endpoint')

const result = streamText({
  model,
  messages: convertToModelMessages(uiMessages), // uiMessages: your app's UI chat messages
  // No need to pre-register tools - they're handled by Databricks
})

// Tool calls will have the actual tool name from Databricks
for await (const part of result.fullStream) {
  if (part.type === 'tool-call') {
    console.log(`Agent called: ${part.toolName}`)
    // Tool is executed remotely - result will come from Databricks
  }
}

Examples

Responses Endpoint

const model = provider.responses('my-responses-agent')

const result = await generateText({
  model,
  prompt: 'Analyze this data...',
})

console.log(result.text)

Contributing

This package is part of the databricks-ai-bridge monorepo.