
@jerome-benoit/sap-ai-provider

v4.2.5


SAP AI Core provider for AI SDK (powered by @sap-ai-sdk/orchestration)


SAP AI Core Provider for Vercel AI SDK


A community provider for SAP AI Core that integrates seamlessly with the Vercel AI SDK. Built on top of the official @sap-ai-sdk/orchestration package, this provider enables you to use SAP's enterprise-grade AI models through the familiar Vercel AI SDK interface.


Features

  • 🔐 Simplified Authentication - Uses SAP AI SDK's built-in credential handling
  • 🎯 Tool Calling Support - Full tool/function calling capabilities
  • 🧠 Reasoning-Safe by Default - Assistant reasoning parts are not forwarded unless enabled
  • 🖼️ Multi-modal Input - Support for text and image inputs
  • 📡 Streaming Support - Real-time text generation with structured V3 blocks
  • 🔒 Data Masking - Built-in SAP DPI integration for privacy
  • 🛡️ Content Filtering - Azure Content Safety and Llama Guard support
  • 🔧 TypeScript Support - Full type safety and IntelliSense
  • 🎨 Multiple Models - Support for GPT-4, Claude, Gemini, Nova, and more
  • Language Model V3 - Latest Vercel AI SDK specification with enhanced streaming
  • 📊 Text Embeddings - Generate vector embeddings for RAG and semantic search

Quick Start

npm install @mymediset/sap-ai-provider ai

import "dotenv/config"; // Load environment variables
import { createSAPAIProvider } from "@mymediset/sap-ai-provider";
import { generateText } from "ai";
import { APICallError } from "@ai-sdk/provider";

// Create provider (authentication via AICORE_SERVICE_KEY env var)
const provider = createSAPAIProvider();

try {
  // Generate text with gpt-4o
  const result = await generateText({
    model: provider("gpt-4o"),
    prompt: "Explain quantum computing in simple terms.",
  });

  console.log(result.text);
} catch (error) {
  if (error instanceof APICallError) {
    console.error("SAP AI Core API error:", error.message);
    console.error("Status:", error.statusCode);
  } else {
    console.error("Unexpected error:", error);
  }
}

Note: Requires AICORE_SERVICE_KEY environment variable. See Environment Setup for configuration.

Quick Reference

| Task            | Code Pattern                                                   | Documentation     |
| --------------- | -------------------------------------------------------------- | ----------------- |
| Install         | npm install @mymediset/sap-ai-provider ai                      | Installation      |
| Auth Setup      | Add AICORE_SERVICE_KEY to .env                                 | Environment Setup |
| Create Provider | createSAPAIProvider() or use sapai                             | Provider Creation |
| Text Generation | generateText({ model: provider("gpt-4o"), prompt })            | Basic Usage       |
| Streaming       | streamText({ model: provider("gpt-4o"), prompt })              | Streaming         |
| Tool Calling    | generateText({ tools: { myTool: tool({...}) } })               | Tool Calling      |
| Error Handling  | catch (error instanceof APICallError)                          | API Reference     |
| Choose Model    | See 80+ models (GPT, Claude, Gemini, Llama)                    | Models            |
| Embeddings      | embed({ model: provider.embedding("text-embedding-ada-002") }) | Embeddings        |

Installation

Requirements: Node.js 18+ and Vercel AI SDK 6.0+

npm install @mymediset/sap-ai-provider ai

Or with other package managers:

# Yarn
yarn add @mymediset/sap-ai-provider ai

# pnpm
pnpm add @mymediset/sap-ai-provider ai

Provider Creation

You can create an SAP AI provider in two ways:

Option 1: Factory Function (Recommended for Custom Configuration)

import "dotenv/config"; // Load environment variables
import { createSAPAIProvider } from "@mymediset/sap-ai-provider";

const provider = createSAPAIProvider({
  resourceGroup: "production",
  deploymentId: "your-deployment-id", // Optional
});

Option 2: Default Instance (Quick Start)

import "dotenv/config"; // Load environment variables
import { sapai } from "@mymediset/sap-ai-provider";
import { generateText } from "ai";

// Use directly with auto-detected configuration
const result = await generateText({
  model: sapai("gpt-4o"),
  prompt: "Hello!",
});

The sapai export provides a convenient default provider instance with automatic configuration from environment variables or service bindings.

Authentication

Authentication is handled automatically by the SAP AI SDK using the AICORE_SERVICE_KEY environment variable.

Quick Setup:

  1. Create a .env file: cp .env.example .env
  2. Add your SAP AI Core service key JSON to AICORE_SERVICE_KEY
  3. Import in code: import "dotenv/config";

For complete setup instructions, SAP BTP deployment, troubleshooting, and advanced scenarios, see the Environment Setup Guide.
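For orientation, the AICORE_SERVICE_KEY value is the entire service key JSON on a single line. A hypothetical .env sketch (field names follow the usual SAP BTP service key layout; all values are placeholders):

```shell
# .env — never commit this file
# AICORE_SERVICE_KEY holds the full service key JSON as one line (placeholder values shown).
AICORE_SERVICE_KEY='{"clientid":"<client-id>","clientsecret":"<client-secret>","url":"https://<subaccount>.authentication.<region>.hana.ondemand.com","serviceurls":{"AI_API_URL":"https://api.ai.<region>.ml.hana.ondemand.com"}}'
```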

Basic Usage

Text Generation

Complete example: examples/example-generate-text.ts

const result = await generateText({
  model: provider("gpt-4o"),
  prompt: "Write a short story about a robot learning to paint.",
});
console.log(result.text);

Run it: npx tsx examples/example-generate-text.ts

Chat Conversations

Complete example: examples/example-simple-chat-completion.ts

Note: Assistant reasoning parts are dropped by default. Set includeReasoning: true on the model settings if you explicitly want to forward them.

const result = await generateText({
  model: provider("anthropic--claude-3.5-sonnet"),
  messages: [
    { role: "system", content: "You are a helpful coding assistant." },
    {
      role: "user",
      content: "How do I implement binary search in TypeScript?",
    },
  ],
});

Run it: npx tsx examples/example-simple-chat-completion.ts

Streaming Responses

Complete example: examples/example-streaming-chat.ts

const result = streamText({
  model: provider("gpt-4o"),
  prompt: "Explain machine learning concepts.",
});

for await (const delta of result.textStream) {
  process.stdout.write(delta);
}

Run it: npx tsx examples/example-streaming-chat.ts

Model Configuration

import "dotenv/config"; // Load environment variables
import { createSAPAIProvider } from "@mymediset/sap-ai-provider";
import { generateText } from "ai";

const provider = createSAPAIProvider();

const model = provider("gpt-4o", {
  // Optional: include assistant reasoning parts (chain-of-thought).
  // Best practice is to keep this disabled.
  includeReasoning: false,
  modelParams: {
    temperature: 0.3,
    maxTokens: 2000,
    topP: 0.9,
  },
});

const result = await generateText({
  model,
  prompt: "Write a technical blog post about TypeScript.",
});

Embeddings

Generate vector embeddings for RAG (Retrieval-Augmented Generation), semantic search, and similarity matching.

Complete example: examples/example-embeddings.ts

import "dotenv/config"; // Load environment variables
import { createSAPAIProvider } from "@mymediset/sap-ai-provider";
import { embed, embedMany } from "ai";

const provider = createSAPAIProvider();

// Single embedding
const { embedding } = await embed({
  model: provider.embedding("text-embedding-ada-002"),
  value: "What is machine learning?",
});

// Multiple embeddings
const { embeddings } = await embedMany({
  model: provider.embedding("text-embedding-3-small"),
  values: ["Hello world", "AI is amazing", "Vector search"],
});

Run it: npx tsx examples/example-embeddings.ts

Common embedding models:

  • text-embedding-ada-002 - OpenAI Ada v2 (cost-effective)
  • text-embedding-3-small - OpenAI v3 small (balanced)
  • text-embedding-3-large - OpenAI v3 large (highest quality)

Note: Model availability depends on your SAP AI Core tenant configuration.

For complete embedding API documentation, see API Reference: Embeddings.
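Embeddings returned by embed and embedMany are plain number arrays, so similarity ranking needs only a cosine computation. A minimal self-contained sketch (no SDK calls; the vectors here are illustrative stand-ins for real embeddings):

```typescript
// Cosine similarity: dot(a, b) / (|a| * |b|); 1 = same direction, 0 = orthogonal.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("Vector dimensions must match");
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank candidate texts against a query vector (toy 3-dimensional data).
const query = [0.9, 0.1, 0.0];
const corpus: Array<[string, number[]]> = [
  ["Hello world", [0.8, 0.2, 0.1]],
  ["Vector search", [0.1, 0.9, 0.3]],
];
const ranked = corpus
  .map(([text, vec]) => ({ text, score: cosineSimilarity(query, vec) }))
  .sort((x, y) => y.score - x.score);
```

In practice you would embed the query with embed() and the corpus with embedMany(); note that the Vercel AI SDK also exports a cosineSimilarity helper from the ai package, which you can use instead of hand-rolling one.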

Supported Models

This provider supports all models available through SAP AI Core Orchestration service, including:

Popular models:

  • OpenAI: gpt-4o, gpt-4o-mini, gpt-4.1, o1, o3, o4-mini (recommended for multi-tool apps)
  • Anthropic Claude: anthropic--claude-3.5-sonnet, anthropic--claude-4-opus
  • Google Gemini: gemini-2.5-pro, gemini-2.0-flash
  • Amazon Nova: amazon--nova-pro, amazon--nova-lite
  • Open Source: mistralai--mistral-large-instruct, meta--llama3.1-70b-instruct

⚠️ Important: Google Gemini models have a 1 tool limit per request.

Note: Model availability depends on your SAP AI Core tenant configuration, region, and subscription.

To discover available models in your environment:

curl "https://<AI_API_URL>/v2/lm/deployments" -H "Authorization: Bearer $TOKEN"

For complete model details, capabilities comparison, and limitations, see API Reference: SAPAIModelId.

Advanced Features

The following helper functions are exported by this package for convenient configuration of SAP AI Core features. These builders provide type-safe configuration for data masking, content filtering, grounding, and translation modules.

Tool Calling

Note on Terminology: This documentation uses "tool calling" (Vercel AI SDK convention), equivalent to "function calling" in OpenAI documentation. Both terms refer to the same capability of models invoking external functions.

📖 Complete guide: API Reference - Tool Calling
Complete example: examples/example-chat-completion-tool.ts

import { generateText, tool } from "ai";
import { z } from "zod";

const weatherTool = tool({
  description: "Get weather for a location",
  inputSchema: z.object({ location: z.string() }),
  execute: async (args) => `Weather in ${args.location}: sunny, 72°F`,
});

const result = await generateText({
  model: provider("gpt-4o"),
  prompt: "What's the weather in Tokyo?",
  tools: { getWeather: weatherTool },
  maxSteps: 3,
});

Run it: npx tsx examples/example-chat-completion-tool.ts

⚠️ Important: Gemini models support only 1 tool per request. For multi-tool applications, use GPT-4o, Claude, or Amazon Nova models. See API Reference - Tool Calling for complete model comparison.

Multi-modal Input (Images)

Complete example: examples/example-image-recognition.ts

const result = await generateText({
  model: provider("gpt-4o"),
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "What do you see in this image?" },
        { type: "image", image: new URL("https://example.com/image.jpg") },
      ],
    },
  ],
});

Run it: npx tsx examples/example-image-recognition.ts

Data Masking (SAP DPI)

Use SAP's Data Privacy Integration to mask sensitive data:

Complete example: examples/example-data-masking.ts
Complete documentation: API Reference - Data Masking

import { buildDpiMaskingProvider, createSAPAIProvider } from "@mymediset/sap-ai-provider";

const dpiConfig = buildDpiMaskingProvider({
  method: "anonymization",
  entities: ["profile-email", "profile-person", "profile-phone"],
});

// Wire the masking provider into the provider settings (masking.masking_providers).
const provider = createSAPAIProvider({
  defaultSettings: {
    masking: { masking_providers: [dpiConfig] },
  },
});

Run it: npx tsx examples/example-data-masking.ts

Content Filtering

import "dotenv/config"; // Load environment variables
import { buildAzureContentSafetyFilter, createSAPAIProvider } from "@mymediset/sap-ai-provider";

const provider = createSAPAIProvider({
  defaultSettings: {
    filtering: {
      input: {
        filters: [
          buildAzureContentSafetyFilter("input", {
            hate: "ALLOW_SAFE",
            violence: "ALLOW_SAFE_LOW_MEDIUM",
          }),
        ],
      },
    },
  },
});

Complete documentation: API Reference - Content Filtering

Document Grounding (RAG)

Ground LLM responses in your own documents using vector databases.

Complete example: examples/example-document-grounding.ts
Complete documentation: API Reference - Document Grounding

import "dotenv/config"; // Load environment variables
import { buildDocumentGroundingConfig, createSAPAIProvider } from "@mymediset/sap-ai-provider";

const provider = createSAPAIProvider({
  defaultSettings: {
    grounding: buildDocumentGroundingConfig({
      filters: [
        {
          id: "vector-store-1", // Your vector database ID
          data_repositories: ["*"], // Search all repositories
        },
      ],
      placeholders: {
        input: ["?question"],
        output: "groundingOutput",
      },
    }),
  },
});

// Queries are now grounded in your documents
const model = provider("gpt-4o");

Run it: npx tsx examples/example-document-grounding.ts

Translation

Automatically translate user queries and model responses.

Complete example: examples/example-translation.ts
Complete documentation: API Reference - Translation

import "dotenv/config"; // Load environment variables
import { buildTranslationConfig, createSAPAIProvider } from "@mymediset/sap-ai-provider";

const provider = createSAPAIProvider({
  defaultSettings: {
    translation: {
      // Translate user input from German to English
      input: buildTranslationConfig("input", {
        sourceLanguage: "de",
        targetLanguage: "en",
      }),
      // Translate model output from English to German
      output: buildTranslationConfig("output", {
        targetLanguage: "de",
      }),
    },
  },
});

// Model handles German input/output automatically
const model = provider("gpt-4o");

Run it: npx tsx examples/example-translation.ts

Provider Options (Per-Call Overrides)

Override constructor settings on a per-call basis using providerOptions. Options are validated at runtime with Zod schemas.

import { generateText } from "ai";

const result = await generateText({
  model: provider("gpt-4o"),
  prompt: "Explain quantum computing",
  providerOptions: {
    "sap-ai": {
      includeReasoning: true,
      modelParams: {
        temperature: 0.7,
        maxTokens: 1000,
      },
    },
  },
});

Complete documentation: API Reference - Provider Options

Configuration Options

The provider and models can be configured with various settings for authentication, model parameters, data masking, content filtering, and more.

Common Configuration:

  • name: Provider name (default: 'sap-ai'). Used as key in providerOptions/providerMetadata.
  • resourceGroup: SAP AI Core resource group (default: 'default')
  • deploymentId: Specific deployment ID (auto-resolved if not set)
  • modelParams: Temperature, maxTokens, topP, and other generation parameters
  • masking: SAP Data Privacy Integration (DPI) configuration
  • filtering: Content safety filters (Azure Content Safety, Llama Guard)

For complete configuration reference including all available options, types, and examples, see API Reference - Configuration.
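Putting the common options above together, an illustrative provider setup (all values are placeholders; the masking and filtering builders are shown in their own sections):

```typescript
import "dotenv/config"; // Load environment variables
import { createSAPAIProvider } from "@mymediset/sap-ai-provider";

// Illustrative values only — adjust for your tenant.
const provider = createSAPAIProvider({
  name: "sap-ai",              // key used in providerOptions / providerMetadata
  resourceGroup: "production", // defaults to "default"
  deploymentId: "d1234567",    // hypothetical ID; auto-resolved when omitted
  defaultSettings: {
    modelParams: { temperature: 0.2, maxTokens: 1500 },
  },
});
```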

Error Handling

The provider uses standard Vercel AI SDK error types for consistent error handling.

Quick Example:

import { APICallError, LoadAPIKeyError, NoSuchModelError } from "@ai-sdk/provider";

try {
  const result = await generateText({
    model: provider("gpt-4o"),
    prompt: "Hello world",
  });
} catch (error) {
  if (error instanceof LoadAPIKeyError) {
    // 401/403: Authentication or permission issue
    console.error("Authentication issue:", error.message);
  } else if (error instanceof NoSuchModelError) {
    // 404: Model or deployment not found
    console.error("Model not found:", error.modelId);
  } else if (error instanceof APICallError) {
    // Other API errors (400, 429, 5xx, etc.)
    console.error("API error:", error.statusCode, error.message);
    // SAP-specific metadata in responseBody
    const sapError = JSON.parse(error.responseBody ?? "{}");
    console.error("Request ID:", sapError.error?.request_id);
  }
}


Troubleshooting

Quick Reference:

  • Authentication (401): Check AICORE_SERVICE_KEY or VCAP_SERVICES
  • Model not found (404): Confirm tenant/region supports the model ID
  • Rate limit (429): Automatic retry with exponential backoff
  • Streaming: Iterate textStream correctly; don't mix generateText and streamText

For comprehensive troubleshooting with detailed solutions, see the Troubleshooting Guide.

Error code reference table: API Reference - HTTP Status Codes

Performance

  • Prefer streaming (streamText) for long outputs to reduce latency and memory.
  • Tune modelParams carefully: lower temperature for deterministic results; set maxTokens to expected response size.
  • Use defaultSettings at provider creation for shared knobs across models to avoid per-call overhead.
  • Avoid unnecessary history: keep messages concise to reduce prompt size and cost.

Security

  • Do not commit .env or credentials; use environment variables and secrets managers.
  • Treat AICORE_SERVICE_KEY as sensitive; avoid logging it or including in crash reports.
  • Mask PII with DPI: configure masking.masking_providers using buildDpiMaskingProvider().
  • Validate and sanitize tool outputs before executing any side effects.

Debug Mode

  • Use the curl guide CURL_API_TESTING_GUIDE.md to diagnose raw API behavior independent of the SDK.
  • Log request IDs from error.responseBody (parse JSON for request_id) to correlate with backend traces.
  • Temporarily enable verbose logging in your app around provider calls; redact secrets.

Examples

The examples/ directory contains complete, runnable examples demonstrating key features:

| Example                           | Description                 | Key Features                            |
| --------------------------------- | --------------------------- | --------------------------------------- |
| example-generate-text.ts          | Basic text generation       | Simple prompts, synchronous generation  |
| example-simple-chat-completion.ts | Simple chat conversation    | System messages, user prompts           |
| example-chat-completion-tool.ts   | Tool calling with functions | Weather API tool, function execution    |
| example-streaming-chat.ts         | Streaming responses         | Real-time text generation, SSE          |
| example-image-recognition.ts      | Multi-modal with images     | Vision models, image analysis           |
| example-data-masking.ts           | Data privacy integration    | DPI masking, anonymization              |
| example-document-grounding.ts     | Document grounding (RAG)    | Vector store, retrieval-augmented gen   |
| example-translation.ts            | Input/output translation    | Multi-language support, SAP translation |
| example-embeddings.ts             | Text embeddings             | Vector generation, semantic similarity  |

Running Examples:

npx tsx examples/example-generate-text.ts

Note: Examples require AICORE_SERVICE_KEY environment variable. See Environment Setup for configuration.

Migration Guides

Upgrading from v3.x to v4.x

Version 4.0 migrates from LanguageModelV2 to LanguageModelV3 specification (AI SDK 6.0+). See the Migration Guide for complete upgrade instructions.

Key changes:

  • Finish Reason: Changed from string to object (result.finishReason.unified)
  • Usage Structure: Nested format with detailed token breakdown (result.usage.inputTokens.total)
  • Stream Events: Structured blocks (text-start, text-delta, text-end) instead of simple deltas
  • Warning Types: Updated format with feature field for categorization
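To make the new shapes concrete, here is a hypothetical result object mirroring the V4 field paths listed above (values invented for illustration):

```typescript
// Hypothetical V4-style result — only the fields named in the migration notes.
const result = {
  finishReason: { unified: "stop" },     // was a plain string in v3.x
  usage: { inputTokens: { total: 42 } }, // nested breakdown replaces flat counts
};

const reason = result.finishReason.unified;          // read via .unified in v4.x
const promptTokens = result.usage.inputTokens.total; // read via nested path in v4.x
```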

Impact by user type:

  • High-level API users (generateText/streamText): ✅ Minimal impact (likely no changes)
  • Direct provider users: ⚠️ Update type imports (LanguageModelV2 → LanguageModelV3)
  • Custom stream parsers: ⚠️ Update parsing logic for V3 structure

Upgrading from v2.x to v3.x

Version 3.0 standardizes error handling to use Vercel AI SDK native error types. See the Migration Guide for complete upgrade instructions.

Key changes:

  • SAPAIError removed → Use APICallError from @ai-sdk/provider
  • Error properties: error.code → error.statusCode
  • Automatic retries for rate limits (429) and server errors (5xx)

Upgrading from v1.x to v2.x

Version 2.0 uses the official SAP AI SDK. See the Migration Guide for complete upgrade instructions.

Key changes:

  • Authentication via AICORE_SERVICE_KEY environment variable
  • Synchronous provider creation: createSAPAIProvider() (no await)
  • Helper functions from SAP AI SDK

For detailed migration instructions with code examples, see the complete Migration Guide.

Important Note

Third-Party Provider: This SAP AI Core provider (@mymediset/sap-ai-provider) is developed and maintained by mymediset, not by SAP SE. While it uses the official SAP AI SDK and integrates with SAP AI Core services, it is not an official SAP product.

Contributing

We welcome contributions! Please see our Contributing Guide for details.

Resources

Community

  • 🐛 Issue Tracker - Report bugs, request features, and ask questions

License

Apache License 2.0 - see LICENSE for details.