@ai-sdk/mistral
v0.0.8
Vercel AI SDK - Mistral Provider
The Mistral provider for the Vercel AI SDK contains language model support for the Mistral chat API.
It creates language model objects that can be used with the generateText, streamText, generateObject, and streamObject AI functions.
Setup
The Mistral provider is available in the @ai-sdk/mistral module. You can install it with:
npm i @ai-sdk/mistral
Provider Instance
You can import the default provider instance mistral from @ai-sdk/mistral:
import { mistral } from '@ai-sdk/mistral';
If you need a customized setup, you can import createMistral from @ai-sdk/mistral and create a provider instance with your settings:
import { createMistral } from '@ai-sdk/mistral';
const mistral = createMistral({
// custom settings
});
You can use the following optional settings to customize the Mistral provider instance:
baseURL string
Use a different URL prefix for API calls, e.g. to use proxy servers. The default prefix is https://api.mistral.ai/v1.

apiKey string
API key that is sent using the Authorization header. It defaults to the MISTRAL_API_KEY environment variable.

headers Record<string,string>
Custom headers to include in the requests.
Models
You can create models that call the Mistral chat API using a provider instance. The first argument is the model id, e.g. mistral-large-latest.
Some Mistral chat models support tool calls.
const model = mistral('mistral-large-latest');
Mistral chat models also support additional model settings that are not part of the standard call settings. You can pass them as an options argument:
const model = mistral('mistral-large-latest', {
safePrompt: true, // optional safety prompt injection
});
The following optional settings are available for Mistral models:
safePrompt boolean
Whether to inject a safety prompt before all conversations. Defaults to false.