# @telnyx/ai-sdk-provider

v1.0.0
Telnyx provider for the Vercel AI SDK, with support for chat, embeddings, speech generation, and transcription.
## Setup
```bash
npm install ai @telnyx/ai-sdk-provider zod
```

`zod` is a required peer dependency. This package imports `zod/v4` internally for speech and transcription model schemas, so consumers must install it even if they are not using tool calling.
Set your Telnyx API key:
```bash
export TELNYX_API_KEY="your_api_key_here"
```

## Usage
### Chat (LLM)
```ts
import { telnyx } from '@telnyx/ai-sdk-provider';
import { generateText } from 'ai';

const { text } = await generateText({
  model: telnyx('Qwen/Qwen3-235B-A22B'),
  prompt: 'Explain WebRTC in simple terms',
});
```

### Streaming
```ts
import { telnyx } from '@telnyx/ai-sdk-provider';
import { streamText } from 'ai';

const result = streamText({
  model: telnyx('moonshotai/Kimi-K2.5'),
  prompt: 'Write a poem about the cloud',
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```

### Tool Calling
```ts
import { telnyx } from '@telnyx/ai-sdk-provider';
import { generateText, tool } from 'ai';
import { z } from 'zod/v4';

const { text } = await generateText({
  model: telnyx('Qwen/Qwen3-235B-A22B'),
  prompt: 'What is the weather in Santiago, Chile?',
  tools: {
    weather: tool({
      description: 'Get the current weather for a location',
      inputSchema: z.object({
        location: z.string(),
      }),
      execute: async ({ location }) => {
        return { temperature: 18, condition: 'sunny', location };
      },
    }),
  },
});
```

### Embeddings
```ts
import { telnyx } from '@telnyx/ai-sdk-provider';
import { embed, embedMany } from 'ai';

// Single embedding
const { embedding } = await embed({
  model: telnyx.embeddingModel('thenlper/gte-large'),
  value: 'What is WebRTC?',
});

// Batch embeddings
const { embeddings } = await embedMany({
  model: telnyx.embeddingModel('thenlper/gte-large'),
  values: ['What is WebRTC?', 'Explain SIP trunking'],
});
```

### Text-to-Speech (TTS)
```ts
import { telnyx } from '@telnyx/ai-sdk-provider';
import { experimental_generateSpeech as generateSpeech } from 'ai';

const { audio } = await generateSpeech({
  model: telnyx.speechModel('tts-1'),
  text: 'Hello, welcome to Telnyx!',
  voice: 'Telnyx.NaturalHD.astra',
});
```

You can also pass additional speech options:
```ts
import { telnyx } from '@telnyx/ai-sdk-provider';
import { experimental_generateSpeech as generateSpeech } from 'ai';

const { audio, warnings } = await generateSpeech({
  model: telnyx.speechModel('tts-1'),
  text: 'Hello, welcome to Telnyx!',
  voice: 'Telnyx.KokoroTTS.af_alloy',
  outputFormat: 'mp3',
});
```

Provider-specific options are also supported via `providerOptions.telnyx`:
```ts
const { audio } = await generateSpeech({
  model: telnyx.speechModel('tts-1'),
  text: 'Hello, welcome to Telnyx!',
  voice: 'Telnyx.NaturalHD.astra',
  providerOptions: {
    telnyx: {
      output_format: 'linear16',
      sample_rate: 24000,
      language_code: 'en',
    },
  },
});
```

### Transcription (STT)
```ts
import { telnyx } from '@telnyx/ai-sdk-provider';
import { experimental_transcribe as transcribe } from 'ai';
import { readFile } from 'node:fs/promises';

const result = await transcribe({
  model: telnyx.transcriptionModel('distil-whisper/distil-large-v2'),
  audio: await readFile('./audio.wav'),
  mediaType: 'audio/wav',
});

console.log(result.text);
```

You can also use provider-specific options:
```ts
const result = await transcribe({
  model: telnyx.transcriptionModel('openai/whisper-large-v3-turbo'),
  audio: await readFile('./audio.wav'),
  mediaType: 'audio/wav',
  providerOptions: {
    telnyx: {
      language: 'en',
      response_format: 'verbose_json',
    },
  },
});
```

## Custom Instance
```ts
import { createTelnyx } from '@telnyx/ai-sdk-provider';

const telnyx = createTelnyx({
  apiKey: 'KEY_ID_SECRET',
  baseURL: 'https://api.telnyx.com/v2/ai/openai',
  fetch: customFetch, // optional custom fetch implementation
});
```

## Available Models
### Chat Models
| Model ID | Best For |
|---|---|
| moonshotai/Kimi-K2.5 | General use, voice AI |
| zai-org/GLM-5.1-FP8 | Highest intelligence open-source |
| MiniMaxAI/MiniMax-M2.7 | Cost-effective, high intelligence |
| Qwen/Qwen3-235B-A22B | Function calling, reasoning |
### Embedding Models
| Model ID | Dimensions |
|---|---|
| thenlper/gte-large | 1024 |
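The 1024-dimensional vectors returned by `thenlper/gte-large` are typically compared with cosine similarity (the `ai` package also exports a `cosineSimilarity` helper). A minimal self-contained sketch of the computation:

```typescript
// Cosine similarity between two embedding vectors: dot(a, b) / (|a| * |b|).
// Returns 1 for parallel vectors and 0 for orthogonal ones.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```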
### Speech Models
| Model ID | Description |
|---|---|
| tts-1 | Supported model identifier for AI SDK speech APIs |
| tts-1-hd | Supported model identifier for AI SDK speech APIs |
Note: in the current Telnyx TTS implementation, `modelId` serves as the AI SDK model identifier and is used for metadata/logging. The actual synthesis request is controlled by options such as `voice`, `outputFormat`, and provider-specific `providerOptions.telnyx`, not by a different upstream TTS model selected via `modelId`.
### Transcription Models
Examples:

- `distil-whisper/distil-large-v2`
- `openai/whisper-large-v3-turbo`
- `deepgram/nova-3`
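The `mediaType` passed to `transcribe` should match the audio container of the file you are sending. A small hypothetical helper (the extension-to-MIME mapping below is illustrative, not part of this package) can derive it from a filename:

```typescript
// Map common audio file extensions to MIME types for the `mediaType` option.
const AUDIO_MEDIA_TYPES: Record<string, string> = {
  '.wav': 'audio/wav',
  '.mp3': 'audio/mpeg',
  '.ogg': 'audio/ogg',
  '.flac': 'audio/flac',
};

// Returns the MIME type for a path, or undefined when the extension is unknown.
function audioMediaType(path: string): string | undefined {
  const dot = path.lastIndexOf('.');
  return dot === -1 ? undefined : AUDIO_MEDIA_TYPES[path.slice(dot).toLowerCase()];
}
```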
## Exports
```ts
import {
  telnyx,
  createTelnyx,
  VERSION,
  type TelnyxProviderSettings,
  type TelnyxChatModelId,
  type TelnyxEmbeddingModelId,
  type TelnyxSpeechModelId,
  type TelnyxTranscriptionModelId,
} from '@telnyx/ai-sdk-provider';
```

## Notes
- `telnyx('model-id')` is an alias for `telnyx.languageModel('model-id')`
- `telnyx.speech()` and `telnyx.speechModel()` are equivalent
- `telnyx.transcription()` and `telnyx.transcriptionModel()` are equivalent
- `imageModel()` is not supported and throws `NoSuchModelError`
- The default `telnyx` export is lazy, so importing it does not require `TELNYX_API_KEY` until first use
- Requires `ai@6` for speech and transcription APIs
## License
MIT
