# @kyma-api/ai-sdk

Kyma API provider for the Vercel AI SDK.

Access 20+ state-of-the-art open-source models (DeepSeek, Qwen, Llama, Gemma, Gemini, Kimi, MiniMax and more) through a single OpenAI-compatible endpoint.
## Installation

```bash
npm install @kyma-api/ai-sdk ai
```

## Setup

Get a free API key at kymaapi.com — includes free credits on signup.

```bash
export KYMA_API_KEY=ky-...
```

## Usage
```typescript
import { kyma } from "@kyma-api/ai-sdk";
import { generateText } from "ai";

const { text } = await generateText({
  model: kyma("deepseek-v3"),
  prompt: "Explain quantum computing in one paragraph.",
});
```

### With streaming
```typescript
import { kyma } from "@kyma-api/ai-sdk";
import { streamText } from "ai";

const { textStream } = await streamText({
  model: kyma("llama-3.3-70b"),
  messages: [{ role: "user", content: "Write a haiku about the ocean." }],
});

for await (const chunk of textStream) {
  process.stdout.write(chunk);
}
```

### Custom provider instance
```typescript
import { createKyma } from "@kyma-api/ai-sdk";

const kyma = createKyma({
  apiKey: "ky-your-key-here",
});
```

## Smart aliases
Instead of memorizing model IDs, use built-in aliases:
```typescript
kyma("best")         // → qwen-3.6-plus    (highest quality)
kyma("fast")         // → gpt-oss-120b     (fastest)
kyma("code")         // → kimi-k2.5        (best for coding agents)
kyma("reasoning")    // → deepseek-r1      (deep reasoning)
kyma("long-context") // → gemini-2.5-flash (1M context)
kyma("vision")       // → gemma-4-31b      (multimodal)
```

## Available Models
### Tier 1 — Highest Quality
| Model ID | Description |
|---|---|
| qwen-3.6-plus | Alibaba's flagship. #1 on Kyma. |
| deepseek-v3 | GPT-5 class. Best value. |
| deepseek-r1 | Top reasoning model. 96% cheaper than o1. |
| kimi-k2.5 | Multimodal agentic. 262K context. |
| gemma-4-31b | Google's best open model. Multimodal. |
| qwen-3-32b | Top coding. Ultra-fast inference. |
| llama-3.3-70b | Most popular open model. |
| minimax-m2.5 | SWE-bench 80.2%. Top agentic coding. |
### Tier 2 — High Quality
| Model ID | Description |
|---|---|
| kimi-k2 | Fast agentic coding on Groq. |
| minimax-m2.7 | Agentic productivity. |
| gpt-oss-120b | OpenAI open source 120B. |
| qwen-3-coder | Purpose-built for code. |
| gemma-4-26b-moe | Efficient MoE. Only 4B active params. |
| nemotron-3-super | NVIDIA 120B MoE. |
| gemini-2.5-flash | Google fast model. 1M context. |
| gemini-3-flash | Newest Gemini. 1M context. |
### Tier 3 — Fast & Cheap
| Model ID | Description |
|---|---|
| llama-4-scout | MoE. 512K context. |
| gemini-2.5-flash-lite | Lightest Gemini. 1M context. |
| step-3.5-flash | StepFun reasoning. |
| glm-4.5-air | Zhipu AI. Strong reasoning. |
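
The aliases from the Smart aliases section resolve to model IDs from these tables. A minimal sketch of what such a lookup could look like (the mapping is taken from this README; the function itself is illustrative, not the package's actual implementation):

```typescript
// Alias → model ID table, as documented in the Smart aliases section.
const ALIASES: Record<string, string> = {
  best: "qwen-3.6-plus",
  fast: "gpt-oss-120b",
  code: "kimi-k2.5",
  reasoning: "deepseek-r1",
  "long-context": "gemini-2.5-flash",
  vision: "gemma-4-31b",
};

// Resolve an alias to a concrete model ID; concrete IDs pass through unchanged.
function resolveModel(idOrAlias: string): string {
  return ALIASES[idOrAlias] ?? idOrAlias;
}

resolveModel("best");        // → "qwen-3.6-plus"
resolveModel("deepseek-v3"); // → "deepseek-v3" (already a model ID)
```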
## Pricing
All models available with free credits on signup. Pay-per-token after that — no subscriptions, no seats.
See full pricing at kymaapi.com.
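
Pay-per-token means a request's cost is just tokens consumed times the per-token rate. A quick sketch of that arithmetic (the rates here are hypothetical placeholders, not Kyma's actual prices — see kymaapi.com):

```typescript
// Cost of one request under pay-per-token billing.
// Rates are quoted in USD per 1M tokens, as is common for LLM APIs.
function costUSD(
  inputTokens: number,
  outputTokens: number,
  inputRatePerM: number,  // hypothetical input rate
  outputRatePerM: number, // hypothetical output rate
): number {
  return (inputTokens / 1e6) * inputRatePerM + (outputTokens / 1e6) * outputRatePerM;
}

// Example: 120k input + 8k output tokens at $0.30 / $1.20 per 1M tokens
costUSD(120_000, 8_000, 0.30, 1.20); // ≈ 0.0456
```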
## Links

## License

MIT
