@catalystlabs/ai-sdk-provider-meta-llama
v0.1.0
Meta Llama 4.0 provider for Vercel AI SDK - Access Llama models through the official Meta API
Meta Llama
Meta Llama is the official API for Meta's Llama models, providing access to the latest Llama 4.0 series with advanced capabilities. The Meta Llama provider for the AI SDK enables seamless integration with these powerful models:
- Llama 4.0 Models: Access to Maverick-17B-128E and Scout-17B-16E with mixture-of-experts architecture
- Multi-Modal Support: Native image understanding capabilities in Llama 4 models
- Tool Calling: Built-in function calling for structured outputs
- High Performance: Optimized for both speed and quality
- Extended Context: Support for long-form conversations and documents
- Real-Time Streaming: Token streaming for responsive applications
Learn more about Meta Llama's capabilities in the Meta Llama Documentation.
Setup
The Meta Llama provider is available in the @catalystlabs/ai-sdk-provider-meta-llama module. You can install it with:
<Tabs items={['pnpm', 'npm', 'yarn', 'bun']}>

```bash
pnpm add @catalystlabs/ai-sdk-provider-meta-llama
```

```bash
npm install @catalystlabs/ai-sdk-provider-meta-llama
```

```bash
yarn add @catalystlabs/ai-sdk-provider-meta-llama
```

```bash
bun add @catalystlabs/ai-sdk-provider-meta-llama
```

</Tabs>
Provider Instance
To create a Meta Llama provider instance, use the createMetaLlama function:
```ts
import { createMetaLlama } from '@catalystlabs/ai-sdk-provider-meta-llama';

const metaLlama = createMetaLlama({
  apiKey: 'YOUR_LLAMA_API_KEY',
});
```

You can obtain your Meta Llama API key from the Meta Llama Dashboard.
Language Models
Meta Llama supports chat models. Use the provider instance with model IDs directly:
```ts
// Latest Llama 4 model
const model = metaLlama('Llama-4-Maverick-17B-128E-Instruct-FP8');

// Efficient variant
const efficientModel = metaLlama('Llama-4-Scout-17B-16E-Instruct');
```

You can find the full list of available models in the Meta Llama Models documentation.
Examples
Here are examples of using Meta Llama with the AI SDK:
generateText
```ts
import { createMetaLlama } from '@catalystlabs/ai-sdk-provider-meta-llama';
import { generateText } from 'ai';

const metaLlama = createMetaLlama({
  apiKey: 'YOUR_LLAMA_API_KEY',
});

const { text } = await generateText({
  model: metaLlama('Llama-4-Maverick-17B-128E-Instruct-FP8'),
  prompt: 'What is Meta Llama?',
});

console.log(text);
```

streamText
```ts
import { createMetaLlama } from '@catalystlabs/ai-sdk-provider-meta-llama';
import { streamText } from 'ai';

const metaLlama = createMetaLlama({
  apiKey: 'YOUR_LLAMA_API_KEY',
});

const result = streamText({
  model: metaLlama('Llama-4-Maverick-17B-128E-Instruct-FP8'),
  prompt: 'Write a short story about AI.',
});

for await (const chunk of result.textStream) {
  console.log(chunk);
}
```

Advanced Features
Meta Llama offers several advanced features to enhance your AI applications:
Multi-Modal Processing: Llama 4 models can understand and analyze images alongside text inputs.
Tool Calling: Execute functions and retrieve structured data with built-in tool support.
Mixture of Experts: Llama 4 Maverick uses 128 experts for superior performance across diverse tasks.
Token Efficiency: Optimized tokenization for better context utilization and cost management.
Safety Features: Built-in safety measures and content filtering aligned with Meta's responsible AI principles.
For more information about these features and advanced configuration options, visit the Meta Llama Documentation.
