@standardagents/groq
v0.13.2
Groq provider for Standard Agents.
This package wraps Groq's Chat Completions API behind the Standard Agents provider interface. It supports:
- model discovery
- Standard Agents message transformation to Groq chat payloads
- streaming and non-streaming generation
- local function tool calling on Groq models that support it
- reasoning controls on supported reasoning models
- Groq usage and cost calculation
Install
npm install @standardagents/groq @standardagents/spec
Usage
import { groq } from '@standardagents/groq';
const provider = groq({
apiKey: process.env.GROQ_API_KEY!,
});
const result = await provider.generate({
model: 'llama-3.3-70b-versatile',
messages: [
{ role: 'user', content: 'Give me three short product name ideas.' },
],
});
console.log(result.content);
console.log(result.usage?.cost);
Factory
The package exports:
- groq(config) - provider factory
- GroqProvider - provider class
- groqProviderOptions - Zod schema for provider-specific options
Factory config:
type ProviderFactoryConfig = {
apiKey: string;
baseUrl?: string;
timeout?: number;
};
Supported Features
Feature support is model-specific.
Typical supported capabilities include:
- streaming text generation
- JSON mode on supported chat models
- local function tool calling on supported models
- reasoning controls on supported reasoning models
Inspect the live model list and capabilities:
const models = await provider.getModels?.();
const capabilities = await provider.getModelCapabilities?.('openai/gpt-oss-120b');
Tool Calling Notes
This package only sends Standard Agents local tools to Groq models that support local/custom tool calling.
Important edge case:
- compound-beta and compound-beta-mini support Groq built-in/server-side tools, but not Standard Agents custom function tools
- this provider therefore strips local tools for Compound models instead of forwarding unsupported tool definitions
If you need Standard Agents function tools, use one of the normal Groq function-calling models instead of Compound.
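The Compound edge case can be sketched as follows. This is an illustrative model of the behavior described above, not the package's actual implementation; isCompoundModel and prepareLocalTools are hypothetical names, not exports of @standardagents/groq.

```typescript
// Minimal type standing in for a Standard Agents local function tool.
type LocalTool = { name: string; description?: string };

// Compound models accept Groq built-in/server-side tools but reject
// custom function tool definitions.
function isCompoundModel(modelId: string): boolean {
  return modelId.startsWith('compound-beta');
}

// Drop local function tools for Compound models instead of forwarding
// definitions the API would reject; pass them through otherwise.
function prepareLocalTools(
  modelId: string,
  tools: LocalTool[] | undefined,
): LocalTool[] | undefined {
  if (!tools || tools.length === 0) return undefined;
  return isCompoundModel(modelId) ? undefined : tools;
}
```

With this gating, a request against compound-beta simply carries no tools field rather than failing at the API.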
Reasoning Controls
The provider only sends reasoning_format, include_reasoning, and reasoning_effort when the target model is known to support Groq reasoning controls.
This avoids hard failures like:
reasoning_format is not supported with this model
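The gating described above can be sketched like this. The allow-list and helper name are assumptions for illustration (the real provider derives support from model metadata); only the three option keys come from this README.

```typescript
// Groq reasoning controls that must not reach unsupported models.
const REASONING_KEYS = ['reasoning_format', 'include_reasoning', 'reasoning_effort'];

// Hypothetical allow-list for illustration only; the provider actually
// checks model capabilities rather than a hard-coded set.
const REASONING_MODELS = new Set(['openai/gpt-oss-120b', 'qwen/qwen3-32b']);

// Return provider options with reasoning controls removed unless the
// target model is known to support them, avoiding API hard failures.
function gateReasoningOptions(
  modelId: string,
  options: Record<string, unknown>,
): Record<string, unknown> {
  if (REASONING_MODELS.has(modelId)) return options;
  const gated: Record<string, unknown> = { ...options };
  for (const key of REASONING_KEYS) delete gated[key];
  return gated;
}
```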
Provider Options
Common Groq-specific provider options:
- citation_options
- disable_tool_validation
- include_reasoning
- reasoning_effort
- reasoning_format
- service_tier
- search_settings
- compound_custom
- documents
Example:
const result = await provider.generate({
model: 'openai/gpt-oss-120b',
messages: [{ role: 'user', content: 'Solve this carefully.' }],
providerOptions: {
reasoning_format: 'parsed',
reasoning_effort: 'high',
service_tier: 'performance',
},
});
Icons
getIcon(modelId?) returns either the Groq provider icon or a model-lab icon derived from the model identifier when possible, such as:
- openai/gpt-oss-120b -> OpenAI icon
- qwen/qwen3-32b -> Qwen icon
- llama-* -> Meta icon
- gemma* -> Google icon
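The lab derivation can be sketched as a prefix match over the model identifier. This mirrors the examples listed above; resolveIconLab is a hypothetical helper, not the package's getIcon, and the fallback to the Groq icon follows the description of getIcon(modelId?).

```typescript
// Derive a model-lab icon key from a Groq model identifier, falling
// back to the Groq provider icon when no lab can be inferred.
function resolveIconLab(modelId?: string): string {
  if (!modelId) return 'groq';
  if (modelId.startsWith('openai/')) return 'openai';
  if (modelId.startsWith('qwen/')) return 'qwen';
  if (modelId.startsWith('llama-')) return 'meta';
  if (modelId.startsWith('gemma')) return 'google';
  return 'groq';
}
```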
Debugging
Use inspectRequest() to see the final Groq-native request body after model-specific transforms are applied:
const inspected = await provider.inspectRequest?.({
model: 'llama-3.1-8b-instant',
messages: [{ role: 'user', content: 'hello' }],
});
console.log(inspected?.body);