# @cool-ai/beach-llm-mastra

v0.1.2

Mastra adapter for Beach's `LLMProvider`. Wraps a `@mastra/core` Agent so its model + tool runtime drives Beach actor turns.
Home: cool-ai.org · Documentation: cool-ai.org/docs
## Why this exists
Beach's `LLMProvider` interface is the seam where actor turns meet a model. Beach ships two implementations directly — `AnthropicProvider` (the native Anthropic SDK, including extended thinking) and `VercelAIProvider` (the Vercel AI SDK). This package adds a third: a wrapper around a Mastra Agent. Consumers who have already built up a Mastra Agent — with its memory, instructions, tracing, and lifecycle hooks — can drop it into Beach without rewriting their agent runtime.

The wrapper is intentionally narrow. Beach owns the turn loop — the `respond()` discipline, the `ToolRegistry`, manifests, the canonical pipeline. Mastra contributes the model + tool runtime and whatever ergonomics the consumer wired onto its Agent. The two responsibilities meet at one method (`agent.generate(messages, options)`); Beach normalises the result into its own `CompletionResult` shape.
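Conceptually, the adapter reduces to something like the following sketch. This is illustrative only: the type and method names below (other than the `agent.generate` seam described above) are simplified stand-ins, not the package's real definitions.

```typescript
// Illustrative sketch only — simplified shapes, not the package's real types.
// Assumes a minimal Agent-like surface exposing generate(messages, options).
interface AgentLike {
  generate(
    messages: { role: string; content: string }[],
    options: { instructions?: string },
  ): Promise<{ text: string; toolCalls?: unknown[] }>;
}

// A simplified stand-in for Beach's CompletionResult shape.
interface CompletionResultLike {
  content: { type: 'text'; text: string }[];
  stopReason: 'end_turn' | 'tool_use';
}

class MastraProviderSketch {
  // The consumer constructs the Agent; the adapter only receives it.
  constructor(private readonly agent: AgentLike) {}

  // One method per surface: Beach's per-call system prompt is forwarded as
  // Mastra's `instructions`, and the result is normalised into Beach's shape.
  async complete(
    system: string,
    messages: { role: string; content: string }[],
  ): Promise<CompletionResultLike> {
    const result = await this.agent.generate(messages, { instructions: system });
    return {
      content: [{ type: 'text', text: result.text }],
      stopReason: result.toolCalls?.length ? 'tool_use' : 'end_turn',
    };
  }
}
```

The point of the sketch is the shape, not the detail: one injected runtime, one interface method, and a normalisation step at the boundary.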
## Install
```sh
npm install @cool-ai/beach-llm-mastra @mastra/core
```

`@mastra/core` is a peer dependency. Consumers supply their own Mastra Agent instance — the connection-injection convention shared by every Mastra-quarter adapter. Beach's interior never imports `@mastra/core`.
## Usage
```ts
import { Agent } from '@mastra/core/agent';
import { openai } from '@ai-sdk/openai';
import { MastraProvider } from '@cool-ai/beach-llm-mastra';
import { callActor, ToolRegistry } from '@cool-ai/beach-llm';

const agent = new Agent({
  name: 'concierge',
  model: openai('gpt-4o'),
  // Leave instructions empty — Beach passes its own `system` per call.
  instructions: '',
});

const provider = new MastraProvider(agent);

const result = await callActor({
  config: {
    id: 'concierge',
    model: 'gpt-4o',
    systemPrompt: '… your Beach actor system prompt …',
    tools: ['fetch_user_bookings'],
  },
  provider,
  // A ToolRegistry with `fetch_user_bookings` registered (construction omitted).
  registry: tools,
  messages: [{ role: 'user', content: 'Suggest a quiet beach in Europe.' }],
  sessionId: 'demo',
  slotKey: 'main',
});
```

The Mastra Agent's model, tools, and any other configuration come from Mastra. The actor's prompt, tool list, and turn lifecycle come from Beach.
## Canonical wrap-pattern signature
Every Mastra-quarter adapter follows the same shape, locked at CR-167 as the canonical signature for the rest of the quarter:
| Concern | Where it lives |
|---|---|
| Underlying SDK / runtime client | Constructed by the consumer, passed to the adapter as a constructor argument. Beach never imports the SDK. |
| Adapter class | Implements one Beach interface (here: `LLMProvider`). One method per surface (here: `complete()`). |
| Configuration | Beach's per-call options (`CompletionOptions`) take precedence; adapter-level options refine the wrap (e.g. `instructionsAuthority`). |
| Peer dependency | The underlying SDK (`@mastra/core`) is a peer dep, never a direct dep. Consumers pin their own version. |
| Error handling | Errors from the underlying SDK propagate unchanged. Beach's actor loop owns retry / fallback. |
Subsequent adapters in the quarter (missives, channel, durable, evals, voice) follow the same shape. The canonical signature is the durable contract.
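The configuration precedence row can be made concrete with a tiny resolution function. This is a hypothetical helper, not part of the package's API; it assumes only the two-valued `instructionsAuthority` option described under Configuration.

```typescript
// Hypothetical helper illustrating the precedence rule: Beach's per-call
// options win by default, and the adapter-level option can flip authority.
type InstructionsAuthority = 'beach' | 'agent';

function resolveInstructions(
  authority: InstructionsAuthority,
  beachSystem: string | undefined,
  agentInstructions: string,
): string {
  // 'beach' (the default): Beach's per-call `system` overrides the Agent's
  // own instructions whenever one is supplied.
  if (authority === 'beach' && beachSystem !== undefined) return beachSystem;
  // 'agent': keep the Agent-level instructions and ignore Beach's `system`.
  return agentInstructions;
}
```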
## Configuration
```ts
new MastraProvider(agent, {
  // Default: 'beach' — Beach's per-call `system` is forwarded as Mastra's
  // `instructions` and overrides any Agent-level instructions for that call.
  // Set to 'agent' to keep Agent-level instructions and ignore Beach's
  // per-call `system` (rarely the right choice; Beach's actor configs
  // expect their system prompt to be authoritative).
  instructionsAuthority: 'beach',
});
```

## Message normalisation
Beach's `Message[]` shape (with `text` / `thinking` / `tool_use` / `tool_result` content parts) is mapped to Mastra's request shape (`text` / `reasoning` / `tool-call` / `tool-result`). Tool result messages are emitted as Mastra `role: 'tool'` messages with `tool-result` content parts. The bidirectional mapping mirrors `VercelAIProvider`'s — the two SDKs share the underlying AI SDK semantics.
`thinking` blocks lose their signature on the way to Mastra (Mastra's reasoning content is plain text). Use `AnthropicProvider` for Anthropic models with extended thinking enabled.
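The content-part correspondence described above can be summarised in a small lookup. This is a sketch of the mapping only; the real parts carry more fields (tool ids, inputs, signatures) than the simplified types shown here.

```typescript
// Illustrative mapping of Beach content-part types to Mastra part types,
// following the correspondence described in the text. Simplified shapes.
type BeachPartType = 'text' | 'thinking' | 'tool_use' | 'tool_result';
type MastraPartType = 'text' | 'reasoning' | 'tool-call' | 'tool-result';

const PART_TYPE_MAP: Record<BeachPartType, MastraPartType> = {
  text: 'text',
  thinking: 'reasoning', // note: any thinking signature is dropped here
  tool_use: 'tool-call',
  tool_result: 'tool-result',
};

// Tool results become Mastra `role: 'tool'` messages; everything else
// keeps its original role.
function mastraRoleFor(partType: BeachPartType, role: string): string {
  return partType === 'tool_result' ? 'tool' : role;
}
```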
## Architectural commitments
- Beach's interior never imports `@mastra/core`. `@cool-ai/beach-core`, `@cool-ai/beach-session`, `@cool-ai/beach-llm`, and the rest of Beach's runtime stay Mastra-independent. The adapter is the boundary.
- No wrapping under any `@mastra/core/ee` path. The adapter targets only Apache-2.0 surfaces.
- Mastra Agents are the consumer's concern. The adapter expects a constructed `Agent` and does not expose a builder API.
## Related
- `beach-llm` README — the `LLMProvider` interface this adapter implements.
- `AnthropicProvider`, `VercelAIProvider` — the other two LLM provider implementations Beach ships.
- The Mastra-adapter quarter: this package is week 1 of the locked Option B ordering (LLM → missives → channel → durable → evals → voice). CR-166 tracks the quarter.
