# Model Context Playground

A React component for exploring Model Context Protocol (MCP) servers with an agentic chat interface.

## Features
- Chat interface for interacting with language models
- Call MCP tools directly
- OpenAI and Ollama support
- Plugin architecture for adding custom providers
- HTTP/stdio MCP support with live tool-call events
- Granular tool permissioning and human-in-the-loop approval
- Support for file attachments (images, PDFs, etc.)
- Slash command palette for cached MCP prompts
- Provide custom slash commands via props
- Import/export browser settings as JSON backups
- Dark mode and theming support
## Installation

```sh
npm install model-context-playground
```

## Usage

### Browser
```tsx
import React from 'react'
import ReactDOM from 'react-dom/client'

import 'model-context-playground/styles.css'
import { ModelContextPlayground } from 'model-context-playground'
import { OpenAIProvider } from 'model-context-playground/providers/openai'
import { OllamaProvider } from 'model-context-playground/providers/ollama'

const loadProviders = async () => [
  new OpenAIProvider(),
  new OllamaProvider()
]

ReactDOM.createRoot(document.getElementById('root')!).render(
  <React.StrictMode>
    <ModelContextPlayground getProviders={loadProviders} />
  </React.StrictMode>,
)
```

### Electron
The playground includes an Electron integration module that bundles useful tools, prompts, and storage adapters for desktop apps. Here's a minimal example of how to embed the playground within an Electron renderer process:
```tsx
import 'model-context-playground/styles.css'
import { ModelContextPlayground } from 'model-context-playground'
import { OpenAIProvider } from 'model-context-playground/providers/openai'
import { OllamaProvider } from 'model-context-playground/providers/ollama'
import { electronIntegration } from 'model-context-playground/electron'

const loadProviders = async () => [
  new OpenAIProvider(),
  new OllamaProvider()
]

const storageAdapter = electronIntegration.storage.createAdapter({
  // defaults to '~/.mcp-desktop/config'
  configDirectoryPath: '/var/tmp/playground-config'
})

export default function App() {
  return (
    <ModelContextPlayground
      getProviders={loadProviders}
      attachmentCallbackHandler={electronIntegration.storage.attachmentCallbackHandler}
      storageAdapter={storageAdapter}
    />
  )
}
```

## Options
`ModelContextPlayground` accepts the following props:
### `getProviders?: () => Promise<ProviderPlugin[]>`

An async factory that resolves to the provider plugins you want the playground to register. The function runs on initial mount and again whenever a refresh is triggered from the UI.
Available provider plugins include `OpenAIProvider` and `OllamaProvider`. Both providers support MCP tool calls, file handling, and streamed responses.
- Resolve with any number of `ProviderPlugin` instances. Plugins are deduplicated by `id`, so it is safe to return cached objects.
- Throw an error to surface initialization failures—users see the message alongside a retry button.
See the provider docs for details on building your own plugins.
```tsx
import { ModelContextPlayground } from 'model-context-playground'

const loadProviders = async () => {
  const { OpenAIProvider } = await import('model-context-playground/providers/openai')
  return [new OpenAIProvider()]
}

export function App() {
  return <ModelContextPlayground getProviders={loadProviders} />
}
```

### `getUiPlugins?: () => Promise<UiPluginSource[] | undefined>`
Return an asynchronous factory that resolves to UI plugins the playground should mount. The loader runs on initial mount and whenever the user triggers a refresh from the UI.
- Resolve with `UiPluginSource` descriptors or inline plugin objects implementing the plugin SDK. Entries are deduplicated by `manifest.id`.
- Throw an error to bubble a toast and keep the registry in an error state until the user retries.
- Return `undefined` or an empty array when no UI plugins are available.
See the UI plugin guide for authoring and loading tips.
```tsx
import { ModelContextPlayground } from 'model-context-playground'

const loadProviders = async () => {
  const { OpenAIProvider } = await import('model-context-playground/providers/openai')
  return [new OpenAIProvider()]
}

const loadUiPlugins = async () => {
  const { notesPlugin } = await import('model-context-playground/plugins')
  return [notesPlugin]
}

export function App() {
  return (
    <ModelContextPlayground
      getProviders={loadProviders}
      getUiPlugins={loadUiPlugins}
    />
  )
}
```

### `mcpServers?: Array<McpServerEntry | StoredMcpServer>`
Provide an initial list of MCP servers for the playground to connect to. Supports both SSE (HTTP) and STDIO (local process) transport modes.
- Servers are matched by `id`. Matching entries are updated with new configuration.
- You can supply `toolPreferences`, `serverEnabled`, and transport-specific options.
```ts
interface McpServerEntry {
  id: McpServerId
  options: {
    serverName?: string
    serverEnabled?: boolean
    toolPreferences?: Record<string, 'enabled' | 'ask_first' | 'disabled'>
    /** 'sse' for HTTP-based transport, 'stdio' for a local process (Node.js/Electron only). */
    transportMode?: 'sse' | 'stdio'
    // SSE transport options
    url?: string
    headers?: HeaderSource
    // STDIO transport options
    stdioConfig?: {
      command: string
      args?: string[]
      env?: Record<string, string>
      autoRestart?: boolean
    }
    fetchTimeoutMs?: number
  }
}
```

#### SSE Transport (HTTP)
For remote MCP servers accessed over HTTP with Server-Sent Events:
```ts
const mcpServers: Array<McpServerEntry> = [
  {
    id: 'lorem-mcp',
    options: {
      serverName: 'Lorem MCP',
      transportMode: 'sse',
      url: 'https://lorem-api.com/api/mcp',
      headers: {
        'Authorization': 'Bearer ...'
      }
    }
  }
]
```

Headers can be static key/value pairs or a function that returns headers dynamically:
```ts
const dynamicHeaders = async () => {
  return {
    'Authorization': await getToken()
  }
}

const mcpServer = {
  id: 'secure-mcp',
  options: {
    serverName: 'Secure MCP',
    transportMode: 'sse',
    url: 'https://api.example.com/mcp',
    headers: dynamicHeaders
  }
}
```

#### STDIO Transport (Local Process)
For local MCP servers running as child processes (Node.js/Electron environments only):
```ts
const mcpServers: Array<McpServerEntry> = [
  {
    id: 'filesystem-mcp',
    options: {
      serverName: 'Filesystem MCP',
      transportMode: 'stdio',
      stdioConfig: {
        command: 'npx',
        args: ['-y', '@modelcontextprotocol/server-filesystem', '/path/to/allowed/directory'],
        env: {
          DEBUG: 'mcp:*'
        },
        autoRestart: true
      },
      fetchTimeoutMs: 10_000
    }
  },
  {
    id: 'sqlite-mcp',
    options: {
      serverName: 'SQLite MCP',
      transportMode: 'stdio',
      stdioConfig: {
        command: 'npx',
        args: ['-y', '@modelcontextprotocol/server-sqlite', '--db-path', './data.db'],
        autoRestart: true
      }
    }
  }
]
```

**Note:** STDIO transport requires a Node.js environment with access to the `child_process` and `readline` modules. It is automatically available in Electron apps with `nodeIntegration: true` and `contextIsolation: false`. In browser-only environments, STDIO servers will be ignored.
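Since STDIO entries are ignored outside Node.js, hosts that share one configuration across browser and desktop builds may want to gate them on the runtime. A minimal detection sketch (an assumption for illustration; the playground's internal check may differ):

```typescript
// Sketch: detect whether the current runtime can spawn STDIO MCP servers.
// Mirrors the Node.js requirement stated above; not part of the library API.
const supportsStdioTransport = (): boolean => {
  const proc = (globalThis as any).process
  return typeof proc !== 'undefined' && !!proc?.versions?.node
}

// Only include STDIO entries when the runtime supports them.
const maybeStdioServers = supportsStdioTransport()
  ? [{ id: 'filesystem-mcp', options: { transportMode: 'stdio' as const } }]
  : []
```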
You can mix and match SSE and STDIO servers in the same array:
```ts
const mcpServers: Array<McpServerEntry> = [
  {
    id: 'remote-api',
    options: {
      serverName: 'Remote API',
      transportMode: 'sse',
      url: 'https://api.example.com/mcp'
    }
  },
  {
    id: 'local-filesystem',
    options: {
      serverName: 'Local Files',
      transportMode: 'stdio',
      stdioConfig: {
        command: 'npx',
        args: ['-y', '@modelcontextprotocol/server-filesystem', '~'],
        autoRestart: true
      }
    }
  }
]
```

### `languageModels?: Array<StoredModel>`
Seed the language-model catalog that powers the chat interface.
```ts
interface StoredModel {
  id: string
  /** Flag to mark a model as disabled without removing it. */
  disabled?: boolean
  /** Provider identifier (e.g. "openai", "ollama", etc.). */
  provider: string
  /** Display label shown in the UI. */
  label: string
  /** Provider-specific model identifier (e.g. "gpt-4.1"). */
  model: string
  /** Optional description or usage hint. */
  description?: string
  /** Base URL for the LLM endpoint (overrides provider defaults). */
  baseUrl?: string
  /** Arbitrary provider-specific metadata (temperature, max tokens, etc.). */
  settings?: Record<string, unknown>
  /** Provider-specific connection details (API keys, base URLs, etc.). */
  connection?: Record<string, string | number | boolean | undefined | null | object>
  clientHeaders?: Record<string, string>
  logLevel?: 'off' | 'error' | 'warn' | 'info' | 'debug'
  apiMode?: 'responses' | 'chat_completions'
  useLegacyMessages?: boolean
}
```

```ts
const languageModels: Array<StoredModel> = [
  {
    id: 'openai-gpt-5',
    provider: 'openai',
    model: 'gpt-5',
    label: 'GPT-5',
    settings: {
      maxTokens: 2048,
      reasoningEffort: 'low',
      textVerbosity: 'low'
    },
    connection: {
      apiKey: 'sk-...'
    }
  },
  {
    id: 'openai-gpt-4-1',
    provider: 'openai',
    model: 'gpt-4.1',
    label: 'GPT-4.1',
    settings: {
      maxTokens: 2048,
      temperature: 0.7,
      topP: 0.9
    },
    connection: {
      apiKey: 'sk-...',
      clientHeaders: {
        'x-foo': 'bar'
      }
    }
  }
]
```

### `additionalTools?: Array<AdditionalToolDefinition>`
Append extra OpenAI Agents tool definitions to the ones the playground registers by default (attachment helpers and MCP hosts). Provide the same options you would normally pass to `tool({ ... })`; the playground wraps each entry for you, so your app does not need to install or import `@openai/agents` directly.

Tools participate in the same permission flow as the built-in attachment helpers. Set `needsApproval` to `true` to always block execution until a user approves the call, or supply an async predicate `(runContext, input, callId) => Promise<boolean>` to decide dynamically. Returning `true` flags the tool for approval; `false` lets it run immediately.
```ts
const sentimentTool = {
  name: 'classify_sentiment',
  description: 'Return positive, neutral, or negative sentiment for the supplied text.',
  parameters: {
    type: 'object',
    properties: {
      message: { type: 'string' }
    },
    required: ['message'],
    additionalProperties: false
  },
  async needsApproval(_runContext, input) {
    const { message } = (input ?? {}) as { message?: string }
    // Treat lengthy analyses as sensitive and require human approval.
    return typeof message === 'string' && message.length > 280
  },
  async execute(input) {
    const { message } = input as { message: string }
    // Replace with your own classifier logic.
    const sentiment = message.length % 2 === 0 ? 'positive' : 'neutral'
    return JSON.stringify({ sentiment })
  }
}
```

```tsx
<ModelContextPlayground additionalTools={[sentimentTool]} />
```

For TypeScript projects you can import the helper type to ensure your definitions stay in sync:
```ts
import type { AdditionalToolDefinition } from 'model-context-playground'

const extraTools: AdditionalToolDefinition[] = [sentimentTool]
```

Any tool definitions you pass are funneled through the built-in Additional Tools manager. Users can review, enable, disable, or require approval for each tool under Settings → System → Function Tools, and their choices persist across sessions via local storage (or your custom `StorageAdapter`). No extra wiring is needed—tool permissions automatically flow into chat runs and the System Settings UI stays in sync.
The approval predicate shares the same signature as attachment tool approvals, so you can reuse existing helpers if you gate both tool categories with the same policy.
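For example, a shared length-based policy could be factored out and reused for both categories. This is a sketch; the `isSensitive` helper and the 280-character threshold are illustrative, not part of the library:

```typescript
// Illustrative shared policy: treat long inputs as sensitive.
// Neither helper ships with the library; they only show the predicate shape.
const isSensitive = (input: unknown, limit = 280): boolean => {
  const text = typeof input === 'string' ? input : JSON.stringify(input ?? {})
  return text.length > limit
}

// Matches the (runContext, input, callId) => Promise<boolean> signature
// described above, so one policy can gate both tools and attachments.
const needsApproval = async (_runContext: unknown, input: unknown, _callId?: string) =>
  isSensitive(input)
```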
### `additionalPrompts?: AdditionalPromptDefinition[]`
Inject static prompts into the slash-command palette without registering an MCP server. Each definition mirrors the metadata returned by the MCP prompt APIs and can optionally include a resolver to synthesize a `PromptInstance` at runtime.
```ts
interface AdditionalPromptDefinition {
  /**
   * Unique identifier for this prompt. If omitted, one will be generated based on the prompt name.
   */
  id?: string
  /**
   * Optional label shown in the slash command picker. Defaults to the prompt name.
   */
  title?: string
  /**
   * Optional description shown in the slash command picker. Defaults to the prompt description.
   */
  description?: string
  /**
   * Optional group label displayed alongside the prompt (similar to an MCP server name).
   * Defaults to "Custom prompts".
   */
  groupLabel?: string
  /**
   * Prompt definition used to describe slash command metadata and arguments.
   */
  prompt: PromptDefinition
  /**
   * Optional pre-resolved prompt instance to use when the slash command is invoked.
   */
  promptInstance?: PromptInstance
  /**
   * Optional resolver invoked when the slash command is executed. Receives normalized argument values.
   * When provided, the resolved prompt (if any) will be forwarded with the chat message payload.
   */
  resolvePrompt?: (args: Record<string, string>) => Promise<PromptInstance | undefined> | PromptInstance | undefined
}
```

```tsx
import { ModelContextPlayground, type AdditionalPromptDefinition } from 'model-context-playground'

const customPrompts: AdditionalPromptDefinition[] = [
  {
    id: 'daily-standup',
    groupLabel: 'Team templates',
    prompt: {
      name: 'daily_standup',
      description: 'Capture yesterday, today, and blocker summaries.',
      arguments: [
        { name: 'focus', description: 'Optional area to emphasize in your update.' }
      ]
    },
    resolvePrompt: async ({ focus }) => ({
      description: 'Daily standup scaffold',
      messages: [
        {
          role: 'user',
          content: [
            {
              type: 'text',
              text: `Yesterday:\n- ...\n\nToday:\n- ...\n\nBlockers:\n- ...\n\nFocus: ${focus ?? 'General update'}`
            }
          ]
        }
      ]
    })
  }
]

export function App() {
  return <ModelContextPlayground additionalPrompts={customPrompts} />
}
```

### `loadInitialCredentials?: () => Promise<StoredModel[] | undefined>`
Provide an async loader that resolves to the credential-backed models you already have stored in a secure environment (for example, disk storage managed by an Electron main process). The playground calls this once during initialization. Returned models are merged with anything in local storage and sanitized before they enter the UI.
- Resolve with the same `StoredModel` objects you would otherwise seed via `languageModels`.
- Omit the prop or resolve `undefined` to skip host-managed credentials—the playground will fall back to locally persisted models.
- Pair this with `onCredentialsSubmit` to synchronize user edits back to your host runtime.
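If your host hands the renderer a raw JSON snapshot, a defensive parse step keeps malformed entries out of the catalog. A sketch (the field names follow the `StoredModel` shape above; how the snapshot reaches the renderer is up to your host):

```typescript
// Minimal subset of StoredModel used for validation in this sketch.
interface StoredModelLite {
  id: string
  provider: string
  model: string
  label: string
}

// Parse a host-supplied JSON snapshot, dropping malformed entries.
// Returning undefined lets the playground fall back to local storage.
function parseModelSnapshot(json: string): StoredModelLite[] | undefined {
  try {
    const parsed: unknown = JSON.parse(json)
    if (!Array.isArray(parsed)) return undefined
    return parsed.filter((m): m is StoredModelLite =>
      typeof m === 'object' && m !== null &&
      typeof (m as StoredModelLite).id === 'string' &&
      typeof (m as StoredModelLite).provider === 'string' &&
      typeof (m as StoredModelLite).model === 'string' &&
      typeof (m as StoredModelLite).label === 'string'
    )
  } catch {
    return undefined
  }
}
```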
### `onCredentialsSubmit?: (payload: SanitizedCredentialUpdatePayload) => Promise<void> | void`

Receive sanitized model updates whenever a user saves or deletes credentials in the settings UI. The payload exposes the action (`'upsert' | 'delete'`), the `providerId`, the `modelId`, the single affected model (or `null` on delete), and the full sanitized model list, so hosts can persist both incremental changes and entire snapshots.
- The playground debounces submissions per provider by awaiting your handler—return a rejected promise to surface an error toast and keep the form disabled until the user retries.
- Because the payload passes through the built-in sanitizer, strings are trimmed and non-serializable values are coerced, making it safe to persist directly.
- Combine this hook with `loadInitialCredentials` to round-trip secrets between your host runtime and the React renderer without ever storing them in `localStorage`.
### `onReady?: () => void`
Invoked once after the playground finishes loading external credentials (if configured) and the provider registry settles in either the ready or error state. Use this to hide host-level spinners, start telemetry, or focus the chat input once the UI is interactive. The callback fires exactly one time per mount even if props update or refresh cycles run again.
### `attachmentCallbackHandler?: AttachmentCallback`
Provide a custom file-upload implementation for cached chat attachments. When supplied, the playground bypasses the inline code editor in Settings → Chat and always calls your function whenever an attachment finishes caching locally.
```ts
type AttachmentCallback = (attachment: {
  id: string
  name: string
  mimeType: string
  size: number
  dataUrl: string
}) => Promise<{ sourceUrl?: string }> | { sourceUrl?: string }
```

- Return an object with a `sourceUrl` pointing to wherever you stored the file (S3, your CDN, etc.).
- You can perform asynchronous work inside the handler; the composer waits for the promise to resolve before sending the message.
- Because the handler originates from React props, the in-app attachment editor is hidden to avoid conflicts.
```ts
import type { AttachmentCallback } from 'model-context-playground/contexts/ChatSettingsContext'

const uploadAttachment: AttachmentCallback = async (payload) => {
  const response = await fetch('https://example.com/api/uploads', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload)
  })
  if (!response.ok) {
    throw new Error('Upload failed')
  }
  const data = await response.json()
  return { sourceUrl: data.sourceUrl }
}
```

### `systemPrompt?: string`
Provide explicit system instructions for the chat agent. When this prop is set, the value is passed directly to the underlying agent and the Settings → Chat prompt editor becomes read-only so users cannot override the embedded prompt.
- If you omit the prop, the playground falls back to the editable code snippet persisted in `localStorage`.
### `storageAdapter?: StorageAdapter`

Override how the playground persists state such as chat sessions, MCP servers, and theme selections. By default everything is stored in `window.localStorage`, but you can supply any adapter that implements the following interface (re-exported as `StorageAdapter`):
```ts
type StorageAdapter = {
  getItem(key: string): string | null
  setItem(key: string, value: string): void
  removeItem(key: string): void
  subscribe?(key: string, callback: (value: string | null) => void): () => void
}
```

- `getItem` should return the raw serialized payload (or `null`) for the specified key.
- `setItem` receives a JSON string—the playground handles serialization before delegating to the adapter.
- `removeItem` should delete the persisted value. Callers expect `getItem` to return `null` afterward.
- `subscribe` is optional but recommended. When provided, it lets the playground react to external updates (for example, if another window or background process edits the store). Return an unsubscribe function to detach listeners.
If you need finer-grained control within your own components, the library also exposes a `StorageAdapterProvider` and a `useLocalStorage` hook that accept the same adapter.
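As a concrete (if simplified) example, here is an in-memory adapter with `subscribe` support, handy for tests or server-side rendering where `window.localStorage` is unavailable. The implementation is a sketch against the interface above, not code shipped by the library:

```typescript
// Matches the StorageAdapter shape documented above.
type StorageAdapter = {
  getItem(key: string): string | null
  setItem(key: string, value: string): void
  removeItem(key: string): void
  subscribe?(key: string, callback: (value: string | null) => void): () => void
}

// Sketch: an in-memory adapter that notifies per-key subscribers on changes.
function createMemoryAdapter(): StorageAdapter {
  const store = new Map<string, string>()
  const listeners = new Map<string, Set<(value: string | null) => void>>()
  const notify = (key: string, value: string | null) =>
    listeners.get(key)?.forEach((cb) => cb(value))
  return {
    getItem: (key) => store.get(key) ?? null,
    setItem: (key, value) => { store.set(key, value); notify(key, value) },
    removeItem: (key) => { store.delete(key); notify(key, null) },
    subscribe: (key, callback) => {
      const set = listeners.get(key) ?? new Set()
      set.add(callback)
      listeners.set(key, set)
      // Return an unsubscribe function, as the playground expects.
      return () => { set.delete(callback) }
    }
  }
}
```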
### `className?: string`
Optional utility class string appended to the root container. Useful for sizing or theming the playground within your app layout.
### `theme?: ThemeSelection`

Provide a complete theme object (for example `defaultTheme`, `darkTheme`, or a completely custom theme) to replace the default styling tokens.
### `themeOverrides?: ThemeOverrides`
Pass a deep-partial theme object to merge with the default theme. This is the quickest way to tweak just a few sections (for example, swapping gradients or badge colors) while inheriting the remaining defaults.
## Slash Commands

Model Context Playground supports slash commands for quickly invoking cached MCP prompts. You can also provide custom prompts via the `additionalPrompts` prop.
Typing `/` at the beginning of the chat input opens a quick command palette populated with every cached MCP prompt across your configured servers. Use the arrow keys to browse, or keep typing to filter the list by prompt name, description, or server label. Press Enter (or click) to select a prompt—Model Context Playground represents the selection as a pill that sits above the message composer.
Each pill shows the prompt name, the MCP server it came from, and its description. If the prompt defines arguments, they render as inline inputs inside the pill (required fields are flagged). Fill in whatever values you need; missing required fields are highlighted the moment you try to submit.
Need to back out? Click the × button on the pill or press Backspace while the composer is empty to remove the slash command entirely. You can continue typing free-form text beneath the pill—the final message automatically prepends a slash-command line (for example `/summarize topic="market" tone="casual"`) before any additional text you enter, so the prompt context is preserved when the request is sent.
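The prepended slash-command line could be reproduced with a formatter along these lines (illustrative only; the playground builds this line internally):

```typescript
// Sketch: serialize a prompt selection into the slash-command line format,
// e.g. /summarize topic="market" tone="casual".
function formatSlashCommand(name: string, args: Record<string, string>): string {
  const rendered = Object.entries(args)
    .map(([key, value]) => `${key}="${value}"`)
    .join(' ')
  return rendered ? `/${name} ${rendered}` : `/${name}`
}
```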
If no prompts are cached yet, the palette gently reminds you to refresh or connect a server before you can use slash commands.
## Theming
The library exports a small theming toolkit:
- `defaultTheme` – the out-of-the-box look
- `darkTheme` – a dark, high-contrast palette
- `systemTheme` – switches between light/dark based on OS preference
- `createTheme(overrides)` – deep merges your overrides with the default theme and returns a brand-new theme object
- `ThemeProvider` – low-level provider if you embed individual playground components
- `ThemeTokens` / `ThemeOverrides` – TypeScript types describing every available token
```tsx
import { ModelContextPlayground, createTheme, darkTheme } from 'model-context-playground'

export function DarkApp() {
  return <ModelContextPlayground theme={darkTheme} />
}

const midnightTheme = createTheme({
  layout: {
    appRoot: 'flex h-full flex-col bg-slate-950 font-sans text-slate-100'
  },
  chatHeader: {
    container: 'fixed top-0 left-0 right-0 z-50 flex items-center justify-between gap-4 bg-slate-900/80 px-4 py-2 backdrop-blur'
  },
  message: {
    cardContainer: 'space-y-3 rounded-2xl border border-slate-700 bg-slate-900/60 p-3 text-xs text-slate-100 shadow-lg ring-1 ring-slate-800/60'
  },
  toolCallCard: {
    error: {
      container: 'space-y-3 rounded-2xl border border-rose-700 bg-rose-900/40 text-sm text-rose-100 ring-1 ring-rose-800/60'
    }
  }
})

export function App() {
  return <ModelContextPlayground theme={midnightTheme} />
}
```

For lighter edits you can rely on `themeOverrides` without building a full theme:
```tsx
<ModelContextPlayground
  themeOverrides={{
    chatDialog: {
      inputCard: 'pointer-events-auto flex w-full max-w-3xl flex-col gap-3 rounded-2xl border border-emerald-500/30 bg-emerald-950/40 p-3 shadow-lg'
    },
    chatInput: {
      submitButton: 'inline-flex items-center rounded-full bg-emerald-500 px-4 py-2 text-sm font-semibold text-emerald-50 shadow transition enabled:hover:bg-emerald-400'
    }
  }}
/>
```

Refer to `ThemeTokens` for the complete list of customizable areas. Every token maps back to an exact UI element used throughout the playground, making it straightforward to swap palettes or build brand-aligned skins.
## Development

To run the project locally:

```sh
# clone the project
git clone https://gitlab.com/ben_goodman/libraries/model-context-playground.git

# then, in the project root:
npm install
npm start
```

## Managing Saved Settings
Model Context Playground stores chat sessions, MCP server configurations, model catalogs, and theme preferences in `localStorage`. You can back up or restore these settings without inspecting the browser manually:
- **Export settings** – In the playground UI, open Settings → System and click **Export settings**. A timestamped JSON file downloads containing the keys:
  - `chat-attachment-callback-source`
  - `chat-session-main-session`
  - `chat-session-main-session-attachments`
  - `llm-models`
  - `mcp-server-options`
  - `dashboard:theme-preference`
- **Import settings** – Use the matching **Import settings** button and select a JSON export. The playground overwrites the stored values for the keys above, so reload the page afterward to ensure the UI reflects the latest configuration.
## License
MIT © Benjamin Goodman
