MCP Chat for Backstage
Welcome to the MCP (Model Context Protocol) Chat plugin for Backstage! This plugin enables you to integrate AI-powered chat capabilities into your Backstage platform, supporting multiple AI providers and MCP servers.
Overview
The MCP Chat plugin brings conversational AI capabilities directly into your Backstage environment. It leverages the Model Context Protocol to connect with various AI providers and external tools, enabling developers to interact with their infrastructure, catalogs, and external services through natural language.
Features
- 🤖 Multi-Provider AI Support: Works with OpenAI, Claude, Gemini, Ollama, LiteLLM, and OpenAI Responses API
- 🔧 Multi-Server Support: Connect multiple MCP servers (STDIO, SSE, Streamable HTTP)
- 🛠️ Tool Management: Browse and dynamically enable/disable tools from connected MCP servers
- 💬 Rich Chat Interface: Beautiful, responsive chat UI with markdown support
- ⚡ Quick Setup: Configurable QuickStart prompts for common use cases
Supported AI Providers
The following AI providers and models have been thoroughly tested:
| Provider | Model | Status | Notes |
| ------------------------ | ------------------ | --------------- | ------------------------------------------------------------- |
| OpenAI | gpt-4o-mini | ✅ Fully Tested | Recommended for production use |
| OpenAI Responses API | Various | ✅ Tested | Handles MCP tool execution internally (see below) |
| Gemini | gemini-2.5-flash | ✅ Fully Tested | Excellent performance with tool calling |
| Ollama | llama3.1:8b | ✅ Tested | Works well, but llama3.1:30b recommended for better results |
| LiteLLM | Various | ✅ Tested | Proxy for 100+ LLMs with unified API interface |
Note: While other providers and models may work, they have not been extensively tested. The plugin supports any provider that implements tool calling functionality, but compatibility is not guaranteed for untested configurations.
OpenAI Responses API Provider
The OpenAI Responses API provider is a special provider type that delegates MCP tool discovery and execution to the API itself, rather than handling tools locally. This is useful when:
- You have a centralized API gateway that manages MCP servers
- You want to offload tool execution to a remote service
- Your MCP servers are only accessible from a specific network/environment
Key Differences from Standard Providers:
- Tool Execution: The API handles all MCP tool calls internally
- MCP Server Requirements: Only URL-based MCP servers are supported (no STDIO/npxCommand)
- Configuration: MCP server configs are sent to the API in each request
- UI Experience: The chat UI displays tool outputs identically to standard providers
Example Configuration:
```yaml
mcpChat:
  providers:
    - id: openai-responses
      baseUrl: 'http://gemini-mcp-servers.apps.example.com/v1/openai/v1'
      model: 'gemini/models/gemini-2.5-flash'
      token: 'your-api-token' # Optional
  mcpServers:
    - id: k8s
      name: Kubernetes Server
      url: 'https://kubernetes-mcp-server.example.com/mcp'
      type: streamable-http
    - id: brave-search
      name: Brave Search
      url: 'https://brave-search-mcp.example.com/mcp'
      type: streamable-http
```

Authorization Headers Support:
The Responses API provider supports passing authorization headers to MCP servers that require authentication. Headers configured in your MCP server config are automatically forwarded to the API:
```yaml
mcpServers:
  - id: github-copilot
    name: GitHub Copilot MCP
    url: 'https://api.githubcopilot.com/mcp'
    type: streamable-http
    headers:
      Authorization: 'Bearer ghp_your_github_token_here'
  - id: backstage-server
    name: Backstage MCP Server
    url: 'http://localhost:7007/api/mcp-actions/v1'
    type: streamable-http
    headers:
      Authorization: 'Bearer your_backstage_token'
      X-Custom-Header: 'custom-value'
```

The headers are included in the Responses API request for each server:
```json
{
  "tools": [
    {
      "type": "mcp",
      "server_url": "https://api.githubcopilot.com/mcp",
      "server_label": "github-copilot",
      "require_approval": "never",
      "headers": {
        "Authorization": "Bearer ghp_your_github_token_here"
      }
    }
  ]
}
```

Important Notes:
- The `baseUrl` must point to a Responses API compatible endpoint
- MCP servers must be configured with `url` (STDIO servers will be ignored)
- Headers are optional; servers without headers work normally
- Multiple custom headers can be specified per server
Quick Start with Gemini (Free)
To quickly test this plugin, we recommend using Gemini's free API:
1. Visit Google AI Studio: Go to https://aistudio.google.com
2. Sign in: Use your Google account to sign in
3. Create API Key:
   - Click on "Get API key" in the left sidebar
   - Click "Create API key in new project" (or select an existing project)
   - Copy the generated API key
4. Set Environment Variable:

```bash
export GEMINI_API_KEY="your-api-key-here"
```
💡 Tip: Gemini offers a generous free tier that's perfect for testing and development with the MCP Chat.
Screenshots
Prerequisites
- Backstage v1.20+ (for new backend system support)
- Backstage v1.40+ (if installing Backstage MCP server in the same instance)
- Node.js 18+
- One or more AI provider API keys (OpenAI, Gemini, etc.)
- (Optional) MCP server dependencies
Installation
This plugin consists of two packages:
- `@backstage-community/plugin-mcp-chat` - Frontend plugin
- `@backstage-community/plugin-mcp-chat-backend` - Backend plugin
Backend Installation
Install the backend plugin:
```bash
# From your Backstage root directory
yarn --cwd packages/backend add @backstage-community/plugin-mcp-chat-backend
```

Add to your backend:

```typescript
// In packages/backend/src/index.ts
const backend = createBackend();
// ... other plugins
backend.add(import('@backstage-community/plugin-mcp-chat-backend'));
```
Frontend Installation
Install the frontend plugin:
```bash
# From your Backstage root directory
yarn --cwd packages/app add @backstage-community/plugin-mcp-chat
```

Add to your app:
For the classic frontend system:
```tsx
// In packages/app/src/App.tsx
import { McpChatPage } from '@backstage-community/plugin-mcp-chat';

// Add to your routes
<Route path="/mcp-chat" element={<McpChatPage />} />;
```

Add navigation:
```tsx
// In packages/app/src/components/Root/Root.tsx
import { MCPChatIcon } from '@backstage-community/plugin-mcp-chat';

// In your sidebar items
<SidebarItem icon={MCPChatIcon} to="mcp-chat" text="MCP Chat" />;
```
Configuration
Add the following configuration to your app-config.yaml:
```yaml
mcpChat:
  # Configure AI providers (currently only the first provider is used)
  # Supported Providers: OpenAI, OpenAI Responses API, Gemini, Claude, Ollama, and LiteLLM
  providers:
    - id: openai # OpenAI provider
      token: ${OPENAI_API_KEY}
      model: gpt-4o-mini # or gpt-4, gpt-3.5-turbo, etc.
    - id: openai-responses # OpenAI Responses API provider (handles MCP internally)
      baseUrl: 'http://your-responses-api-endpoint.com/v1/openai/v1'
      model: 'gemini/models/gemini-2.5-flash'
      token: ${API_TOKEN} # Optional, depends on your API setup
    - id: claude # Claude provider
      token: ${CLAUDE_API_KEY}
      model: claude-sonnet-4-20250514 # or claude-3-7-sonnet-latest
    - id: gemini # Gemini provider
      token: ${GEMINI_API_KEY}
      model: gemini-2.5-flash # or gemini-2.0-pro, etc.
    - id: ollama # Ollama provider
      baseUrl: 'http://localhost:11434'
      model: llama3.1:8b # or any model you have locally
    - id: litellm # LiteLLM proxy provider
      baseUrl: 'http://localhost:4000' # LiteLLM proxy URL
      token: ${LITELLM_API_KEY} # Optional, depends on your LiteLLM setup
      model: gpt-4o-mini # Model name configured in LiteLLM

  # Configure MCP servers
  mcpServers:
    # Brave Search for web searching
    - id: brave-search-server
      name: Brave Search Server
      npxCommand: '@modelcontextprotocol/server-brave-search@latest'
      env:
        BRAVE_API_KEY: ${BRAVE_API_KEY}
    # Kubernetes server for K8s operations
    - id: kubernetes-server
      name: Kubernetes Server
      npxCommand: 'kubernetes-mcp-server@latest'
      env:
        KUBECONFIG: ${KUBECONFIG}
    # Backstage server integration (with authorization headers)
    - id: backstage-server
      name: Backstage Server
      url: 'http://localhost:7007/api/mcp-actions/v1'
      headers:
        Authorization: 'Bearer ${BACKSTAGE_MCP_TOKEN}'
    # GitHub Copilot MCP (requires authentication)
    - id: github-copilot
      name: GitHub Copilot MCP
      url: 'https://api.githubcopilot.com/mcp'
      headers:
        Authorization: 'Bearer ${GITHUB_TOKEN}'

  # Optional: Customize the system prompt for the AI assistant
  # If not specified, uses a default prompt optimized for tool usage
  systemPrompt: "You are a helpful assistant. When using tools, provide a clear, readable summary of the results rather than showing raw data. Focus on answering the user's question with the information gathered."

  # Configure quick prompts
  quickPrompts:
    - title: 'Search Latest Tech News'
      description: 'Find the latest technology news and developments'
      prompt: 'Search for the latest developments in Model Context Protocol and its applications'
      category: Research
    - title: 'Kubernetes Health Check'
      description: 'Check the health of Kubernetes clusters'
      prompt: 'Show me the current Kubernetes deployments, pods status, and resource utilization in a nicely formatted text with bullet points'
      category: Infrastructure
    - title: 'Backstage Catalog Query'
      description: 'Query the Backstage software catalog'
      prompt: 'Describe the "example-app" microservice in our Backstage catalog'
      category: Catalog
```

System Prompt Configuration
The systemPrompt configuration allows you to customize the AI assistant's behavior and personality. This optional setting controls how the assistant responds and approaches tasks.
Default Behavior: If not specified, the plugin uses this default prompt:
```text
You are a helpful assistant. When using tools, provide a clear, readable summary of the results rather than showing raw data. Focus on answering the user's question with the information gathered.
```

Custom Examples:
```yaml
# Concise and technical
systemPrompt: 'You are a technical assistant. Provide concise, actionable responses.'

# Domain-specific expertise
systemPrompt: 'You are a Kubernetes expert. When answering questions, prioritize best practices for cloud-native deployments and provide specific kubectl commands when helpful.'

# Security-focused
systemPrompt: 'You are a security-focused DevOps assistant. Always consider security implications and suggest secure alternatives when applicable.'
```

Tips:
- Keep prompts focused and clear
- Mention specific domains or expertise when relevant
- Include instructions about response format if needed
- The system prompt affects all AI interactions in the plugin
For more advanced MCP server configuration examples (including STDIO, Streamable HTTP, SSE, custom scripts, and arguments), see SERVER_CONFIGURATION.
Environment Variables
Set the following environment variables in your Backstage deployment:
```bash
# AI Provider API Keys
export OPENAI_API_KEY="sk-..."
export GEMINI_API_KEY="..."
export LITELLM_API_KEY="sk-..." # Optional, for LiteLLM proxy authentication

# MCP Server Configuration
export BRAVE_API_KEY="..."
export BACKSTAGE_MCP_TOKEN="..."
export GITHUB_TOKEN="ghp_..." # For GitHub Copilot MCP or other GitHub integrations
export KUBECONFIG="/path/to/your/kubeconfig.yaml"
```

Usage
1. Navigate to the Plugin: Go to the MCP Chat page in your Backstage instance
2. Access Configuration: Expand the Configuration sidebar on the right to view:
   - Provider connectivity status
   - Connected MCP servers and their available tools
   - Tool management controls for enabling/disabling specific servers
3. Start Chatting: Begin a conversation by:
   - Selecting from the provided quick prompts, or
   - Typing your own queries directly into the chat input field
Example Queries
| Query | MCP Server Required | Purpose |
| ------------------------------------------------------------------ | ------------------- | ------------------------------- |
| "Search for the latest news about Kubernetes security" | Brave Search | Find relevant articles and news |
| "Show me all pods in the default namespace" | Kubernetes | Query cluster resources |
| "Describe the "example-app" microservice in our Backstage catalog" | Backstage | Access catalog entity |
Development
Local Development Setup
Clone the repository:
```bash
git clone https://github.com/backstage/community-plugins.git
cd community-plugins/workspaces/mcp-chat
```

Install dependencies:

```bash
yarn install
```

Start the development server:

```bash
yarn start
```

Access the plugin: Navigate to http://localhost:3000/mcp-chat
Testing
Run the test suite:
```bash
# Run all tests
yarn test:all

# Run tests in watch mode
yarn test --watch
```

Building

Build all packages:

```bash
yarn build:all
```

Troubleshooting
Common Issues
AI Provider Shows as Disconnected
- Cause: Missing or invalid API keys
- Solution:
- Verify API keys are set as environment variables
- Check provider configuration in `app-config.yaml`
- Ensure the specified model is available for your API key
Tools Are Not Being Called
- Cause: AI provider doesn't support tool calling or model limitations
- Solution:
- Ensure your AI provider supports tool calling
- For Ollama, use larger models like `llama3.1:30b` for better results
- Verify MCP server API keys are correctly configured
- Check backend logs for connection errors
MCP Servers Not Connecting
- Cause: Missing dependencies or configuration issues
- Solution:
- Verify all required environment variables are set
- Check MCP server logs for connection errors
- Ensure MCP server dependencies are installed
Debug Endpoints
Use these endpoints for debugging:
- Provider Status: `/api/mcp-chat/provider/status`
- MCP Server Status: `/api/mcp-chat/mcp/status`
- Available Tools: `/api/mcp-chat/tools`
API Reference
Backend Endpoints
| Endpoint | Method | Description |
| ------------------------------- | ------ | ------------------------------------- |
| /api/mcp-chat/chat | POST | Send chat messages |
| /api/mcp-chat/provider/status | GET | Get status of connected AI provider |
| /api/mcp-chat/mcp/status | GET | Get status of connected MCP servers |
| /api/mcp-chat/tools | GET | List available MCP tools from servers |
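As an illustration, the chat endpoint in the table above can be called from any HTTP client. In the sketch below, the path and method come from the table, but the request body shape is an assumption inferred from the `ChatMessage` usage shown in "Using as a Library"; check USAGE.md for the actual contract:

```typescript
// Hypothetical client for POST /api/mcp-chat/chat. The `messages` body shape
// is an ASSUMPTION based on the ChatMessage examples in this README, not a
// documented contract.
async function sendChat(baseUrl: string, content: string): Promise<unknown> {
  const res = await fetch(`${baseUrl}/api/mcp-chat/chat`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ messages: [{ role: 'user', content }] }),
  });
  if (!res.ok) {
    throw new Error(`chat request failed: ${res.status}`);
  }
  return res.json();
}
```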
Using as a Library
This plugin can be used as a reusable library in your own Backstage backend plugins. Instead of building your own LLM integration, you can import the provider system, MCP client service, and utilities directly.
Quick Example
```typescript
import {
  ProviderFactory,
  getProviderConfig,
  MCPClientServiceImpl,
  type ChatMessage,
} from '@backstage-community/plugin-mcp-chat-backend';

// Create an LLM provider from config
const providerConfig = getProviderConfig(config);
const provider = ProviderFactory.createProvider(providerConfig);

// Or use the full MCP service with tool support
const mcpService = new MCPClientServiceImpl({ logger, config });
await mcpService.initializeMCPServers();
const result = await mcpService.processQuery([
  { role: 'user', content: 'List all pods in default namespace' },
]);
```

What's Exported
| Category | Exports |
| ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |
| Providers | LLMProvider, ProviderFactory, OpenAIProvider, ClaudeProvider, GeminiProvider, OllamaProvider, LiteLLMProvider, OpenAIResponsesProvider |
| Services | MCPClientService, MCPClientServiceImpl |
| Types | ChatMessage, ChatResponse, ProviderConfig, ServerConfig, Tool, ToolCall, and more |
| Utilities | validateConfig, validateMessages, loadServerConfigs, executeToolCall |
| Router | createRouter - reuse the standard API endpoints |
Full Documentation
For comprehensive API documentation, usage examples, and integration patterns, see USAGE.md.
Contributing
Please see our Contributing Guidelines for detailed information.
Development Guidelines
- Follow the existing code style and patterns
- Add tests for new functionality
- Update documentation as needed
- Ensure all tests pass before submitting
Support and Community
- Issues: Create an issue
- Discord: Join our Discord
- Documentation: Backstage Documentation
- Community: Backstage Community
Changelog
See CHANGELOG.md for details about changes in each version.
License
This plugin is licensed under the Apache 2.0 License. See LICENSE for details.
Made with ❤️ for the Backstage Community
