SiliconFlow Hunyuan Translation MCP Server
A standalone MCP (Model Context Protocol) server for text translation using the SiliconFlow API with the Hunyuan-MT-7B model. This server exposes a single translate tool that can be used by any MCP-compatible client.
Features
- Standalone MCP Server: Works with any MCP-compatible client
- SiliconFlow Integration: Pre-configured for SiliconFlow with Hunyuan-MT-7B
- OpenAI-Compatible: Can be used with any OpenAI-compatible API endpoint
- Configurable: Supports both config files and environment variables
- Customizable Prompts: Override the default translation prompt template
Installation
Prerequisites
- Node.js 18 or higher
- An API key from SiliconFlow or another OpenAI-compatible provider
Install from Source
# Clone or download the repository
cd siliconflow-hunyuan-mt-mcp
# Install dependencies
npm install
# Build the TypeScript source
npm run build
NPM Package Usage
The package is published to npm as siliconflow-hunyuan-mt-mcp. You can run it directly without cloning or installing locally.
Run with npx (Recommended)
npx -y siliconflow-hunyuan-mt-mcp
The -y flag automatically accepts the install prompt when the package is not already cached.
Global Installation
npm install -g siliconflow-hunyuan-mt-mcp
siliconflow-hunyuan-mt-mcp
Configuration
The server can be configured via environment variables or a JSON config file. Configuration is loaded in this priority order, where later sources override earlier ones (see the sketch after this list):
- Built-in defaults
- Config file (if specified)
- Environment variables
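Conceptually, the loader performs a shallow three-way merge in which later sources win. Below is a minimal TypeScript sketch of that precedence; the variable names and structure are assumptions for illustration, not the package's actual internals:

// Illustrative sketch of the precedence described above (assumed, not actual source).
import fs from "node:fs";

// 1. Built-in defaults.
const defaults = {
  baseUrl: "https://api.siliconflow.cn/v1",
  model: "tencent/Hunyuan-MT-7B",
};

// 2. Optional config file, if a path was provided.
const configPath = process.env.SILICONFLOW_HUNYUAN_MT_CONFIG_PATH;
const fromFile = configPath ? JSON.parse(fs.readFileSync(configPath, "utf8")) : {};

// 3. Environment variables, keeping only the ones that are actually set.
const fromEnv = Object.fromEntries(
  Object.entries({
    baseUrl: process.env.SILICONFLOW_HUNYUAN_MT_BASE_URL,
    model: process.env.SILICONFLOW_HUNYUAN_MT_MODEL,
  }).filter(([, value]) => value !== undefined),
);

// Later spreads override earlier ones: defaults < config file < environment.
const config = { ...defaults, ...fromFile, ...fromEnv };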
Configuration Options
| Option | Environment Variable | Default | Description |
|--------|---------------------|---------|-------------|
| provider | SILICONFLOW_HUNYUAN_MT_PROVIDER | siliconflow | Provider identifier |
| baseUrl | SILICONFLOW_HUNYUAN_MT_BASE_URL | https://api.siliconflow.cn/v1 | API base URL |
| apiKeyEnvName | SILICONFLOW_HUNYUAN_MT_API_KEY_ENV_NAME | SILICONFLOW_API_KEY | Environment variable name containing the API key |
| model | SILICONFLOW_HUNYUAN_MT_MODEL | tencent/Hunyuan-MT-7B | Model identifier |
| promptTemplate | SILICONFLOW_HUNYUAN_MT_PROMPT_TEMPLATE | See below | Custom prompt template |
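Each option can also be overridden on its own. For example, to swap only the model while keeping every other default (the model identifier below is a placeholder; use whatever your provider accepts):

export SILICONFLOW_HUNYUAN_MT_MODEL="your-provider/your-model"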
Config File
Set the config file via the SILICONFLOW_HUNYUAN_MT_CONFIG_PATH or SILICONFLOW_HUNYUAN_MT_CONFIG environment variable.
Example config.json:
{
"provider": "siliconflow",
"baseUrl": "https://api.siliconflow.cn/v1",
"apiKeyEnvName": "SILICONFLOW_API_KEY",
"model": "tencent/Hunyuan-MT-7B"
}
Default Prompt Template
The default prompt template is:
You are a precise translation engine. Translate the given text from {{source_lang}} to {{target_lang}}.
Return only the translated text, with no extra commentary.
Text:
{{text}}
You can customize this by setting promptTemplate. Use {{source_lang}}, {{target_lang}}, and {{text}} as placeholders.
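For example, a stricter template can be supplied through the environment variable from the table above (the template text itself is just an illustration):

export SILICONFLOW_HUNYUAN_MT_PROMPT_TEMPLATE="Translate from {{source_lang}} to {{target_lang}}. Output only the translation. Text: {{text}}"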
Usage Examples
SiliconFlow + Hunyuan-MT-7B (Recommended)
This is the primary use case. Set your SiliconFlow API key:
export SILICONFLOW_API_KEY="your-api-key-here"
Then start the server:
npm start
Or use the binary directly:
./dist/index.js
Custom OpenAI-Compatible Provider
To use a different provider (for example, OpenAI, Azure, or a local LLM server):
Option 1: Environment Variables
export SILICONFLOW_HUNYUAN_MT_BASE_URL="https://api.openai.com/v1"
export SILICONFLOW_HUNYUAN_MT_API_KEY_ENV_NAME="OPENAI_API_KEY"
export SILICONFLOW_HUNYUAN_MT_MODEL="gpt-4"
export OPENAI_API_KEY="your-openai-key"
npm start
Option 2: Config File
Create config.json:
{
"baseUrl": "https://api.openai.com/v1",
"apiKeyEnvName": "OPENAI_API_KEY",
"model": "gpt-4"
}
Then:
export SILICONFLOW_HUNYUAN_MT_CONFIG_PATH="./config.json"
export OPENAI_API_KEY="your-openai-key"
npm start
Local LLM Server (e.g., Ollama, LM Studio)
{
"baseUrl": "http://localhost:11434/v1",
"apiKeyEnvName": "LOCAL_API_KEY",
"model": "llama3"
}
Note: Some local servers do not require an API key. In that case, point apiKeyEnvName at any dummy environment variable, as shown below.
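For instance, a working local setup might look like this (the dummy value is arbitrary; the variable only needs to exist):

export LOCAL_API_KEY="not-needed"
export SILICONFLOW_HUNYUAN_MT_CONFIG_PATH="./config.json"
npm start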
MCP Tool Schema
The server exposes one tool: translate
Tool: translate
Translate text from one language to another.
Parameters:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| text | string | Yes | The text to translate |
| source_lang | string | Yes | Source language code (e.g., 'en', 'zh', 'auto') |
| target_lang | string | Yes | Target language code (e.g., 'en', 'zh', 'ja') |
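Expressed as JSON Schema, the input shape implied by this table is roughly the following (reconstructed from the table above, not copied from the source):

{
  "type": "object",
  "properties": {
    "text": { "type": "string", "description": "The text to translate" },
    "source_lang": { "type": "string", "description": "Source language code, e.g. 'en', 'zh', 'auto'" },
    "target_lang": { "type": "string", "description": "Target language code, e.g. 'en', 'zh', 'ja'" }
  },
  "required": ["text", "source_lang", "target_lang"]
}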
Example Tool Call:
{
"name": "translate",
"arguments": {
"text": "Hello, world!",
"source_lang": "en",
"target_lang": "zh"
}
}
Supported Language Codes:
Common language codes include:
- en - English
- zh - Chinese
- ja - Japanese
- ko - Korean
- fr - French
- de - German
- es - Spanish
- ru - Russian
- auto - Auto-detect source language (if supported by the model)
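On success, the server returns the translation as standard MCP text content. A typical result looks like this (the exact translated text is illustrative):

{
  "content": [
    {
      "type": "text",
      "text": "你好，世界！"
    }
  ]
}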
MCP Client Configuration
Add this server to your MCP client configuration:
Claude Desktop
Edit ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or the equivalent on your platform:
{
"mcpServers": {
"siliconflow-translate": {
"command": "npx",
"args": ["-y", "siliconflow-hunyuan-mt-mcp"],
"env": {
"SILICONFLOW_API_KEY": "your-api-key-here"
}
}
}
}
Cursor
Add to your Cursor MCP settings:
{
"mcpServers": {
"siliconflow-translate": {
"command": "npx",
"args": ["-y", "siliconflow-hunyuan-mt-mcp"],
"env": {
"SILICONFLOW_API_KEY": "your-api-key-here"
}
}
}
}
Generic MCP Client
Any MCP client that supports stdio transport can use:
{
"name": "siliconflow-translate",
"transport": "stdio",
"command": "npx",
"args": ["-y", "@chenhan/siliconflow-hunyuan-mt-mcp"]
}Testing
Run the test suite:
npm test
This runs vitest on the test files. Tests cover configuration loading, prompt interpolation, and tool execution.
Development
Build
npm run build
Watch Mode (Rebuild on Change)
npm run watch
Troubleshooting
Server Won't Start
- Check that you have set the required API key environment variable
- Verify the baseUrl is correct and accessible
- Check that the config file path is correct (if using one)
Translation Fails
- Verify your API key has sufficient credits/quota
- Check the model name is correct for your provider
- Ensure the language codes are supported by the model
Debug Output
Set NODE_ENV=development for additional logging (if implemented in your version).
License
MIT
