# Xpert Plugin: Volcengine

## Introduction
@xpert-ai/plugin-volcengine is a standard adapter for the XpertAI platform to access Volcengine (Doubao, etc.) large model services. The plugin connects to Volc Ark's OpenAI-compatible API, providing a unified entry point for agents to access capabilities such as conversation and function calling within a workflow.
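Because the plugin targets Volc Ark's OpenAI-compatible API, the access pattern looks like any other OpenAI-style client. The sketch below is illustrative only (not the plugin's internal code) and uses `@langchain/openai`, one of the listed peer dependencies; the environment variable name and the `doubao-pro-32k` model ID are placeholder assumptions to replace with values from your own Ark console.

```typescript
import { ChatOpenAI } from '@langchain/openai'

// Minimal sketch: point an OpenAI-compatible client at Volc Ark.
// The API key variable and model/endpoint ID are placeholders.
const model = new ChatOpenAI({
  apiKey: process.env.VOLC_ARK_API_KEY,
  model: 'doubao-pro-32k',
  configuration: {
    baseURL: 'https://ark.cn-beijing.volces.com/api/v3'
  }
})

async function main() {
  const reply = await model.invoke('Hello, Doubao!')
  console.log(reply.content)
}

main()
```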
## Core Features
- Integrates Volcengine Ark platform's LLM/OpenAI-compatible interface, supporting key authentication, region selection, and multi-tenant configuration.
- Registers the `VolcenginePlugin` NestJS module, automatically wiring up model providers, lifecycle logging, and configuration validation logic for easy enablement in XpertAI applications.
- Provides `VolcengineLargeLanguageModel`, encapsulating Doubao-like conversational models with support for function calling, streaming output, token counting, and chain-of-thought exposure.
## Supported Model Types
- Conversational Models: Function calling, tool calling, streaming output, and chain-of-thought reasoning.
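To make these capability tags concrete, here is a hedged sketch of function calling and streaming against the same OpenAI-compatible endpoint, again via `@langchain/openai` rather than the plugin's own classes; the tool schema and model ID are invented for illustration.

```typescript
import { ChatOpenAI } from '@langchain/openai'

// Illustrative sketch only — the tool schema and model ID are made up for this example.
const model = new ChatOpenAI({
  apiKey: process.env.VOLC_ARK_API_KEY,
  model: 'doubao-pro-32k', // placeholder endpoint/model ID
  configuration: { baseURL: 'https://ark.cn-beijing.volces.com/api/v3' }
})

// Function calling: bind an OpenAI-format tool definition and inspect the resulting tool calls.
const modelWithTools = model.bindTools([
  {
    type: 'function',
    function: {
      name: 'get_weather',
      description: 'Look up the current weather for a city',
      parameters: {
        type: 'object',
        properties: { city: { type: 'string' } },
        required: ['city']
      }
    }
  }
])

async function demo() {
  const res = await modelWithTools.invoke('What is the weather in Beijing today?')
  console.log(res.tool_calls)

  // Streaming output: tokens arrive incrementally as chunks.
  const stream = await model.stream('Introduce Volcengine Ark in one sentence.')
  for await (const chunk of stream) {
    process.stdout.write(String(chunk.content))
  }
}

demo()
```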
## Installation

```bash
npm install @xpert-ai/plugin-volcengine
```

Peer Dependencies: The host project must also provide libraries such as `@xpert-ai/plugin-sdk`, `@nestjs/common`, `@metad/contracts`, `@langchain/openai`, `lodash-es`, `chalk`, and `zod`. Please refer to `package.json` for specific versions.
## Enabling in XpertAI
- Install the plugin in the project where the XpertAI service runs, and ensure Node.js can resolve the package.
- Before starting the service, declare the plugin via an environment variable: `PLUGINS=@xpert-ai/plugin-volcengine`.
- In the XpertAI console or configuration file, add a new model provider, select `volcengine`, and fill in the corresponding Ark/Doubao model configuration.
## Configuration

The configuration form is defined by `volcengine.yaml` and covers common Ark model deployment scenarios:
| Field | Description |
| --- | --- |
| `api_key` | Required. Access key for the Volcengine Ark platform (a temporary token generated from an AK/SK pair, or a long-term token). |
| `endpoint_url` | Required. Base URL of Volc Ark's OpenAI-compatible API, e.g., `https://ark.cn-beijing.volces.com/api/v3`. |
| `endpoint_model_name` | Optional. If the server-side model name differs from the logical model name in XpertAI, override it here. |
| `mode` | Choose `chat` or `completion`, corresponding to different Ark inference channels. |
| `context_size` / `max_tokens_to_sample` | Control the context window and generation length; recommended to match the Doubao model specs. |
| `agent_thought_support`, `function_calling_type`, `stream_function_calling`, `vision_support` | Configure model capability tags used for frontend display and agent strategy selection. |
| `region` / `workspace_id` | Optional. Additional info for multi-region or multi-tenant Volc Ark deployments. |
| `stream_mode_delimiter` | Custom delimiter for streaming output, useful for rendering and rich-text processing. |
After saving, the plugin will call `validateCredentials` to send a minimal verification request to the Volc Ark API, ensuring the key and endpoint are correct.
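For reference, the verification step amounts to something like the following sketch; the plugin's actual `validateCredentials` implementation may differ, and the helper name and fallback model ID here are hypothetical (only the field names mirror the configuration table above).

```typescript
import { ChatOpenAI } from '@langchain/openai'

// Rough sketch of a credential check in the same spirit as validateCredentials:
// send the cheapest possible request and let an auth/endpoint error surface.
// checkVolcArkCredentials is a hypothetical helper, not part of the plugin's API.
async function checkVolcArkCredentials(credentials: {
  api_key: string
  endpoint_url: string
  endpoint_model_name?: string
}): Promise<void> {
  const probe = new ChatOpenAI({
    apiKey: credentials.api_key,
    model: credentials.endpoint_model_name ?? 'doubao-pro-32k', // placeholder model ID
    maxTokens: 1,
    configuration: { baseURL: credentials.endpoint_url }
  })
  await probe.invoke('ping') // throws if the key or endpoint is wrong
}
```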
## Development & Debugging

In the repository root, enter `xpertai/` and use Nx commands to build and test:

```bash
npx nx build @xpert-ai/plugin-volcengine
npx nx test @xpert-ai/plugin-volcengine
```

Build artifacts are output to the `dist/` directory by default. Unit test configuration is in `jest.config.ts`; you can extend coverage as needed.
## License

This project is distributed under the AGPL-3.0 license declared in the repository root.
