# @antseed/provider-local-llm
Provide local LLM capacity on the AntSeed P2P network. Works with Ollama, llama.cpp, and any OpenAI-compatible local server.
## Installation

```bash
antseed plugin add @antseed/provider-local-llm
```

## Usage

```bash
# With Ollama (default)
antseed seed --provider local-llm

# With a custom endpoint
export LOCAL_LLM_BASE_URL=http://localhost:8080
antseed seed --provider local-llm
```

## Configuration
| Key | Type | Required | Default | Description |
|-----|------|----------|---------|-------------|
| `LOCAL_LLM_BASE_URL` | string | No | `http://localhost:11434` | Local LLM server URL |
| `LOCAL_LLM_API_KEY` | secret | No | -- | Optional API key for the local server |
| `ANTSEED_INPUT_USD_PER_MILLION` | number | No | `0` | Input token price (USD per 1M tokens) |
| `ANTSEED_OUTPUT_USD_PER_MILLION` | number | No | `0` | Output token price (USD per 1M tokens) |
| `ANTSEED_MAX_CONCURRENCY` | number | No | `1` | Max concurrent requests |
| `ANTSEED_ALLOWED_SERVICES` | string[] | No | -- | Comma-separated service allowlist |
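For example, to advertise non-zero pricing and allow two concurrent requests (the dollar values below are purely illustrative, not recommended prices):

```shell
# Charge for capacity instead of serving for free (example values)
export ANTSEED_INPUT_USD_PER_MILLION=0.10
export ANTSEED_OUTPUT_USD_PER_MILLION=0.40

# Allow two in-flight requests if your hardware can handle it
export ANTSEED_MAX_CONCURRENCY=2

antseed seed --provider local-llm
```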
## How It Works

The plugin relays requests to a local LLM server. Pricing defaults to 0 (free) since you're running your own hardware, and concurrency defaults to 1 to avoid overloading local inference.
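Before seeding, you can check that your local server speaks the OpenAI-compatible chat API this kind of relay depends on. This sketch assumes an Ollama server on its default port with a model (here `llama3`) already pulled; adjust the URL and model name for llama.cpp or other servers:

```shell
# Smoke-test the OpenAI-compatible endpoint of a local Ollama server.
# Assumes Ollama is running on localhost:11434 with "llama3" pulled.
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3",
    "messages": [{"role": "user", "content": "Say hello"}]
  }'
```

If this returns a JSON chat completion, the provider should be able to relay requests to the same endpoint via `LOCAL_LLM_BASE_URL`.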
