# openai-ollama
Create a local Ollama proxy service for an OpenAI compatible backend. This lets you expose your OpenAI backend, in BYOK (bring your own key) mode, as an Ollama backend to other applications such as VS Code GitHub Copilot.
## Usage
```sh
pnpm i -g openai-ollama
openai-ollama
```

Or just:

```sh
pnpm dlx openai-ollama
```

## Configuration
The recommended way is to configure the OpenAI compatible backend and the server via a configuration file:
```json
{
  "baseURL": "<YOUR_BASE_URL>",
  "apiKey": "<YOUR_API_KEY>",
  "models": [
    {
      "id": "<MODEL_ID>",
      "name": "<MODEL_NAME>"
    }
  ]
}
```

Then just pass the file path to the CLI:
```sh
openai-ollama --config-file=/path/to/config.json
```

Alternatively, you can configure most options through environment variables, or a few of them through command line arguments. Command line arguments have a higher priority than configuration files, while environment variables always have the lowest priority.
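For example, here is a rough sketch of the same setup driven by the environment variables and flags documented below; every value is a placeholder, not a default:

```sh
# Environment variables have the lowest priority; the same options set in a
# config file or on the command line would override them.
export OPENAI_BASE_URL="https://your-backend.example.com/v1"
export OPENAI_API_KEY="<YOUR_API_KEY>"
export PORT=11434

# --port is a command line argument, so it takes priority over PORT above.
openai-ollama --port=8080
```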
Supported options are as follows:
- `apiKey`: (Required) API key for OpenAI compatible backends. Defaults to the `OPENAI_API_KEY` environment variable.
- `baseURL`: Base URL for OpenAI compatible backends. Defaults to the `OPENAI_API_BASE` or `OPENAI_BASE_URL` environment variable, or `https://api.openai.com/v1`.
- `models`: Specifies the list of available models. If not specified, the list is fetched through the OpenAI compatible API (`/models`), which some backends do not support.
  - Elements in `models` are either objects with the following properties, or just the model's `id` as a string (both forms appear in the example below):
    - `id`: Unique ID of the model.
    - `name`: Display name of the model.
- `port` or `--port`: The port the server listens on. Defaults to the `PORT` environment variable or `11434`.
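For illustration, a filled-in config file might look like the following sketch. The base URL, key, and model IDs are placeholders, the `port` key assumes the `port` option above is accepted in the file, and the bare string uses the ID-only shorthand for `models` elements:

```json
{
  "baseURL": "https://api.openai.com/v1",
  "apiKey": "<YOUR_API_KEY>",
  "port": 11434,
  "models": [
    { "id": "gpt-4o-mini", "name": "GPT-4o mini" },
    "gpt-4o"
  ]
}
```

Starting the proxy with `openai-ollama --config-file=/path/to/config.json` should then expose both models to Ollama clients on port 11434.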
