
Ollama Proxy

A lightweight, high-performance proxy server that restores the original HTTP methods of the Ollama API. Developed primarily for RunPod environments where the built-in proxy strips original HTTP methods, but can be used with any hosting service.

🚀 Features

  • Full Ollama API Support: Proxies all native Ollama endpoints (/api/*)
  • OpenAI Compatibility: Supports OpenAI-compatible endpoints (/v1/*)
  • Streaming Support: Handles streaming responses for chat and generation endpoints
  • CORS Enabled: Built-in CORS support for cross-origin requests
  • Configurable Timeouts: Extended timeouts for long-running operations
  • Error Handling: Robust error handling with detailed logging
  • Environment Configuration: Flexible configuration via environment variables
  • TypeScript: Written in TypeScript with full type safety

📋 Supported Endpoints

Native Ollama API Endpoints

  • POST /api/chat - Chat completions
  • POST /api/generate - Text generation
  • POST /api/embeddings - Text embeddings
  • POST /api/pull - Pull models
  • POST /api/push - Push models
  • POST /api/create - Create models
  • POST /api/copy - Copy models
  • POST /api/delete - Delete models
  • POST /api/show - Show model info
  • GET /api/tags - List models
  • GET /api/ls - List models
  • POST /api/stop - Stop operations
  • GET /api/version - Get version
  • POST /api/serve - Serve models
  • POST /api/unload - Unload models

OpenAI-Compatible Endpoints

  • POST /v1/chat/completions - Chat completions
  • POST /v1/completions - Text completions
  • GET /v1/models - List models
  • POST /v1/embeddings - Text embeddings
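
The point of the proxy is to pass these methods through unchanged. A quick TypeScript check (a minimal sketch, assuming the proxy runs on its default port 4000 and a llama2 model is available) confirms both verbs survive the round trip:

// check-methods.ts - minimal sketch; assumes the proxy is at localhost:4000
// A GET passes through as a GET:
const version = await fetch("http://localhost:4000/api/version");
console.log(await version.json()); // e.g. { "version": "..." }

// A POST passes through as a POST:
const show = await fetch("http://localhost:4000/api/show", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ name: "llama2" }), // "name" is Ollama's field for /api/show
});
console.log(await show.json());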

🛠️ Installation

Using npx (Recommended)

npx ollama-proxy-fix

Using npm

npm install -g ollama-proxy-fix
ollama-proxy-fix

From Source

git clone https://github.com/Jassu225/ollama-proxy.git
cd ollama-proxy
npm install
npm run build
npm start

⚙️ Configuration

Pass environment variables on the command line or via the hosting environment to customize the proxy settings:

# Proxy Configuration
OLLAMA_PROXY_PORT=4000
OLLAMA_PROXY_REQUEST_TIMEOUT=120000 # long-running operations automatically get 3x this value (360000)
OLLAMA_PROXY_REQUEST_BODY_LIMIT=50mb

# Ollama Server Configuration
OLLAMA_HOST=localhost
OLLAMA_PORT=11434

Environment Variables

| Variable                          | Default     | Description                                  |
| --------------------------------- | ----------- | -------------------------------------------- |
| OLLAMA_PROXY_PORT                 | 4000        | Port for the proxy server                    |
| OLLAMA_PROXY_REQUEST_TIMEOUT      | 120000      | Request timeout in milliseconds (2 minutes)  |
| OLLAMA_PROXY_REQUEST_BODY_LIMIT   | 50mb        | Maximum request body size                    |
| OLLAMA_HOST                       | localhost   | Ollama server hostname                       |
| OLLAMA_PORT                       | 11434       | Ollama server port                           |
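
For reference, the defaults above could be resolved in code roughly like this (an illustrative TypeScript sketch, not the package's actual source):

// config-sketch.ts - illustrative only; shows how the documented defaults combine
const config = {
  proxyPort: Number(process.env.OLLAMA_PROXY_PORT ?? 4000),
  requestTimeoutMs: Number(process.env.OLLAMA_PROXY_REQUEST_TIMEOUT ?? 120_000),
  requestBodyLimit: process.env.OLLAMA_PROXY_REQUEST_BODY_LIMIT ?? "50mb",
  ollamaHost: process.env.OLLAMA_HOST ?? "localhost",
  ollamaPort: Number(process.env.OLLAMA_PORT ?? 11_434),
};

// Upstream base URL the proxy forwards requests to:
const ollamaBaseUrl = `http://${config.ollamaHost}:${config.ollamaPort}`;
console.log(`proxy :${config.proxyPort} -> ${ollamaBaseUrl}`);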

🚀 Usage

Basic Usage

# Start the proxy server
npx ollama-proxy-fix

# The server will start on port 4000 (or your configured port)
# Proxying requests to Ollama at localhost:11434

Development Mode

# Clone the repository
git clone https://github.com/Jassu225/ollama-proxy.git
cd ollama-proxy

# Install dependencies
npm install

# Start in development mode with hot reload
npm run dev

# Build for production
npm run build

# Start production server
npm start

Testing the Proxy

Once the proxy is running, you can test it:

# Check if the proxy is running
curl http://localhost:4000

# Response:
# {
#   "status": "running",
#   "message": "Ollama Proxy Server is running!",
#   "timestamp": "2025-07-28T06:37:21.249Z"
# }

Example API Calls

Native Ollama API

# Chat completion
curl -X POST http://localhost:4000/api/chat \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama2",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

# List models
curl http://localhost:4000/api/tags

OpenAI-Compatible API

# Chat completion
curl -X POST http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama2",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

# List models
curl http://localhost:4000/v1/models
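
The README's examples use curl, but because the /v1 routes are OpenAI-compatible, the official openai npm client can also be pointed at the proxy by overriding its base URL (the apiKey value below is a placeholder; Ollama ignores it):

// openai-client.ts - uses the official openai package against the proxy (default port assumed)
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "http://localhost:4000/v1", // the proxy's OpenAI-compatible routes
  apiKey: "ollama", // placeholder; not checked by Ollama
});

const completion = await client.chat.completions.create({
  model: "llama2",
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(completion.choices[0].message.content);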

🔧 Advanced Features

Streaming Support

The proxy automatically handles streaming responses when stream: true is set in the request body:

curl -X POST http://localhost:4000/api/chat \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama2",
    "messages": [{"role": "user", "content": "Tell me a story"}],
    "stream": true
  }'
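
From TypeScript, the same stream can be consumed with fetch. Ollama's native endpoints stream newline-delimited JSON, so each complete line parses as its own chunk (a sketch, assuming the default port and a llama2 model):

// stream-chat.ts - consumes the proxy's streaming /api/chat response (NDJSON)
const res = await fetch("http://localhost:4000/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama2",
    messages: [{ role: "user", content: "Tell me a story" }],
    stream: true,
  }),
});

const reader = res.body!.getReader();
const decoder = new TextDecoder();
let buffered = "";
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  buffered += decoder.decode(value, { stream: true });
  const lines = buffered.split("\n");
  buffered = lines.pop()!; // keep the trailing partial line for the next read
  for (const line of lines.filter(Boolean)) {
    const chunk = JSON.parse(line); // one JSON object per line
    process.stdout.write(chunk.message?.content ?? "");
  }
}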

Extended Timeouts

Long-running operations (pull, push, create, show) automatically get extended timeouts (3x the normal timeout) to handle large model operations.
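
Clients should budget for this too. A minimal sketch that sizes a client-side timeout to match the proxy's 3x extension (360000 assumes the default 120000 base):

// pull-with-timeout.ts - client timeout sized to the proxy's extended (3x) timeout
const res = await fetch("http://localhost:4000/api/pull", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ name: "llama2" }),
  signal: AbortSignal.timeout(360_000), // 3 x the default 120000ms
});
console.log(res.status);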

CORS Support

The proxy includes built-in CORS headers for cross-origin requests:

  • Access-Control-Allow-Origin: *
  • Access-Control-Allow-Methods: GET, POST, OPTIONS
  • Access-Control-Allow-Headers: Content-Type, Authorization
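
In practice this means browser code served from another origin can call the proxy directly; a small TypeScript sketch (default port assumed):

// browser sketch - a cross-origin call the headers above permit
const res = await fetch("http://localhost:4000/api/tags");
const data = await res.json(); // /api/tags returns { models: [...] }
console.log(data.models.map((m: { name: string }) => m.name));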

🐛 Troubleshooting

Common Issues

  1. Connection Refused (503)

    • Ensure Ollama is running on the configured host and port
    • Check if OLLAMA_HOST and OLLAMA_PORT are correct
  2. Request Timeout (504)

    • Increase OLLAMA_PROXY_REQUEST_TIMEOUT for long-running operations
    • Check network connectivity to Ollama server
  3. Invalid JSON Body (400)

    • Ensure request body is valid JSON
    • Check Content-Type header is set to application/json
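
These conditions surface as HTTP status codes, so a client can branch on them; a rough TypeScript sketch (default port assumed):

// handle-status.ts - branches on the status codes described above
const res = await fetch("http://localhost:4000/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ model: "llama2", messages: [{ role: "user", content: "Hi" }] }),
});

if (res.status === 503) {
  console.error("Ollama unreachable - check OLLAMA_HOST / OLLAMA_PORT");
} else if (res.status === 504) {
  console.error("Timed out - consider raising OLLAMA_PROXY_REQUEST_TIMEOUT");
} else if (res.status === 400) {
  console.error("Bad request - verify the JSON body and Content-Type header");
} else {
  console.log(await res.json());
}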

📊 Performance

  • Low Latency: Direct proxy with minimal overhead
  • Memory Efficient: Streams responses without buffering
  • Scalable: Handles multiple concurrent requests
  • Reliable: Robust error handling and recovery

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

📝 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

  • Built for RunPod environments where HTTP methods are stripped
  • Compatible with any Ollama hosting service
  • Inspired by the need for proper HTTP method preservation in proxy environments

Made with ❤️ for the Ollama community