@sski/link-ai
v1.0.1
Run an LLM on one machine, access it from anywhere. CLI + GUI with Tailscale support.
link-ai
Run an LLM on one machine. Access it from anywhere.
An open-source alternative to LM Studio's LM Link. Run your local LLM on a server and access it from any machine via CLI or web GUI.
What is this?
You have a powerful machine running an LLM (via Ollama, vLLM, llama.cpp, etc.). You want to use it from your laptop, your phone, or another server — without exposing ports to the internet.
link-ai is a lightweight proxy + client that makes this easy.
```
┌──────────────────────────┐                ┌──────────────────────┐
│ Server (anywhere)        │                │ Your machine         │
│                          │                │                      │
│ Ollama / vLLM / etc      │                │ link-ai chat         │
│ link-ai host :8080       │ ◄──────────► │ link-ai gui          │
│                          │   internet     │ link-ai dashboard    │
└──────────────────────────┘                └──────────────────────┘
```
Works across countries. Tested from Germany to Poland.
Quick Start
Install
```bash
npm install -g @sski/link-ai
```
On the machine running the LLM
```bash
# If using Ollama
link-ai host --upstream http://localhost:11434 --port 8080 --all-interfaces

# If using vLLM
link-ai host --upstream http://localhost:8000 --port 8080 --all-interfaces

# With Tailscale (recommended for remote access)
link-ai host --upstream http://localhost:11434 --port 8080 --all-interfaces --tailscale
```
On your local machine
```bash
# Chat via CLI
link-ai chat 192.168.1.100:8080 -m "hello"
link-ai chat 192.168.1.100:8080 -i   # interactive mode

# Open web GUI
link-ai gui 192.168.1.100:8080

# Open dashboard (manage multiple hosts)
link-ai dashboard

# Test connection
link-ai connect 192.168.1.100:8080

# List models
link-ai models 192.168.1.100:8080
```
Why link-ai?
You can already access remote LLMs via SSH or RDP. So why another tool?
| Feature | SSH + Ollama | RDP + LM Studio | LM Studio Link | link-ai |
|---|:---:|:---:|:---:|:---:|
| Setup time | 5 min | 30 min | Waitlist | npm i -g |
| Chat UI | Terminal only | Full desktop GUI | GUI | ChatGPT-style web GUI |
| Phone/tablet | Painful | Laggy | Yes | Yes, just a browser |
| Multiple hosts | Manual | One at a time | Limited | Dashboard with status |
| Bandwidth | Low | High (streaming desktop) | Medium | Low (just API calls) |
| Headless server | Yes | Needs desktop env | Yes | Yes |
| Open source | Yes | N/A | No | Yes |
| Auto model detect | No | No | Yes | Yes |
| Capability detection | No | No | No | Yes |
| Web search | No | No | No | Yes |
| File upload | No | No | No | Yes |
| Works across countries | Yes (if port open) | Yes (if port open) | Yes (Tailscale) | Yes (Tailscale) |
| Security | Key-based | Password | Tailscale | Tailscale + API key |
The bottom line:
- SSH = for sysadmins who live in terminal
- RDP = for when you need the full desktop (overkill for LLM chat)
- LM Studio Link = polished but closed source, waitlist-gated
- link-ai = open source, instant setup, works on any device, purpose-built for LLM access
link-ai isn't trying to replace SSH or RDP. It's the right tool for one specific job — chatting with a remote LLM from any device, with a nice UI, without the overhead.
Features
- OpenAI-compatible proxy — works with Ollama, vLLM, llama.cpp, LocalAI, etc.
- Streaming responses — real-time token-by-token output via SSE
- CLI chat — one-shot messages or interactive conversation with history
- Web GUI — ChatGPT-style browser interface with model selector
- Dashboard — manage multiple hosts, see status, latency, models
- Auto model detection — no need to specify model names
- Optional auth — `--auth <key>` to protect your host
- Tailscale support — show Tailscale IPs, list peers
- Works across countries — tested from Germany to Poland (105ms latency)
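Streaming responses arrive as OpenAI-style server-sent events. As an illustration of what a client has to parse — assuming the standard `data: {...}` chunk format with a final `data: [DONE]` sentinel; this helper is not code from link-ai itself:

```javascript
// Extract the token text from one SSE line of an OpenAI-style stream.
// A line looks like: data: {"choices":[{"delta":{"content":"Hi"}}]}
// Returns null for comments, the [DONE] sentinel, or empty deltas.
function parseSSELine(line) {
  if (!line.startsWith("data: ")) return null;
  const payload = line.slice(6).trim();
  if (payload === "[DONE]") return null;
  const chunk = JSON.parse(payload);
  return chunk.choices?.[0]?.delta?.content ?? null;
}
```

A client accumulates these fragments to render token-by-token output in the terminal or browser.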
Commands
| Command | Description |
|---------|-------------|
| link-ai host | Start proxy server for your LLM |
| link-ai chat <address> | Chat with a remote LLM (CLI) |
| link-ai gui <address> | Open chat GUI in browser |
| link-ai dashboard | Host management dashboard |
| link-ai connect <address> | Test connection to a host |
| link-ai models <address> | List available models |
| link-ai tailscale status | Show Tailscale network status |
| link-ai tailscale ip | Show this machine's Tailscale IP |
| link-ai tailscale peers | List Tailscale peers |
Host Options
```
link-ai host [options]

  -u, --upstream <url>    Upstream LLM API URL (required)
  -p, --port <number>     Port to listen on (default: 8080)
  -a, --auth <key>        API key for authentication
  -t, --tailscale         Show Tailscale IP
  --all-interfaces        Listen on 0.0.0.0 (default: 127.0.0.1)
```
Security
- No ports exposed to the internet — use Tailscale or SSH tunnel
- Optional API key auth — `--auth <key>` on host, same key on client
- CORS enabled — works from browser GUIs
- Your data stays on your machines — link-ai doesn't store or send anything
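For illustration, here is roughly what an authenticated client request to an `--auth`-protected host could look like, assuming the proxy exposes the standard OpenAI `/v1/chat/completions` path and checks an `Authorization: Bearer <key>` header (both reasonable guesses for an OpenAI-compatible proxy, not confirmed internals):

```javascript
// Build a fetch()-ready request for a link-ai host. The Bearer header is
// only attached when an API key is in use.
function buildChatRequest(address, apiKey, message) {
  return {
    url: `http://${address}/v1/chat/completions`,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        ...(apiKey ? { Authorization: `Bearer ${apiKey}` } : {}),
      },
      body: JSON.stringify({ messages: [{ role: "user", content: message }] }),
    },
  };
}

// Usage:
// const { url, options } = buildChatRequest("192.168.1.100:8080", "secret", "hello");
// fetch(url, options).then(r => r.json()).then(console.log);
```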
Using with Tailscale
Tailscale gives your machines stable IPs that work from anywhere, without port forwarding.
```bash
# Install Tailscale on both machines
# https://tailscale.com/download

# On both machines
tailscale up

# Get your Tailscale IP
link-ai tailscale ip

# List connected machines
link-ai tailscale peers

# Start host with Tailscale
link-ai host --upstream http://localhost:11434 --all-interfaces --tailscale

# Connect from anywhere
link-ai chat 100.64.0.5:8080 -m "hello from across the world"
```
Dashboard
The dashboard lets you manage multiple LLM hosts from one place.
```bash
link-ai dashboard --port 9090
```
Then open http://localhost:9090/dashboard.
Features:
- Add/remove hosts with display names
- See online/offline status, latency, available models
- Auto-refresh every 10 seconds
- Click to open chat with any host
- Dark/light theme
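The status/latency checks the dashboard performs can be approximated with the standard OpenAI `/v1/models` endpoint. A hypothetical sketch (not the real `dashboard-server.js`; `checkHost` is an invented name):

```javascript
// Ping a host's model list endpoint, recording reachability, round-trip
// latency, and the model IDs it reports. Requires Node 18+ for global fetch;
// an injectable fetchImpl keeps the function testable offline.
async function checkHost(address, fetchImpl = fetch) {
  const start = Date.now();
  try {
    const res = await fetchImpl(`http://${address}/v1/models`);
    const body = await res.json();
    return {
      online: res.ok,
      latencyMs: Date.now() - start,
      models: (body.data ?? []).map((m) => m.id),
    };
  } catch {
    // Host unreachable: report offline rather than throwing.
    return { online: false, latencyMs: null, models: [] };
  }
}
```

Running this on a 10-second timer over a list of saved hosts gives you the essentials of the dashboard view.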
Architecture
```
link-ai/
├── bin/link-ai.js               # CLI entry point
├── src/
│   ├── host/
│   │   ├── server.js            # Proxy server (HTTP + WebSocket)
│   │   └── tailscale.js         # Tailscale CLI integration
│   ├── client/
│   │   ├── api-client.js        # HTTP client + connect/models
│   │   ├── chat-cli.js          # Interactive CLI chat
│   │   └── gui-client.js        # Web GUI server
│   ├── web/
│   │   ├── dashboard-server.js  # Dashboard server
│   │   └── public/              # Static frontend files
│   └── shared/
│       ├── config.js            # Config parsing
│       └── utils.js             # Utilities
```
Supported Backends
Any LLM server with an OpenAI-compatible API:
- Ollama — `http://localhost:11434`
- vLLM — `http://localhost:8000`
- llama.cpp server — `http://localhost:8080`
- LocalAI — `http://localhost:8080`
- text-generation-webui — `http://localhost:5000`
- Any other OpenAI-compatible API
Development
```bash
git clone https://github.com/yourusername/link-ai.git
cd link-ai
npm install
node bin/link-ai.js --help
```
License
MIT
Install · Commands · Dashboard · Security
Built with Node.js. No frameworks. No build step. Just works.
