# lm-oandaproxy

v0.1.0
Oanda API proxy with unlimited historical candle depth.
Continuously captures OHLCV candle data from Oanda, stores it as compressed archives, and serves it via an Oanda v3-compatible REST API. Any Oanda client can point at this proxy and get years of historical data in a single request — no 5000-bar limit.
All other Oanda endpoints (accounts, orders, trades, positions, pricing, streaming) are transparently proxied to the real Oanda API.
## Hosted Service
A public instance is available at https://oanda.langmart.ai — use it directly without any deployment.
Just swap your Oanda base URL and use your own Oanda Bearer token:

```shell
# Health check (no auth required)
curl https://oanda.langmart.ai/health

# Candles — use your own Oanda token
curl -H "Authorization: Bearer YOUR_OANDA_TOKEN" \
  "https://oanda.langmart.ai/v3/instruments/EUR_USD/candles?granularity=H4&count=5000"

# All Oanda endpoints work — accounts, orders, trades, pricing, streaming
curl -H "Authorization: Bearer YOUR_OANDA_TOKEN" \
  "https://oanda.langmart.ai/v3/accounts"
```

Both practice and live Oanda tokens work. Your token determines everything:
| Your token type | Candle history | Live fill | Orders, trades, accounts |
|----------------|---------------|-----------|-------------------------|
| Practice | Shared archive | Fetched from api-fxpractice.oanda.com | Proxied to api-fxpractice.oanda.com |
| Live | Shared archive | Fetched from api-fxtrade.oanda.com | Proxied to api-fxtrade.oanda.com |
Candle history is shared because Oanda practice and live return identical OHLCV market data — only account-specific endpoints (orders, trades, positions, balance) differ between environments.
The hosted instance captures data for major forex pairs, commodities, and metals across 8 timeframes (S5 through H8).
## Self-Hosted Setup

### Install

```shell
npm install -g lm-oandaproxy
```

Or run directly:

```shell
npx lm-oandaproxy
```

Requires Node.js 18+.
### Quick Start

```shell
# 1. Login with your Oanda API token
lm-oandaproxy login

# 2. Start the capture daemon (fetches candle data in background)
lm-oandaproxy capture start

# 3. Start the API server
lm-oandaproxy server start

# 4. Use it — same as the real Oanda API
curl -H "Authorization: Bearer YOUR_TOKEN" \
  "http://localhost:8900/v3/instruments/EUR_USD/candles?granularity=H4&count=5000"
```

Point any Oanda client at http://localhost:8900 instead of https://api-fxpractice.oanda.com.
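The same request from code is just a base-URL swap. A minimal sketch (the `candlesUrl` helper is illustrative, not part of the package):

```javascript
// Build an Oanda-v3-style candles URL against any base (real API or this proxy).
function candlesUrl(base, instrument, params) {
  const query = new URLSearchParams(params).toString();
  return `${base}/v3/instruments/${instrument}/candles?${query}`;
}

const url = candlesUrl('http://localhost:8900', 'EUR_USD', {
  granularity: 'H4',
  count: '5000',
});
// → http://localhost:8900/v3/instruments/EUR_USD/candles?granularity=H4&count=5000

// With the server running and a valid token:
// const res = await fetch(url, {
//   headers: { Authorization: 'Bearer YOUR_OANDA_TOKEN' },
// });
// const { candles } = await res.json();
```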
### Install as System Service

```shell
# Install systemd services (auto-restart on crash, auto-start on reboot)
lm-oandaproxy install-service

# Remove services
lm-oandaproxy remove-service
```

This creates and enables two systemd services:

- `lm-oandaproxy-server` — API server (port 8900)
- `lm-oandaproxy-capture` — background data capture daemon

Both auto-detect the current Node.js path, working directory, user, and port configuration. Requires Linux with systemd and sudo access.
After installing, manage with standard systemctl:

```shell
sudo systemctl status lm-oandaproxy-server
sudo systemctl restart lm-oandaproxy-capture
journalctl -u lm-oandaproxy-server -f
```

### Updating

```shell
cd /path/to/lm-oandaproxy
git pull && npm run build
sudo systemctl restart lm-oandaproxy-server lm-oandaproxy-capture
```

## How It Works
```
                          lm-oandaproxy (port 8900)
                         ┌─────────────────────────┐
 Client request          │                         │
 GET /v3/.../candles ───>│ Stored data (gzip)      │
                         │ + live Oanda fill       │──> Response (unlimited bars)
                         │                         │
 GET /v3/accounts/... ──>│ Proxy to real Oanda     │──> Response (passthrough)
 POST /v3/.../orders ───>│ (practice or live)      │──> Response (passthrough)
                         └─────────────────────────┘
                                     ^
                          ┌──────────┴──────────┐
                          │   Capture Daemon    │
                          │   S5 every 3h       │
                          │   M1 every 12h      │
                          │   M5/M15 every 3d   │
                          │   M30-H8 every 14d  │
                          └─────────────────────┘
```

Candle requests are served from the compressed local archive, merged with a live Oanda fetch to fill the gap to the current bar. The client always gets the most recent data.

Everything else (accounts, orders, trades, positions, pricing, streaming) is proxied directly to the real Oanda API. POST, PUT, PATCH, and DELETE all work.
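The archive-plus-live-fill merge can be pictured as combining two candle lists, deduplicated by timestamp. This is an assumed sketch of the idea, not the package's actual implementation; letting the live fetch win on overlap matters because a bar that was incomplete when archived may be complete now:

```javascript
// Merge archived candles with a fresh live fetch, deduplicating by
// timestamp; on overlap the live candle replaces the archived one.
function mergeCandles(archived, live) {
  const byTime = new Map();
  for (const c of archived) byTime.set(c.time, c);
  for (const c of live) byTime.set(c.time, c); // live overwrites overlap
  return [...byTime.values()].sort((a, b) => a.time.localeCompare(b.time));
}

const archived = [
  { time: '2024-01-01T00:00:00Z', complete: true },
  { time: '2024-01-01T04:00:00Z', complete: false }, // was still forming
];
const live = [
  { time: '2024-01-01T04:00:00Z', complete: true }, // now complete
  { time: '2024-01-01T08:00:00Z', complete: false },
];
const merged = mergeCandles(archived, live);
// → 3 bars; the 04:00 bar comes from the live fetch (complete: true)
```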
## CLI Reference

```
lm-oandaproxy <command>

  login [--practice|--live]                Verify and save Oanda API credentials
  logout [practice|live|all]               Remove saved credentials
  status                                   Show credentials, services, storage

  server start [--port N] [--public]       Start API server (background)
  server stop                              Stop API server
  server foreground [--port N] [--public]  Run server in foreground

  capture start                            Start capture daemon (background)
  capture stop                             Stop capture daemon
  capture run-once                         Single fetch cycle (foreground)

  install-service                          Install systemd services (Linux, requires sudo)
  remove-service                           Stop and remove systemd services
  version                                  Show version
```

## API Endpoints
### Local (stored data + live fill)
| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | /v3/instruments/{instrument}/candles | Candle data (no 5000-bar limit for from+to queries) |
| GET | /v3/accounts/{id}/instruments/{instrument}/candles | Same, account-scoped |
| GET | /v3/instruments | List instruments with available granularities |
| GET | /v3/accounts/{id}/instruments | Same, account-scoped |
### Proxied to Oanda
| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | /v3/accounts | List accounts |
| GET | /v3/accounts/{id}/summary | Account summary |
| POST | /v3/accounts/{id}/orders | Create order |
| PUT | /v3/accounts/{id}/orders/{id}/cancel | Cancel order |
| GET | /v3/accounts/{id}/openTrades | List open trades |
| PUT | /v3/accounts/{id}/trades/{id}/close | Close trade |
| GET | /v3/accounts/{id}/pricing | Live pricing |
| GET | /v3/accounts/{id}/pricing/stream | Price stream (chunked) |
| ... | /v3/* | All other Oanda v3 endpoints |
All 32 Oanda v3 endpoints are covered.
### Server-only
| Method | Endpoint | Auth | Description |
|--------|----------|------|-------------|
| GET | /health | Public | Health check |
| GET | /status | Public | Storage stats, per-instrument detail, disk usage |
## Authentication

Use the same Authorization header as with the real Oanda API:

```
Authorization: Bearer YOUR_OANDA_TOKEN
```

The server auto-detects your token type and routes accordingly:

- Practice tokens (two hex halves joined by a hyphen) route to api-fxpractice.oanda.com
- Live tokens (different format) route to api-fxtrade.oanda.com
This means you can use a single proxy endpoint for both practice and live trading. Candle history is shared (same market data), while account operations (orders, trades, positions) go to the correct Oanda environment based on your token.
No registration or API key needed for the proxy itself — just your existing Oanda token.
Use the `--public` flag to disable auth entirely (for local development only).
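The routing rule above can be sketched as a small predicate. This mirrors the description only; the package's actual detection logic (including the exact hex-half lengths) is internal, so the regex here is an illustrative assumption:

```javascript
// Assumed heuristic: a practice token looks like two hex halves joined
// by a hyphen; anything else is treated as a live token.
const PRACTICE_TOKEN = /^[0-9a-f]+-[0-9a-f]+$/;

function upstreamFor(token) {
  return PRACTICE_TOKEN.test(token)
    ? 'https://api-fxpractice.oanda.com'
    : 'https://api-fxtrade.oanda.com';
}
```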
## Credentials

Dev mode — a `.env` file in the project root:

```
OANDA_API_TOKEN=your-token-here
OANDA_ACCOUNT_ID=101-003-XXXXXXX-XXX
```

Prod mode — saved via the CLI:

```shell
lm-oandaproxy login
# Verifies token with Oanda, saves to ~/.lm-oandaproxy/config.json
```

## Data Storage
Candle data is stored as gzip-compressed JSONL (raw Oanda format, no transformation):

```
~/.lm-oandaproxy/
  config.json                  # Saved credentials
  data/
    {INSTRUMENT}/
      S5.jsonl.gz              # 5-second candles
      M1.jsonl.gz              # 1-minute candles
      ...
      H8.jsonl.gz              # 8-hour candles
      .state.json              # Last fetch state per instrument/TF
  logs/
    capture-YYYY-MM-DD.log
    server-YYYY-MM-DD.log
    capture-YYYY-MM-DD.log.gz  # Old logs compressed, never deleted
```

## Timeframes Captured
| TF  | Poll Interval | 5000 bars covers |
|-----|---------------|------------------|
| S5  | 3 hours       | 6.9 hours        |
| M1  | 12 hours      | 3.5 days         |
| M5  | 3 days        | 17 days          |
| M15 | 3 days        | 52 days          |
| M30 | 14 days       | 104 days         |
| H1  | 14 days       | 208 days         |
| H4  | 14 days       | 833 days         |
| H8  | 14 days       | 1667 days       |
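The "5000 bars covers" column is just bar duration times bar count. A quick sketch reproducing it:

```javascript
// Seconds per bar for each captured granularity.
const SECONDS = {
  S5: 5, M1: 60, M5: 300, M15: 900,
  M30: 1800, H1: 3600, H4: 14400, H8: 28800,
};

// How many days a given number of bars spans at a granularity.
function coverageDays(granularity, bars = 5000) {
  return (SECONDS[granularity] * bars) / 86400;
}

coverageDays('H4'); // → 833.33… days, matching the table
coverageDays('H8'); // → 1666.67… days (~1667)
```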
## Compression
Raw Oanda JSON is stored at ~135 bytes/bar, compressed to ~12-21 bytes/bar (7-11x ratio). Monthly growth for 10 instruments across all 8 timeframes: ~21 MB compressed.
S5 data has a rolling window cap of 2M bars (~9 months) to bound memory usage during merges.
## Programmatic API

```javascript
import { createServer, runDaemon, loadConfig } from 'lm-oandaproxy';

// Start server programmatically
const server = createServer(8900);
server.listen(8900);

// Or run capture daemon
await runDaemon();
```

## Log Management
- Daily log files: `capture-{date}.log`, `server-{date}.log`
- Active log capped at 10 MB (oldest half truncated)
- Logs older than 2 days are compressed to `.log.gz`
- Compressed logs are kept forever (never deleted)
## License
AGPL-3.0-or-later
