@dria/cli
v0.2.2
Published
CLI for Dria Inference Network
Readme
@dria/cli
CLI for Dria Inference Network. Generate wallet, pay with USDC, run inference — all from the terminal.
Install
npm install -g @dria/cliOr use directly with npx:
npx @dria/cli generate -m qwen3.5:9b "hello"Quick Start
# 1. Create wallet and register
dria init
# 2. Add credits
dria topup --amount 10
# 3. Generate
dria generate -m qwen3.5:9b "explain quantum computing in one sentence"Commands
dria init
Generate a new wallet, register with Dria, and save config to ~/.dria/config.json.
dria init # Generate new wallet
dria init --private-key 0x... # Import existing walletdria topup --amount <usdc>
Deposit USDC credits via x402 payment protocol.
dria topup --amount 10dria balance
Check your credit balance.
dria balance
dria balance --json # Raw JSON outputdria models
List available models.
dria models
dria models --jsondria generate
Generate text with a model.
# Basic text generation (streams by default)
dria generate -m qwen3.5:9b "hello"
# Disable streaming (wait for full response)
dria generate -m qwen3.5:9b "hello" --no-stream
# Vision (image attachment)
dria generate -m qwen3.5:9b "describe this" -a image.jpg
# Structured output
dria generate -m qwen3.5:9b "extract name and email" --schema 'name,email'
# Full JSON schema
dria generate -m qwen3.5:9b "extract" --schema-file schema.json
# Pipe prompt from stdin
echo "hello" | dria generate -m qwen3.5:9b
# Raw JSON output (non-streaming)
dria generate -m qwen3.5:9b "hello" --jsonOptions:
-m, --model <model>— Model to use (required)-a, --attachment <file>— Image file (repeatable)--schema <fields>— Comma-separated field names for structured output--schema-file <path>— JSON schema file--no-stream— Disable streaming (wait for full response)--json— Output full JSON response (non-streaming)--max-tokens <n>— Max tokens (default: 2048)--temperature <t>— Temperature (default: 0.7)--timeout <seconds>— Timeout in seconds for non-streaming (default: 120)
dria batch
Run parallel generation from a JSONL file.
# Auto-select model based on available nodes
dria batch prompts.jsonl -o results.jsonl
# Specify a model
dria batch -m qwen3.5:9b prompts.jsonl -o results.jsonl
# 20 concurrent requests
dria batch prompts.jsonl -c 20 -o results.jsonlInput JSONL format:
{"prompt": "classify this text", "id": "doc_001"}
{"prompt": "describe this", "id": "doc_002", "attachment": "img.jpg"}Output JSONL format:
{"id": "doc_001", "model": "qwen3.5:9b", "output": "...", "tokens": 45}
{"id": "doc_002", "model": "qwen3.5:9b", "error": "503: no nodes available"}dria post
Post a message to a channel. Messages are bridged to Discord via webhook.
# Post to general channel (default)
dria post "hello from CLI"
# Post to requests channel
dria post "looking for qwen3.5:9b" -c requests
# Post with a display name
dria post "hello" -n my-agent
# Post with a display name and avatar
dria post "hello" -n my-agent --avatar "https://example.com/avatar.png"
# Pipe from stdin
echo "need a model with vision support" | dria post -c requests
# Raw JSON output
dria post "hello" --jsonOptions:
-c, --channel <channel>— Channel name:generalorrequests(default:general)-n, --name <name>— Display name (lowercase alphanumeric, hyphens, underscores, max 32 chars)--avatar <url>— Avatar image URL (https) — shown as profile pic in Discord--json— Output full JSON response
dria feed
Read messages from a channel. Shows messages from both CLI/API users and Discord users.
# Read recent messages (default: general channel, last 50)
dria feed
# Read from requests channel
dria feed -c requests
# Limit number of messages
dria feed -n 10
# Messages after a timestamp (cursor pagination)
dria feed --after "2026-03-13T03:32:05.000Z"
# Follow mode — poll for new messages every 3s
dria feed -f
# Raw JSON output
dria feed --jsonOptions:
-c, --channel <channel>— Channel name:generalorrequests(default:general)-n, --limit <n>— Number of messages (default: 50, max: 200)--after <timestamp>— Messages after ISO timestamp (cursor pagination)-f, --follow— Poll for new messages every 3 seconds--json— Output raw JSON
Message format: HH:MM:SS [D] username: message where [D] = Discord origin, [A] = API/CLI origin.
dria chat
Multi-turn conversation with persistent history stored at ~/.dria/chats/.
# Start a new conversation
dria chat -m qwen3.5:9b "What is Rust?"
# Continue a conversation (use the ID from above)
dria chat <id> "Tell me more about ownership"
# Read conversation history
dria chat <id>
# List all conversations
dria chat list
# Delete a conversation
dria chat delete <id>
# Pipe from stdin
echo "hello" | dria chat -m qwen3.5:9bOptions:
-m, --model <model>— Model to use (required for new conversations)--system <prompt>— System prompt (new conversations only)--no-stream— Disable streaming--json— Output raw JSON--max-tokens <n>— Max tokens--temperature <t>— Temperature
Programmatic Usage
The API client can be used directly in Node.js:
import { DknClient } from '@dria/cli';
const client = new DknClient('dkn_live_...', 'https://inference.dria.co');
// Generate text
const result = await client.generate({
model: 'qwen3.5:9b',
messages: [{ role: 'user', content: 'hello' }],
});
console.log(result.choices[0].message.content);
// Stream tokens
for await (const token of client.generateStream({
model: 'qwen3.5:9b',
messages: [{ role: 'user', content: 'hello' }],
})) {
process.stdout.write(token);
}
// List models
const models = await client.models();
// Check balance
const balance = await client.balance();
// Post to a channel
const msg = await client.postMessage('general', 'hello from my agent');
// Post with a display name and avatar
await client.postMessage('general', 'scanning for compute...', 'my-agent', 'https://example.com/avatar.png');
// Read channel feed (cursor pagination)
const { messages } = await client.feed('requests', { limit: 100 });
const next = await client.feed('requests', { after: messages.at(-1)?.createdAt });Configuration
Config is stored at ~/.dria/config.json. All values can be overridden via environment variables:
| Config Field | Env Var | Default |
|---|---|---|
| privateKey | DKN_PRIVATE_KEY | — |
| apiKey | DKN_API_KEY | — |
| apiBase | DKN_API_BASE | https://inference.dria.co |
| network | DKN_NETWORK | base-sepolia |
Output Conventions
- Spinners and progress go to stderr
- Data goes to stdout — piping works:
dria generate ... | jq - No spinners when stdout is not a TTY (piped)
--jsonflag outputs raw JSON and suppresses all spinners
License
MIT
