# OHMS CLI

The official command-line interface for OHMS AI Foundry (v0.1.13): deploy and manage AI agents on the Internet Computer.
## Installation

```bash
npm install -g ohms-cli
```

## Quick Start
Configure the CLI (first time only):

```bash
ohms config
```

Deploy the OHMS platform (if not already deployed):

```bash
ohms deploy
```

Upload and deploy an AI model:

```bash
ohms ingest model.onnx --name "my-model" --type "text"
```

List deployed agents:

```bash
ohms list
```

Run inference:

```bash
ohms predict <agent-id> "Hello, world!"
```
## Commands

### `ohms config`

Configure OHMS CLI settings, including the target network and canister IDs.

Options:

- `--network <network>` - Target network (`local`, `mainnet`)
- `--registry <id>` - Registry canister ID
- `--orchestrator <id>` - Orchestrator canister ID
### `ohms deploy`

Deploy OHMS platform canisters to the Internet Computer.

Options:

- `-n, --network <network>` - Target network (`local`, `ic`) (default: `local`)
- `-r, --reinstall` - Reinstall canisters (destroys existing data)
- `-u, --upgrade` - Upgrade existing canisters
### `ohms ingest <model-path>`

Upload and deploy an ONNX model as an agent.

Arguments:

- `model-path` - Path to the ONNX model file

Options:

- `-n, --name <name>` - Agent name
- `-t, --type <type>` - Model type (`text`, `image`, `code`)
- `-d, --description <desc>` - Agent description
- `--public` - Make the agent publicly accessible
- `--skip-conversion` - Skip ONNX-to-WASM conversion (for testing)
### `ohms list`

List deployed agents.

Options:

- `-t, --type <type>` - Filter by agent type (`text`, `image`, `code`)
- `-f, --format <format>` - Output format (`table`, `json`) (default: `table`)
- `--filter <text>` - Filter agents by name or description
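When scripting against `ohms list --format json`, the output can be post-processed with standard tools. A minimal sketch, assuming the command returns a JSON array of agent records; the exact schema is not documented here, so the fields below are illustrative only:

```bash
# Stand-in for output captured from `ohms list --format json`
# (the "id"/"name"/"type" fields are assumptions for illustration).
cat > agents.json <<'EOF'
[{"id": "agent-1", "name": "tinyllama", "type": "text"},
 {"id": "agent-2", "name": "mobilenet", "type": "image"}]
EOF

# Print the names of text agents (jq would work equally well here).
python3 -c '
import json
agents = json.load(open("agents.json"))
print(*[a["name"] for a in agents if a["type"] == "text"])
'
```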
### `ohms predict <agent> [input]`

Run inference on a deployed agent.

Arguments:

- `agent` - Agent ID or name
- `input` - Input data for prediction

Options:

- `-f, --format <format>` - Input format (`text`, `json`) (default: `text`)
- `-o, --output <format>` - Output format (`text`, `json`) (default: `text`)
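For image agents, the input is typically passed as a base64 data URL (as in the image example below). A minimal shell sketch for building one; `photo.jpg` is a placeholder path, and the `printf` line writes stand-in bytes only so the sketch runs anywhere:

```bash
# photo.jpg is a placeholder: drop the printf line and use your real image.
printf '\xff\xd8\xff\xe0' > photo.jpg
# Strip newlines so the encoded URL is a single shell token.
IMG_B64=$(base64 < photo.jpg | tr -d '\n')
echo "data:image/jpeg;base64,${IMG_B64}"
```

Pass the printed string as the prediction input, e.g. `ohms predict mobilenet "data:image/jpeg;base64,${IMG_B64}"`.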
## Examples

### Deploy a Text Model

```bash
# Upload and deploy a language model
ohms ingest tinyllama.onnx --name "tinyllama" --type "text" --description "Tiny Llama language model"

# Run inference
ohms predict tinyllama "What is the capital of France?"
```

### Deploy an Image Model
```bash
# Upload and deploy an image classification model
ohms ingest mobilenet.onnx --name "mobilenet" --type "image" --description "MobileNet image classifier"

# Run inference (input as a base64 data URL or a file path)
ohms predict mobilenet "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQ..."
```

### Deploy to Mainnet
```bash
# Deploy the platform to mainnet
ohms deploy --network ic

# Configure the CLI for mainnet
ohms config --network mainnet

# Upload a model to mainnet
ohms ingest model.onnx --name "production-model"
```

## Requirements
- Node.js 18.0.0 or higher
- dfx (DFINITY Canister SDK) for deployment
- Internet Computer identity (for mainnet)
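A quick way to check these prerequisites from a shell (a sketch; it only reports what is on your `PATH`):

```bash
# Report whether the required tools are installed.
command -v node >/dev/null && node --version || echo "Node.js not found (18.0.0+ required)"
command -v dfx  >/dev/null && dfx --version  || echo "dfx not found (install the DFINITY Canister SDK)"
```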
## Development
```bash
# Clone the repository
git clone https://github.com/OHMS-DeAI/ohms-core.git
cd ohms-core/packages/cli

# Install dependencies
npm install

# Build the CLI
npm run build

# Run in development mode
npm run dev

# Test the CLI
node dist/index.js --help
```

## Support
- Documentation: https://ohms.ai/docs
- Issues: https://github.com/OHMS-DeAI/ohms-core/issues
- Discord: https://discord.gg/ohms
## License
MIT License - see LICENSE for details.
