# Pioneer CLI

`@fastino-ai/pioneer-cli` v0.2.0
Command-line interface for the Pioneer AI training platform with an intelligent chat agent.
```
██████╗ ██╗ ██████╗ ███╗   ██╗███████╗███████╗██████╗
██╔══██╗██║██╔═══██╗████╗  ██║██╔════╝██╔════╝██╔══██╗
██████╔╝██║██║   ██║██╔██╗ ██║█████╗  █████╗  ██████╔╝
██╔═══╝ ██║██║   ██║██║╚██╗██║██╔══╝  ██╔══╝  ██╔══██╗
██║     ██║╚██████╔╝██║ ╚████║███████╗███████╗██║  ██║
╚═╝     ╚═╝ ╚═════╝ ╚═╝  ╚═══╝╚══════╝╚══════╝╚═╝  ╚═╝
```

## Features
- Interactive Chat Agent: Claude Code-like experience with bash execution, file operations, and code sandbox
- ML Integrations: Modal.com for serverless GPU compute, Weights & Biases for experiment tracking
- Self-Evolution: Autonomous self-improvement with feedback collection and model fine-tuning
- Budget Management: Token, time, and cost tracking with configurable limits
- Pioneer Platform: Full access to Pioneer AI training platform APIs
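As a rough illustration of the budget management feature above, token, cost, and wall-clock limits can be tracked with a small accumulator. The `BudgetTracker` class and its fields here are illustrative assumptions, not the CLI's actual `BudgetManager` API:

```typescript
// Hedged sketch of token/time/cost budget tracking; names are illustrative.
interface Budget {
  maxTokens: number; // total tokens allowed
  maxCost: number;   // total spend in dollars
  maxTime: number;   // wall-clock limit in seconds
}

class BudgetTracker {
  private tokens = 0;
  private cost = 0;
  private readonly start = Date.now();

  constructor(private readonly limits: Budget) {}

  // Record usage after each LLM call.
  record(tokens: number, cost: number): void {
    this.tokens += tokens;
    this.cost += cost;
  }

  // True once any limit (tokens, dollars, or elapsed seconds) is exceeded.
  exhausted(): boolean {
    return (
      this.tokens > this.limits.maxTokens ||
      this.cost > this.limits.maxCost ||
      (Date.now() - this.start) / 1000 > this.limits.maxTime
    );
  }
}
```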
## Installation

### Quick Install (requires Bun)
```bash
curl -fsSL https://pioneer.ai/install.sh | sh
```

### Manual Install

```bash
# Install Bun if needed
curl -fsSL https://bun.sh/install | bash

# Clone and install
git clone https://github.com/fastino-ai/pioneer-cli.git
cd pioneer-cli/
bun install

# Run directly
bun run src/index.tsx --help
```

## Chat Agent
The chat agent provides a Claude Code-like experience with powerful capabilities:
### Starting the Chat

```bash
# Interactive chat mode
pioneer chat

# With specific provider/model
pioneer chat --provider anthropic --model claude-sonnet-4-20250514
pioneer chat --provider openai --model gpt-4o

# Run a single command
pioneer chat --message "Create a Python script that analyzes CSV files"
```

### Capabilities
| Capability | Description |
|------------|-------------|
| Bash Execution | Run shell commands, install packages, manage system |
| File Operations | Read, write, edit, search files and directories |
| @ File References | Reference local files with @path syntax like Claude Code |
| Code Sandbox | Execute Python, JavaScript, TypeScript, Bash, Ruby, Go in isolation |
| Modal.com | Deploy ML workloads on serverless GPUs |
| Weights & Biases | Track experiments and metrics |
| Model Training | Fine-tune models with LoRA or full training |
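One way to picture the code-sandbox capability above: write the snippet to a throwaway temp directory and run it with a pinned working directory and timeout. This is a minimal sketch under assumed behavior, not the CLI's actual `sandbox` tool, and it only covers JavaScript:

```typescript
// Naive "isolated" code runner sketch; the real sandbox tool may differ.
import { spawnSync } from "node:child_process";
import { mkdtempSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

function runJavaScript(code: string, timeoutMs = 5000): string {
  // Write the snippet to a scratch dir so it cannot clobber project files.
  const dir = mkdtempSync(join(tmpdir(), "pioneer-sandbox-"));
  const file = join(dir, "snippet.js");
  writeFileSync(file, code);

  // Run with the current runtime executable, cwd pinned to the scratch dir.
  const result = spawnSync(process.execPath, [file], {
    cwd: dir,
    timeout: timeoutMs,
    encoding: "utf8",
  });
  if (result.status !== 0) throw new Error(result.stderr || "sandbox run failed");
  return result.stdout.trim();
}
```

A production sandbox would additionally restrict filesystem and network access; this sketch only isolates the working directory and bounds execution time.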
### @ File References
Reference local files and directories using the @ syntax, similar to Claude Code:
```
# Reference a single file
> Explain @src/index.tsx

# Reference multiple files
> Compare @package.json and @tsconfig.json

# Reference a directory (shows tree structure)
> What's in @src/

# Relative paths work too
> Look at @./config.ts and @../other-project/file.ts
```

When you use @path, the file contents are automatically:
- Read from your filesystem
- Included in the context sent to the LLM
- Displayed with a checkmark indicator
This allows the agent to see and understand your code without manually reading files.
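The expansion step described above could be sketched roughly as a regex substitution over the outgoing message. The function name, regex, and placeholder format here are illustrative assumptions, not the CLI's actual implementation:

```typescript
// Illustrative sketch: inline "@path" references before sending to the LLM.
import { readFileSync, readdirSync, statSync } from "node:fs";

function expandFileRefs(message: string): string {
  // Match "@" followed by a path-like token (e.g. @src/index.tsx, @./config.ts).
  return message.replace(/@((?:\.{1,2}\/)?[\w./-]+)/g, (match, path: string) => {
    try {
      if (statSync(path).isDirectory()) {
        // Directories are summarized as a listing rather than inlined.
        return `\n[directory ${path}]\n${readdirSync(path).join("\n")}\n`;
      }
      return `\n[file ${path}]\n${readFileSync(path, "utf8")}\n`;
    } catch {
      return match; // Leave unreadable or missing references untouched.
    }
  });
}
```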
### Chat Commands
| Command | Description |
|---------|-------------|
| /help | Show available commands |
| /clear | Clear conversation history |
| /tools | List available tools |
| /budget | Show token/cost usage |
| /exit | Exit the chat |
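Dispatching the slash commands in the table above amounts to a lookup before the input is treated as a chat message. The handler table and return values below are assumptions for illustration, not the CLI's real command layer:

```typescript
// Sketch of slash-command dispatch; handlers are placeholders.
type Handler = () => string;

const commands: Record<string, Handler> = {
  "/help": () => "Available: /help /clear /tools /budget /exit",
  "/clear": () => "History cleared",
  "/tools": () => "bash, filesystem, sandbox, modal, wandb, training",
  "/budget": () => "Tokens and cost used this session",
  "/exit": () => "Bye",
};

// Returns the command output, or null when the input is a normal chat message.
function dispatch(input: string): string | null {
  const handler = commands[input.trim()];
  return handler ? handler() : null;
}
```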
## Environment Variables

```bash
# LLM Provider Keys (required for chat)
export ANTHROPIC_API_KEY="your-anthropic-key"
export OPENAI_API_KEY="your-openai-key"

# Pioneer Platform
export PIONEER_API_URL="https://agent.pioneer.ai"
export PIONEER_API_KEY="your-pioneer-key"

# ML Integrations (optional)
export MODAL_TOKEN_ID="your-modal-token-id"
export MODAL_TOKEN_SECRET="your-modal-token-secret"
export WANDB_API_KEY="your-wandb-key"
```

## Configuration
Configuration is stored in `~/.pioneer/config.json`:

```json
{
  "apiKey": "your-pioneer-api-key",
  "baseUrl": "https://agent.pioneer.ai",
  "agent": {
    "provider": "anthropic",
    "model": "claude-sonnet-4-20250514"
  },
  "budget": {
    "maxTokens": 100000,
    "maxCost": 1.0,
    "maxTime": 3600
  },
  "ml": {
    "wandb": {
      "project": "my-project",
      "entity": "my-team"
    }
  }
}
```

## Pioneer Platform Usage
```bash
# Authentication
pioneer auth login            # Enter API key interactively
pioneer auth logout           # Clear stored API key
pioneer auth status           # Check if logged in

# Datasets
pioneer dataset list
pioneer dataset get <id>
pioneer dataset delete <id>
pioneer dataset download <id>
pioneer dataset analyze <id>

# Training Jobs
pioneer job list
pioneer job get <id>
pioneer job logs <id>
pioneer job create --model-name "My Model" --dataset-ids ds_123,ds_456

# Models
pioneer model list            # List deployed models
pioneer model trained         # List trained models
pioneer model delete <id>
pioneer model download <id>
```

## Architecture
```
src/
├── index.tsx                # CLI entry point and routing
├── config.ts                # Configuration management
├── api.ts                   # Pioneer platform API client
├── chat/
│   └── ChatApp.tsx          # Interactive chat UI
├── agent/
│   ├── Agent.ts             # Main agent orchestrator
│   ├── LLMClient.ts         # OpenAI/Anthropic client
│   ├── ToolRegistry.ts      # Tool management
│   ├── BudgetManager.ts     # Budget tracking
│   └── types.ts             # Type definitions
├── tools/
│   ├── bash.ts              # Shell command execution
│   ├── filesystem.ts        # File operations
│   ├── sandbox.ts           # Code sandbox
│   ├── modal.ts             # Modal.com integration
│   ├── wandb.ts             # W&B integration
│   └── training.ts          # Model training
└── evolution/
    ├── EvolutionEngine.ts   # Self-improvement loop
    ├── FeedbackCollector.ts # Feedback storage
    ├── EvalRunner.ts        # Evaluation framework
    └── ModelTrainer.ts      # Fine-tuning
```

## Self-Evolution System
The agent can autonomously improve itself within budget constraints:
- Feedback Collection: Records successful/failed interactions
- Evaluation: Runs test cases to measure performance
- Training: Fine-tunes models based on feedback
- Budget-Aware: Respects token/time/cost limits
```typescript
import { EvolutionEngine } from './evolution';

const engine = new EvolutionEngine({
  targetScore: 0.9,
  maxIterations: 10,
  budgetPerIteration: {
    maxTokens: 50000,
    maxCost: 0.50,
  },
});

await engine.evolve(agent);
```

## Development
```bash
cd pioneer
bun install

bun run dev         # Hot reload
bun run typecheck   # Type checking
bun run chat        # Start chat agent
```

## Tech Stack
- Runtime: Bun
- UI: Ink (React for CLI)
- Language: TypeScript
- LLMs: Anthropic Claude, OpenAI GPT
- ML Compute: Modal
- Experiment Tracking: Weights & Biases
## License
MIT
