# Effect AI CLI
A comprehensive TypeScript CLI application built with Effect-TS for managing AI-powered pattern processing, run management, and observability.
## Overview
The Effect AI CLI is a production-ready command-line interface that demonstrates advanced Effect-TS patterns including service composition, resource management, observability, and AI integration. It provides tools for managing AI workflows, tracking metrics, and maintaining run history.
Execution Plans:

- By default, LLM calls use an `ExecutionPlan` with sensible retries and provider fallbacks.
- You can override the number of attempts and their timing, and customize the fallback provider/model order, with the `plan` command.
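Conceptually, the retry-then-fallback behavior works like the sketch below. This is an illustrative model in plain TypeScript, not the actual Effect `ExecutionPlan` API; `PlanStep` and `runPlan` are invented names for explanation only.

```typescript
// Illustrative model of retry + fallback semantics (not the real Effect API).
// A plan step names a provider/model plus total attempts and spacing.
interface PlanStep {
  provider: string;
  model: string;
  attempts: number; // total tries for this step (retries + 1)
  delayMs: number;  // wait between attempts within this step
}

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// Try each step in order; within a step, retry up to `attempts` times.
// Only when every step is exhausted does the last error propagate.
async function runPlan<T>(
  steps: PlanStep[],
  call: (provider: string, model: string) => Promise<T>
): Promise<T> {
  let lastError: unknown;
  for (const step of steps) {
    for (let i = 0; i < step.attempts; i++) {
      try {
        return await call(step.provider, step.model);
      } catch (err) {
        lastError = err;
        if (i < step.attempts - 1) await sleep(step.delayMs);
      }
    }
  }
  throw lastError;
}
```

With the default plan, a failing primary provider is tried twice before the first fallback provider/model pair is attempted.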
## Features

### Core Capabilities
- AI Integration: Seamless integration with multiple AI providers (OpenAI, Anthropic, Google)
- Run Management: Complete lifecycle management for AI processing runs
- Metrics Tracking: Comprehensive metrics collection and reporting
- Observability: Full OpenTelemetry integration for tracing and monitoring
- Configuration Management: Flexible configuration with environment variables
- Authentication: Secure API key management
- Extensibility: Plugin system to add custom commands via `CliPlugin`
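The README does not document the `CliPlugin` interface itself, so the following is a hypothetical sketch of what a plugin registry of this kind typically looks like; the field names (`name`, `commands`) and the `PluginRegistry` class are assumptions for illustration, not the project's actual API.

```typescript
// Hypothetical shape of a command-extension plugin. The real CliPlugin
// interface lives in the project source and may differ.
interface CliPlugin {
  name: string;
  // Map of command name -> async handler receiving the CLI arguments.
  commands: Record<string, (args: string[]) => Promise<void>>;
}

// A minimal registry that merges plugin commands into one dispatch table.
class PluginRegistry {
  private commands = new Map<string, (args: string[]) => Promise<void>>();

  register(plugin: CliPlugin): void {
    for (const [name, handler] of Object.entries(plugin.commands)) {
      if (this.commands.has(name)) {
        throw new Error(`command "${name}" already registered`);
      }
      this.commands.set(name, handler);
    }
  }

  async dispatch(name: string, args: string[]): Promise<void> {
    const handler = this.commands.get(name);
    if (!handler) throw new Error(`unknown command: ${name}`);
    await handler(args);
  }
}
```

Rejecting duplicate command names at registration time keeps plugin conflicts loud and early rather than silently shadowing built-in commands.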
## Development Setup

### Prerequisites
- Node.js 20+
- Bun (recommended package manager)
### Installation

```shell
# Clone the repository
git clone <repository-url>
cd effect-ai-cli

# Install dependencies
bun install

# Build the project
bun run build

# Run tests
bun run test

# Start development mode
bun run dev
```

### Available Scripts
- `bun run build` - Build the project
- `bun run build:watch` - Build with watch mode
- `bun run dev` - Start development mode with watch
- `bun run start` - Run the CLI directly
- `bun run test` - Run tests
- `bun run test:watch` - Run tests in watch mode
- `bun run test:coverage` - Run tests with coverage
- `bun run test:ui` - Run tests with UI
- `bun run lint` - Run linter
- `bun run lint:fix` - Fix linting issues
- `bun run format` - Format code
- `bun run type-check` - Type check without building
## Quick Start

```shell
# Generate (streams by default)
effect-ai-cli generate "Write a haiku about Effect"

# Configure execution plan overrides
effect-ai-cli plan create --retries 2 --retry-ms 1200 \
  --fallbacks openai:gpt-4o-mini,anthropic:claude-3-5-haiku
effect-ai-cli plan list

# View metrics for recent runs
effect-ai-cli metrics last
effect-ai-cli metrics report --format console
```

## Commands
### Core Commands
- `effect-ai-cli list` - List available patterns
- `effect-ai-cli generate` (alias `gen`) - Generate with AI
  - Input forms: inline text, file path, or stdin (`--stdin`)
  - Streaming by default for text format; buffer with `--no-stream`
  - `-o, --output <path>` - write full output to file (tee when streaming)
  - `-p, --provider <openai|anthropic|google>` - select provider
  - `-m, --model <name>` - select model
  - `-f, --format <text|json>` - select output format (default: text)
  - `--json` - convenience for `--format=json`
  - `-s, --schema-prompt <file>` - required when `--format=json`
  - `--quiet` - suppress stdout (useful with `--output`)
  - Generation params: `--temperature`, `--max-tokens`, `--top-p`, `--seed`
- `effect-ai-cli health` - Check system health
- `effect-ai-cli config` - Manage configuration
- `effect-ai-cli auth` - Manage authentication
- `effect-ai-cli model` - Manage AI models
- `effect-ai-cli trace` - View traces
- `effect-ai-cli dry-run` - Test without execution
### Execution Plan Management

- `effect-ai-cli plan create` - Set plan overrides
  - `--retries <n>` - number of retries for the primary provider (attempts = retries + 1). Default: 1 retry
  - `--retry-ms <ms>` - delay between attempts for the primary. Default: 1000
  - `--fallbacks <list>` - comma-separated `provider:model` fallbacks, e.g. `openai:gpt-4o-mini,anthropic:claude-3-5-haiku`
- `effect-ai-cli plan list` - Show the current plan (effective defaults if unset)
- `effect-ai-cli plan clear` - Remove overrides
- `effect-ai-cli plan reset` - Reset to defaults

#### Defaults

- Primary: 2 attempts (1 retry) with 1000ms spacing
- Fallbacks: `openai:gpt-4o-mini`, then `anthropic:claude-3-5-haiku`, each 1 attempt with 1500ms spacing
Note: `process-prompt` remains available as a legacy alias for backward compatibility.
### Run Management

- `effect-ai-cli runs list` - List all runs
- `effect-ai-cli runs create` - Create a new run
- `effect-ai-cli runs update` - Update run information
- `effect-ai-cli runs delete` - Delete a run
### Metrics

- `effect-ai-cli metrics report` - Report metrics
  - `--format <console|json|jsonl>` (default: console)
  - `-o, --output <path>` - when `json` or `jsonl`, write to file
- `effect-ai-cli metrics last` - Pretty table for the most recent run
- `effect-ai-cli metrics clear` - Clear metrics history
## Architecture

### Service Architecture
The CLI uses a modern Effect-TS service architecture with:
- `Effect.Service` Pattern: All services use the modern `Effect.Service` pattern
- Layer Composition: Proper service layer composition with dependency injection
- Resource Management: Scoped resource management with automatic cleanup
- Error Handling: Comprehensive error handling with typed errors
- Testing: Full test coverage with real services (no mocks)
### Project Structure

```
src/
├── bin/        # CLI entry points
├── commands/   # CLI command implementations
├── config/     # Configuration constants
├── core/       # Core CLI framework
├── runtime/    # Runtime configurations
├── services/   # Business logic services
└── __tests__/  # Test files
```

## Contributing
See CONTRIBUTING.md for development guidelines.
## Release Process
- CHANGELOG.md - Version history and changes
- docs/SEMANTIC_VERSIONING.md - Versioning strategy and policies
- docs/RELEASE_WORKFLOW.md - Release process and workflow
## License
This project is licensed under the MIT License - see the LICENSE file for details.
