hacknotts-cli

An interactive AI-powered command-line interface for HackNotts 2025 - Chat with multiple AI providers through an elegant terminal interface.
Overview
hacknotts-cli is a sophisticated AI interaction tool that brings multiple AI providers to your terminal. Built with React and Ink, it provides a beautiful, interactive chat experience with support for 10+ AI providers, an extensible plugin system, and MCP (Model Context Protocol) integration.
Features
- Multi-Provider Support - Seamlessly switch between OpenAI, Anthropic, Google, xAI, DeepSeek, Azure, OpenRouter, and more
- Interactive Terminal UI - Beautiful React-based interface powered by Ink with animations and streaming responses
- Context-Aware AI - Automatic HACKNOTTS.md context loading for project-specific conversations
- Extensible Plugin System - Pre/post hook architecture with built-in plugins for tools, web search, and logging
- Built-in Commands - /help, /provider, /export, /model, /init, /about, and more
- Chat History Export - Save conversations to JSON or Markdown formats
- MCP Integration - Built-in tools for fetch, filesystem, memory, time, and sequential thinking
- Provider Dashboard - Visual provider status with easy switching and configuration
- Streaming Responses - Real-time message streaming from AI models with animated display
- Configuration Persistence - Save preferences, default provider, and working directory
- Working Directory Management - Change context with the /cd command for file operations
- Beautiful Animations - Gradient effects, loading spinners, and smooth transitions
- Full TypeScript Support - Type-safe development experience across all packages
Installation
git clone https://github.com/Pleasurecruise/hacknotts-cli.git
cd hacknotts-cli
npm install
npm run build

The CLI will be installed globally as the hacknotts command.
Quick Start
- Configure your API keys - Create a .env file in the project root:
# OpenAI
OPENAI_API_KEY=your-openai-api-key
OPENAI_MODEL=gpt-4
# Anthropic
ANTHROPIC_API_KEY=your-anthropic-api-key
ANTHROPIC_MODEL=claude-3-5-sonnet-20241022
# Google
GOOGLE_API_KEY=your-google-api-key
GOOGLE_MODEL=gemini-1.5-pro
# DeepSeek
DEEPSEEK_API_KEY=your-deepseek-api-key
DEEPSEEK_MODEL=deepseek-chat
# Add other providers as needed

- Run the CLI:

hacknotts

- Start chatting:
Welcome to the HackNotts 2025 CLI!
> hi
Hello! How can I assist you with your HackNotts 2025 project submission today?
> /help
Available commands:
/help (h, ?) - Show this help message
/provider (p) - Switch AI provider
/model (m) - Switch model
/export (save) - Export chat history
/clear (cls, c) - Clear chat
/exit (quit, q) - Exit application

Available Commands
| Command | Aliases | Description |
|---------|---------|-------------|
| /help | h, ? | Show available commands |
| /provider | p, providers | Open provider dashboard to switch providers |
| /model <name> | m | Temporarily switch to a specific model |
| /export [format] | save, download | Export chat history (json or markdown) |
| /clear | cls, c | Clear chat history |
| /cd <path> | chdir | Change working directory |
| /init | initialize | Initialize HACKNOTTS.md context file in current directory |
| /about | info | Show application information and credits |
| /exit | quit, q | Exit the application |
Supported AI Providers
hacknotts-cli supports the following AI providers:
- OpenAI - GPT-4, GPT-3.5, and custom models
- Anthropic - Claude 3.5 Sonnet, Claude 3 Opus/Haiku
- Google - Gemini 1.5 Pro/Flash
- xAI - Grok models
- DeepSeek - DeepSeek Chat
- Azure OpenAI - Azure-hosted OpenAI models
- OpenRouter - Access to multiple models through OpenRouter
- OpenAI Compatible - Custom endpoints compatible with OpenAI API
Each provider can be configured with:
- API keys
- Custom base URLs
- Default models
- Provider-specific options
HACKNOTTS.md Context System
The CLI features an intelligent context loading system through HACKNOTTS.md files:
Automatic Context Loading
When you start a chat session, the CLI automatically searches for and loads HACKNOTTS.md from:
- Current working directory
- Parent directories (up to repository root)
This provides AI assistants with crucial project context, including:
- Project overview and goals
- Technology stack
- Current status and progress
- Team information
- Important notes and decisions
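A minimal sketch of that lookup (illustrative only; the real implementation lives in packages/cli, and for simplicity this version walks all the way to the filesystem root rather than stopping at the repository root):

import { existsSync, readFileSync } from 'node:fs'
import { dirname, join } from 'node:path'

// Walk upward from the working directory and return the contents of
// the first HACKNOTTS.md found, or null if none exists.
function loadHackNottsContext(startDir: string): string | null {
  let dir = startDir
  while (true) {
    const candidate = join(dir, 'HACKNOTTS.md')
    if (existsSync(candidate)) return readFileSync(candidate, 'utf8')
    const parent = dirname(dir)
    if (parent === dir) return null // reached the filesystem root
    dir = parent
  }
}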
Creating a Context File
Use the /init command to generate a template:
> /init
Created HACKNOTTS.md in /your/project/path

The generated template includes sections for:
- Project Overview
- Technology Stack
- Project Goals
- Current Status (Completed/In Progress/TODO)
- Important Notes
- Team Members
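As an illustration, the generated template might look roughly like this (the exact file produced by /init may differ):

# My Project

## Project Overview
A short description of what you are building and why.

## Technology Stack
- TypeScript, React, Ink

## Project Goals
- Ship a working demo by the end of the hackathon

## Current Status
### Completed
### In Progress
### TODO

## Important Notes

## Team Members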
Benefits
- Better AI Understanding - AI gets full context about your project
- Consistent Responses - All team members' AI interactions use the same context
- Progress Tracking - Keep your project status documented
- Seamless Collaboration - Share project knowledge across the team
Configuration
Environment Variables
Configure providers through .env file:
# Provider API Keys
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...
XAI_API_KEY=...
DEEPSEEK_API_KEY=...
AZURE_OPENAI_API_KEY=...
# Custom Models (optional)
OPENAI_MODEL=gpt-4
ANTHROPIC_MODEL=claude-3-5-sonnet-20241022
GOOGLE_MODEL=gemini-1.5-pro
# Custom Endpoints (optional)
OPENAI_BASE_URL=https://api.openai.com/v1

User Configuration
User preferences are stored in ~/.hacknotts-cli/config.json:
{
  "defaultProvider": "openai",
  "defaultModel": "gpt-4",
  "theme": "default",
  "workingDirectory": "/path/to/project"
}

Configuration is automatically managed by the CLI:
- Default Provider - Set via the provider dashboard or the /provider command
- Default Model - Persists your preferred model for each provider
- Working Directory - Saved when using the /cd command for persistent context
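A minimal sketch of how such a file can be read and updated (illustrative; the actual logic lives in configService.ts):

import { existsSync, mkdirSync, readFileSync, writeFileSync } from 'node:fs'
import { homedir } from 'node:os'
import { join } from 'node:path'

interface CliConfig {
  defaultProvider?: string
  defaultModel?: string
  theme?: string
  workingDirectory?: string
}

const configDir = join(homedir(), '.hacknotts-cli')
const configPath = join(configDir, 'config.json')

// Load persisted preferences, falling back to an empty config.
function loadConfig(): CliConfig {
  if (!existsSync(configPath)) return {}
  return JSON.parse(readFileSync(configPath, 'utf8')) as CliConfig
}

// Merge a partial update into the stored config and persist it.
function saveConfig(patch: Partial<CliConfig>): void {
  mkdirSync(configDir, { recursive: true })
  writeFileSync(configPath, JSON.stringify({ ...loadConfig(), ...patch }, null, 2))
}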
Advanced Provider Configuration
For custom or self-hosted providers:
# OpenAI Compatible Provider
OPENAI_COMPATIBLE_API_KEY=your-key
OPENAI_COMPATIBLE_BASE_URL=https://your-endpoint.com/v1
OPENAI_COMPATIBLE_MODEL=your-model
# Azure OpenAI
AZURE_OPENAI_API_KEY=your-azure-key
AZURE_OPENAI_RESOURCE_NAME=your-resource-name
AZURE_OPENAI_DEPLOYMENT_NAME=your-deployment
AZURE_OPENAI_API_VERSION=2024-02-15-preview
# OpenRouter
OPENROUTER_API_KEY=your-openrouter-key
OPENROUTER_MODEL=anthropic/claude-3.5-sonnet

Plugin System
hacknotts-cli features an extensible plugin architecture:
Plugin Hooks
Plugins can implement various hooks:
- First-Hook - resolveModel, loadTemplate (returns the first valid result)
- Sequential-Hook - configureContext, transformParams, transformResult (chain transformations)
- Parallel-Hook - onRequestStart, onRequestEnd, onError (side effects)
- Stream-Hook - transformStream (stream processing)
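As a minimal sketch, a plugin implementing the parallel hooks above with definePlugin (the hook names come from the list; the exact hook signatures are assumptions):

import { definePlugin } from '@cherrystudio/ai-core'

// Parallel hooks run as side effects around each request, so they
// suit logging and metrics rather than transformations.
const requestLogger = definePlugin({
  id: 'request-logger',
  name: 'Request Logger',
  hooks: {
    onRequestStart: async (context) => {
      console.log('[ai] request started')
    },
    onRequestEnd: async (context) => {
      console.log('[ai] request finished')
    },
    onError: async (error, context) => {
      console.error('[ai] request failed:', error)
    }
  }
})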
Built-in Plugins
The CLI includes several pre-built plugins:
Tool Use Plugin
Enables AI models to use tools and execute functions during conversations. Configure with createPromptToolUsePlugin to add custom tool capabilities.
Web Search Plugin
Integrates web search capabilities with provider-specific configurations:
- OpenAI web search
- Anthropic web search
- Google search
- xAI search
- OpenRouter search
Google Tools Plugin
Provides access to Google-specific tools and integrations for enhanced functionality with Gemini models.
Logging Plugin
Adds comprehensive logging capabilities for debugging and monitoring AI interactions. Track requests, responses, and errors with customizable log levels.
Middleware System
The plugin system is built on a flexible middleware architecture:
- Named Middlewares - Organize and manage multiple middleware layers
- Middleware Wrapping - Apply transformations to model inputs/outputs
- Chain Processing - Sequential middleware execution with context passing
- Error Handling - Graceful error catching and recovery
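To make the pattern concrete, here is a generic illustration of middleware chaining in TypeScript (not the package's actual API):

type Handler = (prompt: string) => Promise<string>
type Middleware = (next: Handler) => Handler

// Compose layers around a base handler; the first layer listed
// becomes the outermost and therefore runs first.
function chain(base: Handler, ...layers: Middleware[]): Handler {
  return layers.reduceRight((next, layer) => layer(next), base)
}

// Example layer: graceful error catching and recovery.
const withErrorHandling: Middleware = (next) => async (prompt) => {
  try {
    return await next(prompt)
  } catch (err) {
    return `The request failed: ${String(err)}`
  }
}

// Example layer: log inputs and outputs for debugging.
const withLogging: Middleware = (next) => async (prompt) => {
  console.log('[in]', prompt)
  const result = await next(prompt)
  console.log('[out]', result)
  return result
}

const handler = chain(async (p) => `echo: ${p}`, withErrorHandling, withLogging)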
Creating Custom Plugins
Use definePlugin to create your own plugins:
import { definePlugin } from '@cherrystudio/ai-core'
const myPlugin = definePlugin({
  id: 'my-custom-plugin',
  name: 'My Custom Plugin',
  hooks: {
    configureContext: async (context) => {
      // Modify request context
      return context
    },
    transformResult: async (result, context) => {
      // Transform AI response
      return result
    }
  }
})

MCP Integration
Built-in MCP tools for extended functionality:
- Fetch Tool - Make HTTP requests with custom headers and authentication
- Filesystem Tool - Read, write, list, and manage files in the workspace
- Memory Tool - Persistent key-value storage for cross-session data
- Time Tool - Get current time, timezone info, and format timestamps
- Sequential Thinking Tool - Multi-step reasoning with memory and session management
MCP Server Integration
The toolkit package provides MCP server capabilities:
- Server Management - Start and manage MCP servers
- Tool Discovery - Automatically discover available tools from servers
- Tool Execution - Execute tools with proper parameter validation
- Session Handling - Manage persistent sessions across interactions
Create custom MCP plugins with createMcpPlugin:
import { createMcpPlugin } from 'toolkit'
const mcpPlugin = createMcpPlugin({
  enabled: true,
  servers: [
    {
      name: 'my-server',
      command: 'node',
      args: ['server.js']
    }
  ]
})

Export Functionality
Export your chat history in multiple formats:
# Export to JSON
> /export json
# Export to Markdown
> /export markdown
# Default export (JSON)
> /export

Exported files include:
- Full message history
- Timestamps
- Role indicators
- Message metadata
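As an illustration, a JSON export might look roughly like this (field names beyond the message fields listed above are assumptions):

{
  "exportedAt": "2025-10-26T14:30:00.000Z",
  "messages": [
    {
      "role": "user",
      "content": "What is the capital of France?",
      "timestamp": "2025-10-26T14:29:12.000Z"
    },
    {
      "role": "assistant",
      "content": "The capital of France is Paris.",
      "timestamp": "2025-10-26T14:29:15.000Z",
      "metadata": { "provider": "openai", "model": "gpt-4" }
    }
  ]
}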
User Interface Features
Terminal Animations
The CLI includes several visual enhancements for a premium user experience:
- Animated Logo - Smooth logo animation on startup with gradient effects
- Loading Spinners - Context-aware loading indicators during AI processing
- Gradient Effects - Beautiful animated gradients throughout the interface
- Smooth Transitions - Fluid view transitions between chat, help, and provider dashboard
- Real-time Streaming - Character-by-character display of AI responses
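Under the hood, the CLI builds on the Vercel AI SDK (see References). As a rough illustration of how token-by-token streaming is consumed with that SDK (not necessarily the CLI's exact code):

import { streamText } from 'ai'
import { openai } from '@ai-sdk/openai'

// Request a completion and write each streamed chunk to the
// terminal as it arrives, giving the character-by-character effect.
async function streamDemo() {
  const result = await streamText({
    model: openai('gpt-4'),
    prompt: 'Say hello to HackNotts 2025!'
  })
  for await (const chunk of result.textStream) {
    process.stdout.write(chunk)
  }
}

streamDemo()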
Interactive Components
- Provider Dashboard - Navigate through providers with keyboard controls (↑/↓ arrows)
- Command List - Searchable command list with syntax highlighting
- Status Bar - Persistent status display showing current provider, model, and working directory
- Help System - Comprehensive help view with command documentation and examples
- About View - Application information, version, and credits
Keyboard Controls
- Ctrl+C - Interrupt current AI generation or cancel input
- ↑/↓ - Navigate command history or provider list
- Tab - Command auto-completion (when available)
- Esc - Close modal views (help, provider dashboard, about)
Development
Project Structure
hacknotts-cli/
├── packages/
│ ├── cli/ # Main CLI application
│ │ ├── src/
│ │ │ ├── index.tsx # Entry point
│ │ │ ├── app.tsx # Main app component
│ │ │ ├── components/
│ │ │ │ ├── ChatSession.tsx # Main chat session logic
│ │ │ │ ├── ChatInterface.tsx # Chat UI component
│ │ │ │ ├── ProviderView.tsx # Provider dashboard
│ │ │ │ ├── HelpView.tsx # Help command view
│ │ │ │ ├── AboutView.tsx # About information
│ │ │ │ ├── StatusBar.tsx # Status display
│ │ │ │ ├── LoadingSpinner.tsx # Loading animation
│ │ │ │ ├── LogoAnimation.tsx # Logo with animations
│ │ │ │ └── AnimatedGradient.tsx # Gradient effects
│ │ │ ├── commands/
│ │ │ │ ├── CommandRegistry.ts # Command system
│ │ │ │ └── builtInCommands.ts # Built-in commands
│ │ │ ├── services/
│ │ │ │ ├── aiService.ts # AI provider service
│ │ │ │ └── configService.ts # Configuration management
│ │ │ └── utils/
│ ├── aiCore/ # AI provider and plugin system
│ │ ├── src/
│ │ │ ├── core/
│ │ │ │ ├── providers/ # Provider implementations
│ │ │ │ ├── plugins/ # Plugin system
│ │ │ │ │ └── built-in/ # Built-in plugins
│ │ │ │ ├── middleware/ # Middleware architecture
│ │ │ │ ├── models/ # Model resolution
│ │ │ │ ├── options/ # Provider options
│ │ │ │ └── runtime/ # Execution runtime
│ └── toolkit/ # MCP utilities
│ └── src/
│ ├── manager.ts # MCP manager
│ ├── plugin.ts # MCP plugin integration
│ ├── servers/ # MCP server implementations
│ └── tools/ # Built-in MCP tools
├── docs/
├── .env # Provider configuration
└── package.json

Architecture
Three-Layer Design:
- CLI Layer (packages/cli) - User interface, commands, and interaction logic
- AI Core Layer (packages/aiCore) - Provider abstraction, plugin system, and execution runtime
- Toolkit Layer (packages/toolkit) - MCP integration and tool implementations
Build Commands
# Development with watch mode
npm run dev
# Production build
npm run build
# Run tests
npm test
# Lint and format
npm run lint
npm run format

Testing
# Run tests once
npm test
# Watch mode
npm run test:watch

Usage Examples
Basic Conversation
> What is the capital of France?
The capital of France is Paris.
> /model gpt-3.5-turbo
✓ Switched to model: gpt-3.5-turbo
> Tell me more about it
Paris is the largest city in France and has been the country's capital since...

Working with Files
> /cd ./my-project
✓ Changed working directory to: ./my-project
> Can you read the README.md file?
[AI uses filesystem tool to read and analyze README.md]
> /init
✓ Created HACKNOTTS.md in /path/to/my-project

Switching Providers
> /provider
# Opens provider dashboard
# Use ↑/↓ to navigate, Enter to select
> Tell me a joke
[Response from newly selected provider]

Exporting Conversations
> This is important information I want to save
> /export markdown
✓ Exported 12 messages to chat-history-2025-10-26.md (MARKDOWN)
> /export json
✓ Exported 12 messages to chat-history-2025-10-26.json (JSON)

Best Practices
For Developers
- Use HACKNOTTS.md - Create context files for your projects to give AI better understanding
- Organize by Directory - Use /cd to switch between project contexts
- Export Important Sessions - Save breakthrough conversations with /export
- Test Multiple Providers - Different providers excel at different tasks
- Leverage Plugins - Enable web search and tools for enhanced capabilities
For Hackathons
- Document as You Go - Update HACKNOTTS.md with progress
- Quick Provider Switching - Test ideas across different models
- Code Review - Use AI to review and improve your code
- Architecture Planning - Discuss system design with context-aware AI
- Debug Together - Paste errors and get instant help
Performance Tips
- API Key Management - Keep keys in .env; never commit them
- Model Selection - Use faster models for quick tasks, advanced models for complex problems
- Clear History - Use /clear to free memory in long sessions
- Working Directory - Set the correct directory for file operations
Requirements
- Node.js >= 22.12.0
- npm or yarn
- API keys for at least one AI provider
Troubleshooting
Common Issues
CLI Not Found After Installation
# Reinstall globally
npm run build
# Or manually link
npm link

API Key Errors
# Verify .env file exists in project root
ls -la .env
# Check environment variables are loaded
cat .env

Provider Not Initializing
- Ensure API key is valid and has correct format
- Check base URL if using custom endpoints
- Verify network connectivity
- Review model name spelling
Streaming Issues
- Some providers may have rate limits
- Check that your terminal supports ANSI colors
- Try a different terminal emulator if display issues occur
File Operation Errors
- Verify the working directory exists: /cd /path/to/dir
- Check file permissions
- Ensure paths are absolute or relative to the working directory
Debug Mode
Enable verbose logging by setting an environment variable:
DEBUG=hacknotts:* hacknotts

Getting Help
- Check /help for command documentation
- Review /about for version information
- Visit Issues for known problems
- Join HackNotts Discord for community support
Contributing
Contributions are welcome! This project was created for HackNotts 2025.
Authors
License
MIT License - see LICENSE file for details
References
- Cherry Studio - Inspiration for the AI core architecture
- Vercel AI SDK - Underlying AI SDK
- Ink - React for terminal interfaces
- Model Context Protocol - MCP specification
Acknowledgments
Created for HackNotts 2025 - A hackathon project demonstrating advanced AI integration in CLI applications.
Happy Hacking! 🚀
