ace-interview-prep v0.2.0
# ace

A CLI tool for interview prep focused on frontend work. Scaffolds questions with test cases, tracks progress with scorecards, and provides LLM-powered feedback.
## Install

```sh
npm install -g ace-interview-prep
```

## Quick Start
### 1. Configure API Keys
```sh
ace setup
```

Stores your OpenAI / Anthropic API keys in `~/.ace/config.json` (one-time, works across all workspaces).

If both keys are valid, `ace setup` prompts you to choose a default provider (`openai` or `anthropic`).
```sh
# Non-interactive
ace setup --openai-key sk-... --anthropic-key sk-ant-...

# Set default provider explicitly when both keys are configured
ace setup --openai-key sk-... --anthropic-key sk-ant-... --default-provider anthropic
```

### 2. Initialize a Workspace
Navigate to any folder where you want to practice, then run:
```sh
ace init
```

`ace init` bootstraps the workspace and installs dependencies for you. It:

- Creates/updates workspace files:
  - `questions/`
  - `package.json` (adds test scripts + devDependencies if missing)
  - `tsconfig.json`
  - `vitest.config.ts`
  - `vitest.setup.ts`
- Installs dependencies automatically via `npm install`

If you need to regenerate workspace files:
```sh
ace init --force
```

### 3. Practice
```sh
# Generate a question interactively (prompts for category, difficulty, topic)
ace generate

# Or pass flags to skip prompts
ace generate --topic "debounce" --category js-ts --difficulty medium

# Interactive brainstorm mode
ace generate --brainstorm

# List all questions
ace list
```

### 4. Test, Review, Track
All commands below work in three modes:

- **Interactive** — run with no arguments to pick from a selectable list
- **Direct** — pass a slug to target a specific question
- **All** — pass `--all` to operate on every question
```sh
# Run tests
ace test              # pick from list
ace test debounce     # specific question
ace test --all        # run all tests
ace test --watch      # watch mode (with --all)

# Get LLM feedback on your solution
ace feedback          # pick from list
ace feedback debounce # specific question
ace feedback --all    # review all questions (confirms each one)

# View scorecard
ace score             # pick from list
ace score debounce    # specific question
ace score --all       # show all scorecards

# Reset a question to its stub
ace reset             # pick from list
ace reset debounce    # specific question
ace reset --all       # reset everything (with confirmation)
```

### 5. Dispute Potentially Incorrect Tests
Use this when your implementation appears correct but a generated test assertion might be wrong.
```sh
# Dispute interactively (pick a question)
ace dispute

# Dispute a specific question
ace dispute debounce

# Optional: force a provider for dispute analysis
ace dispute debounce --provider anthropic
```

If the verdict says the test is incorrect (or ambiguous), ace can apply a corrected test file and re-run tests.
## Question Categories
| Category | Slug | Type | Focus |
|----------|------|------|-------|
| JS/TS Puzzles | js-ts | Coding | Closures, async patterns, type utilities |
| React Components | web-components | Coding | Props, events, composition, reusable UI |
| React Web Apps | react-apps | Coding | Hooks, state, routing, full features |
| LeetCode Data Structures | leetcode-ds | Coding | Trees, graphs, heaps, hash maps |
| LeetCode Algorithms | leetcode-algo | Coding | DP, greedy, two pointers, sorting |
| System Design — Frontend | design-fe | Design | Component architecture, state, rendering |
| System Design — Backend | design-be | Design | APIs, databases, caching, queues |
| System Design — Full Stack | design-full | Design | End-to-end systems, trade-offs |
## How It Works

- **Generate a question** — run `ace generate` and follow the prompts (category, difficulty, topic), or use `ace generate --brainstorm` for an interactive design session.
- **Open the question folder** — read `README.md` for the problem statement.
- **Write your solution** in the solution file (`solution.ts`, `App.tsx`, `Component.tsx`, or `notes.md`).
- **Run tests** with `ace test` to pick a question and check your work.
- **Get feedback** with `ace feedback` for an LLM-powered code or design review.
- **Track progress** with `ace score` and `ace list`.
## Configuration

**Global (`~/.ace/`)** — API keys stored once, shared across all workspaces. Lookup order:

- `~/.ace/config.json` — primary config (created by `ace setup`)
- `~/.ace/.env` — fallback (dotenv format)
- Environment variables — final fallback

Typical `~/.ace/config.json` keys:

- `OPENAI_API_KEY`
- `ANTHROPIC_API_KEY`
- `default_provider` (set automatically or via `ace setup --default-provider ...`)
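The exact on-disk shape of the config file isn't documented here; based on the keys listed above, a plausible `~/.ace/config.json` (values are placeholders) would be:

```json
{
  "OPENAI_API_KEY": "sk-...",
  "ANTHROPIC_API_KEY": "sk-ant-...",
  "default_provider": "anthropic"
}
```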
**Workspace** — each workspace gets its own `questions/` directory and test config.
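The generated `vitest.config.ts` isn't reproduced in this README; a rough sketch consistent with the files `ace init` creates (the `include` glob is an assumption) could look like:

```ts
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    // Pick up tests scaffolded under each question folder (glob is assumed)
    include: ['questions/**/*.test.{ts,tsx}'],
    // Wire in the generated setup file
    setupFiles: ['./vitest.setup.ts'],
  },
})
```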
## Development

```sh
git clone https://github.com/neel/ace-interview-prep.git
cd ace-interview-prep
npm install
npm run ace help
```

See CONTRIBUTING.md for the full development guide.
