CodeSymphony (code-symphony v0.23.1)
Turn any codebase into music. CodeSymphony walks your project, analyzes its structure (functions, branching, depth, entropy), and composes a multi-voice score in the genre that fits. Everything runs on your machine — no code ever leaves your computer.
Install & Run
```
npx code-symphony
```

A local HTTP server starts at http://localhost:4173 and your browser opens. Pick a project folder and press Analyze. The first run takes a few seconds; reopening the same project is instant (disk-cached).
Landing page / version info: https://code-symphony-landing.web.app
Features
- Persistent recent projects — pick from your recent list on startup (VS Code-style); `--no-recent` to skip
- Per-project disk cache keyed by file `mtime` + `.gitignore` + package version → same project = instant re-open
- Deterministic output — same code, same music, every time (seeded motif + modulation)
- Music-theory-aware mapping (v0.4+):
  - Strict weak-beat consonance — melody stays on chord tones except in genuine passing/neighbor contexts
  - Appoggiatura resolution — dissonance always resolves to the nearest chord tone
  - V→I cadence with leading-tone gesture
  - Bridge/chorus modulation (relative minor/major, whole-step lift) for epic genres
  - Parallel 5th / 8ve avoidance in harmony
  - Walking bass with chromatic approach to the next bar's root (jazz)
- 9 genres — classical, jazz, rock, ambient, electronic, lofi, folk, blues, cinematic (auto-picks based on project metrics, or force with `--genre`)
- Playback — Web Audio, per-voice toggles, follow-playback, MIDI / JSON export
- Change project button without restart; Re-analyze on file changes
- Update banner checks for new versions in the background
- AI melody refiner (v0.14+) — optional local Anticipatory Music Transformer refines the melody while preserving harmony / bass / drums (see below)
CLI Options
| Flag | Default | What it does |
|------|---------|--------------|
| --port <n> | 4173 | HTTP port (falls back to a free port if busy) |
| --root <path> | last used | project root to analyze |
| --genre <name> | auto | force a genre |
| --max-files <n> | 500 | max files to scan |
| --max-bytes <n> | 10 MB | max total bytes |
| --no-open | | do not auto-open the browser |
| --no-recent | | ignore the recent-project list on startup |
| --prune | | purge ~/.code-symphony/cache and exit |
| --json | | one-shot analysis to stdout (CI-friendly, requires --root) |
| --ai-setup | | install the Python AI backend and exit (one-time) |
| --ai-model <n> | small | AI model size: small (360M) or medium (780M) |
| --ai-refine | | with --json, also run the AI refiner |
| --no-ai | | disable the local Python AI backend |
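For CI, the `--json` one-shot output can be post-processed with a short Node script. The field names below (`genre`, `metrics.files`) are assumptions about the report shape, since the schema is not documented here; treat this as a sketch.

```javascript
// Hypothetical CI gate over the one-shot JSON report from:
//   npx code-symphony --json --root .
// Field names (genre, metrics.files) are assumed, not a documented schema.
function checkReport(report, maxFiles = 500) {
  if (report.metrics.files >= maxFiles) {
    // the scan hit the --max-files cap, so the analysis may be truncated
    throw new Error("hit --max-files cap; re-run with a higher --max-files");
  }
  return report.genre;
}

// In CI you might parse the CLI's stdout, e.g.:
//   const report = JSON.parse(require("child_process")
//     .execFileSync("npx", ["code-symphony", "--json", "--root", "."]));
const sample = { genre: "jazz", metrics: { files: 120 } };
console.log(checkReport(sample)); // prints "jazz" for this sample
```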
AI Refiner (optional)
The ✨ AI Enhance button replaces the deterministic melody with one generated by a local Anticipatory Music Transformer (Stanford CRFM), conditioned on the harmony + bass so it tracks the chord progression. Everything runs on your machine.
Minimum spec
- Apple Silicon (M1+) with 16 GB unified memory, or
- NVIDIA GPU with 4 GB+ VRAM, or
- any CPU (slower — ~60s per score vs ~10s on GPU)
- Python 3.11 / 3.12 / 3.13 / 3.14 on PATH
First-run setup
```
code-symphony --ai-setup
```

Creates `src/ai/.venv/` and installs `torch`, `transformers`, `anticipation`, `mido` (~1.5 GB). If you skip this, the first normal run auto-installs and tells you while it's happening. `--no-ai` skips AI entirely.
Models
| Model | Params | Weights | M4 16GB inference (60s of music) |
|---|---|---|---|
| small (default) | ~360M | ~1.4 GB | ~7–10 s |
| medium | ~780M | ~3 GB | ~20–30 s |
Select the model when starting the server with `--ai-model medium`.
Architecture
- Python inference worker — a long-running subprocess that loads the model once and serves newline-delimited JSON over stdin/stdout
- Node proxy — `/api/ai-refine` forwards browser requests; `/api/ai-status` reports readiness
- Conditioning — harmony + bass are passed as `controls` so the AI melody follows the chord progression past the seed
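The worker's newline-delimited JSON framing might look like the sketch below; the message fields (`id`, `controls`, `melody`) are illustrative assumptions, not the real protocol.

```javascript
// Sketch of newline-delimited JSON framing between a Node proxy and a
// stdin/stdout worker. Field names are illustrative assumptions.
function encodeRequest(id, payload) {
  // one request per line; the worker reads stdin line by line
  return JSON.stringify({ id, ...payload }) + "\n";
}

function decodeResponses(chunk) {
  // assumes whole lines per chunk; a real reader buffers partial lines
  return chunk
    .split("\n")
    .filter((line) => line.trim() !== "")
    .map((line) => JSON.parse(line));
}

const req = encodeRequest(1, { controls: ["Cmaj7", "A7"], bars: 8 });
const [res] = decodeResponses('{"id":1,"melody":[60,62,64]}\n');
```

The `id` field lets the proxy match an async response to the browser request that triggered it, since the worker handles one stream for all clients.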
Privacy
All analysis, rendering, and playback happen locally. No code, file paths, or results are transmitted. The landing page (above) uses standard Firebase Hosting, which logs IP/User-Agent of visitors — that's the only web footprint.
How it works
- Scanner — walks the project, respects `.gitignore` plus a sane blacklist (`node_modules`, `dist`, `.git`, etc.)
- Analyzer — AST metrics (via `@babel/parser` for JS/TS) or a regex token fallback for other languages
- Mapper — deterministic code-to-music: language → root, depth/branching → mode & motif contour, functions/classes → form shape
- Web UI — VexFlow score + Web Audio synthesis + MIDI/JSON export
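As an illustration of the Mapper step, a deterministic mapping from analysis metrics to musical parameters could look like this; the root-note table and the thresholds are invented for the example and are not CodeSymphony's actual tables.

```javascript
// Illustrative deterministic code-to-music mapping. The root table and the
// thresholds are invented for this sketch; CodeSymphony's tables may differ.
const ROOTS = { javascript: "C", typescript: "G", python: "D" };

function mapToScore(analysis) {
  const root = ROOTS[analysis.language] || "A"; // unknown languages fall back to a default
  const mode = analysis.avgDepth > 3 ? "minor" : "major"; // deeper nesting: darker mode
  const contour = analysis.branching > 2 ? "leaping" : "stepwise";
  return { root, mode, contour };
}

const score = mapToScore({ language: "javascript", avgDepth: 4.1, branching: 1.2 });
// same input always yields { root: "C", mode: "minor", contour: "stepwise" }
```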
Storage
```
~/.code-symphony/
├── config.json           # recent projects list
└── cache/<hash>/
    ├── meta.json         # fingerprint + createdAt
    ├── analysis.json     # genre-independent analysis
    └── score-*.json      # one per genre
```

Prune manually with `code-symphony --prune`, or the disk cache self-prunes past 500 MB at startup.
License
MIT
