⚡ GlyphCompress
Semantic compression for IDE↔LLM communication. Save 80%+ tokens with zero information loss.
GlyphCompress uses a compositional radical-based encoding system (inspired by Chinese logograms) to compress the verbose context exchanged between IDEs and Large Language Models. A shared codebook injected into the LLM's system prompt enables it to decode compact glyph sequences back into full semantic concepts.
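The mechanism can be sketched in a few lines of JavaScript. The mini-codebook below is hypothetical (the real radical codebook is far richer), but it shows the core idea: both sides share a phrase→glyph dictionary, so substitution is fully reversible.

```javascript
// Illustrative sketch only — not the actual GlyphCompress codebook or API.
const codebook = {
  'Fix the error in': '⺌✗',
  'does not exist on type': '∉',
};

// Replace every known phrase with its glyph.
function compress(text) {
  let out = text;
  for (const [phrase, glyph] of Object.entries(codebook)) {
    out = out.split(phrase).join(glyph);
  }
  return out;
}

// Reverse the substitution using the same shared codebook.
function decompress(text) {
  let out = text;
  for (const [phrase, glyph] of Object.entries(codebook)) {
    out = out.split(glyph).join(phrase);
  }
  return out;
}

const original = "Fix the error in UserProfile.tsx: 'name' does not exist on type 'User'";
const packed = compress(original);
console.log(packed.length < original.length); // true
console.log(decompress(packed) === original); // true — zero information loss
```

An LLM that has seen the codebook in its system prompt plays the role of `decompress` here: it reads the glyphs and recovers the full semantic concepts.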
🎬 See it in Action
Watch the latest YouTube video to see how GlyphCompress achieves 90% token savings:
- ⚙️ Data Flow Architecture: A graphical animation showing how the engine minifies and translates verbose code into dense semantic glyphs.
🎯 The Problem
Every IDE→LLM request carries massive, redundant context:
System prompt: ~2,000 tokens (repeated every time)
Open files: ~3,000 tokens
Errors/diagnostics: ~500 tokens
Chat history: ~2,000 tokens
User prompt: ~500 tokens
─────────────────────────────────────
TOTAL: ~8,000 tokens/request

At 50 requests/day → 400K tokens/day → $6-12/day on Claude/GPT-4.
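The daily totals above follow from simple arithmetic. In this sketch, the per-million-token input prices ($15 and $30/Mtok) are illustrative assumptions chosen to bracket the quoted $6-12 range, not official provider pricing:

```javascript
// Back-of-envelope check of the request math above.
const tokensPerRequest = 8000;
const requestsPerDay = 50;
const tokensPerDay = tokensPerRequest * requestsPerDay;
console.log(tokensPerDay); // 400000

// Hypothetical input prices in dollars per million tokens.
for (const pricePerMTok of [15, 30]) {
  const dollarsPerDay = (tokensPerDay * pricePerMTok) / 1e6;
  console.log(`$${dollarsPerDay}/day`); // $6/day, then $12/day
}
```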
✨ The Solution
GlyphCompress intercepts outgoing LLM requests, compresses context using a shared codebook, and saves 80-90% of tokens:
BEFORE (1,734 chars):
{ prompt: "Fix the error in UserProfile.tsx",
files: [{ path: "src/components/UserProfile.tsx", content: "...44 lines..." }],
diagnostics: [{ code: "TS2339", message: "Property 'department' does not exist on type 'User'" }] }
AFTER (137 chars):
[F: ◈₍1₎=src/components/UserProfile.tsx]
⺌✗ ◈₍1₎
◈₍1₎ᵗ [imp:5 exp:1 ◇:4 ⟿:2 ⟳:5 44L]
◈₍1₎:42 ✗∉prop 'department'∉User
→ 12.7x compression, 92% saved

New in v1.9.3 (Proxy Diagnostics Hotfix)
- Upstream Status Logging: The local proxy now logs upstream HTTP status codes, so Continue/Gemini failures surface `200`, `400`, `404`, or provider-specific errors immediately.
- Redacted Error Bodies: Failed upstream responses are logged with API keys and bearer tokens redacted, making proxy debugging safer.
- Response Completion Tracking: Successful proxy responses now log completed byte counts and warn if the client closes the stream early.
v1.9.0 (Proxy and Packaging Hardening)
- Provider-Aware Proxy: CLI and VS Code proxy flows now preserve provider, trust policy, privacy mode, and target API settings instead of falling back to a generic `auto` profile.
- Gemini-Compatible Routing: The proxy maps OpenAI-compatible `/v1/*` requests to Gemini's `/v1beta/openai/*` endpoint when forwarding to `generativelanguage.googleapis.com`.
- Clean ESM Runtime Export: The package middleware ESM export now resolves through `src/glyph-middleware.js`, avoiding Node package-scope warnings from the VS Code extension folder.
- Focused npm Package: The npm allowlist now publishes runtime files and essential docs only, excluding outreach drafts, demo scripts, and broad internal documentation folders.
- VS Code Lifecycle Hardening: Proxy startup uses the CommonJS extension path, status-bar toggling handles hidden status bars, and the status interval is disposed with the extension context.
v1.8.0 (Safe Compression Trust Policies)
- Explicit Trust Policies: Added `lossless`, `reversible`, `privacy`, and `lossy` trust policies so consumers can choose which transformations are allowed.
- Transformation Gating: `lossless` preserves user text, `reversible` blocks code minification/summaries, `privacy` enables redaction, and `lossy` permits aggressive/ultra summaries.
- Trust Metadata: Source maps now include `sourceMap.trustPolicy` and `sourceMap.trust` so downstream tools can audit compression guarantees.
- CLI Trust Flag: Added `--trust <policy>` / `--policy <policy>` and explanation output for the selected trust policy.
- VS Code Trust Setting: Added `glyphCompress.trustPolicy` to the extension settings and wired it into compressor activation.
🔥 v1.7.0 (Provider-Aware Compression Profiles)
- Provider Compression Profiles: Added provider-specific compression profiles for `raw`, `openai`, `anthropic`, `gemini`, and `local` model workflows.
- Estimator-Guided Dynamic Dictionaries: Dynamic dictionary thresholds now adapt per provider, so OpenAI/local profiles can be more compact while Anthropic stays more cache-stable.
- Source Map Profile Metadata: Source maps now include `provider` and `profile` metadata, and dynamic entries record which provider strategy selected them.
- CLI Provider Flag: Added `--provider <provider>` so command-line compression can estimate and profile output for OpenAI, Anthropic, Gemini-compatible, local, or raw text targets.
- Typed Public Profiles: TypeScript declarations now expose `ProviderCompressionProfile` and `PROVIDER_COMPRESSION_PROFILES` for downstream tooling.
🔥 v1.6.0 (AST-Like Code Block Source Spans)
- Code Block Token Maps: Minified and summarized code blocks now include `tokens` metadata for structural source tokens.
- Top-Level AST Map: Added `sourceMap.ast` so downstream tools can inspect structural code spans without walking every code block.
- Language-Aware Tokens: Tracks imports, exports, functions, classes, declarations, return/yield, package/use/using, visibility, and type markers across JS/TS, Python, Rust, Go, Java/C#, and C/C++ families.
- Typed AST Spans: TypeScript declarations now include `GlyphAstTokenSpan`, and `getReversibleDictionaries()` exposes `ast` metadata.
- Release Metadata: Updated source maps, workspace codebooks, tests, README, roadmap, issue templates, npm metadata, and VS Code extension metadata for v1.6.0.
🔥 v1.5.0 (Privacy Firewall Mode)
- Opt-In Privacy Firewall: Added `privacyFirewall: true` / `privacy: true` to redact secrets and sensitive identifiers before prompt compression.
- Safe Redaction Placeholders: API keys, tokens, secret assignments, emails, IP addresses, AWS keys, GitHub tokens, JWTs, and bearer tokens are replaced with stable placeholders such as `⟦SECRET_ASSIGNMENT_1⟧`.
- Non-Revealing Source Maps: Added `sourceMap.privacy` entries with redaction kind, label, placeholder, line/column span, and short SHA-256 hash metadata, without storing the raw secret.
- CLI Privacy Flag: Added `--privacy` so command-line compression can redact sensitive values before output, clipboard copy, or source-map printing.
- Release Metadata: Updated source maps, workspace codebooks, tests, README, roadmap, issue templates, npm metadata, and VS Code extension metadata for v1.5.0.
🔥 v1.4.0 (Extension & Proxy Smoke Suites)
- VS Code Activation Smoke Test: Added a mocked VS Code host suite that verifies extension activation reaches ready state and registers every contributed command.
- Proxy Forwarding Smoke Test: Added a local proxy suite that confirms chat payload compression, glyph protocol injection, upstream path preservation, and corrected `content-length` forwarding.
- Extension CJS Loading Hardening: The VS Code extension now loads the CommonJS middleware artifact directly, preventing activation-path module format drift.
- Focused Test Scripts: Added `test:extension` and `test:proxy`, and wired both into `npm test` and release validation.
- Release Metadata: Updated source maps, workspace codebooks, tests, README, roadmap, issue templates, npm metadata, and VS Code extension metadata for v1.4.0.
🔥 v1.3.0 (Semantic Source Map Spans)
- Line/Column Source Spans: Source map entries now include `span.start` and `span.end` with line, column, and offset metadata for tracked replacements.
- Symbol-Level Mappings: Added a `sourceMap.symbols` array that maps generated glyphs back to their original prompt, tech name, file path, diagnostic, dynamic dictionary, or code block source.
- Reversible Span Access: `getReversibleDictionaries()` now exposes `symbols` alongside files, dynamic entries, diagnostics, and code blocks.
- Typed Source Maps: TypeScript declarations now include `GlyphSourcePosition`, `GlyphSourceSpan`, and `GlyphSymbolSpan`.
- Release Metadata: Updated source maps, workspace codebooks, tests, README, roadmap, and VS Code extension metadata for v1.3.0.
🔥 v1.2.0 (Provider Accuracy & Test Suites)
- Provider-Aware Token Estimates: Added OpenAI, Anthropic, Gemini-compatible, local-model, and raw text estimator profiles for more realistic savings metrics.
- Public Estimator API: Added `estimateProviderTokens()`, `compareTokenEstimates()`, `normalizeProvider()`, and `PROVIDER_TOKEN_PROFILES` to the stable package exports.
- Split Test Suites: Added focused `test:unit`, `test:cli`, `test:workspace`, `test:metadata`, and `test:integration` scripts, with `npm test` running the full suite runner.
- Benchmark Alignment: The benchmark now uses provider-specific estimates for chat payloads instead of a single generic character heuristic.
- Release Metadata: Updated source maps, workspace codebooks, tests, README, roadmap, and VS Code extension metadata for v1.2.0.
🔥 v1.1.1 (License Hardening)
- AGPL-3.0-only Metadata: Root package, VS Code extension, and package lock metadata now use the more precise `AGPL-3.0-only` SPDX identifier.
- Commercial License Gate: Added explicit commercial-use language clarifying that proprietary, hosted, SaaS, embedded, OEM, marketplace, or private redistribution rights require a separate written agreement.
- NOTICE and Licensing Policy: Added `NOTICE` and `docs/licensing.md` so npm, GitHub, and enterprise reviewers see the licensing posture directly.
- Contributor Safeguards: Added contribution licensing terms and a PR checklist item to preserve the dual-license model for future contributions.
🔥 v1.1.0 (Contributor & Release Hygiene)
- Contributor Guide: Added `CONTRIBUTING.md` with setup, testing, documentation, and API stability expectations.
- Release Documentation: Added `docs/release.md` and `docs/architecture.md` for maintainers and technical reviewers.
- GitHub Templates: Added issue templates for bugs, features, provider compatibility, benchmark submissions, and a PR checklist.
- Link Checking: Added `npm run check:links` and wired it into CI.
- Release Metadata: Updated tests and package metadata to verify contributor hygiene assets.
🔥 v1.0.0 (Stable Platform)
- Stable Public API: The `GlyphCompressor`, provider wrappers, source maps, workspace intelligence exports, CLI commands, and VS Code settings are documented as the stable `1.x` platform surface.
- TypeScript Declarations: Added package-level declarations for the middleware, source maps, workspace codebooks, intent detection, and repository doctor APIs.
- CI and Packaging Validation: Added GitHub Actions coverage for Node LTS tests, benchmarks, npm pack dry-runs, and VS Code extension packaging.
- Formal Governance Docs: Added security, privacy, and enterprise deployment documentation for production adoption.
- Lean npm Package: Added an explicit package allowlist so npm releases include runtime, docs, typings, and extension files without scratch artifacts.
🔥 v0.9.0 (Workspace Intelligence)
- Persistent Workspace Codebook: Added `glyph-compress inspect` to scan supported project files and write `.glyphcompress/codebook.json` with symbols, imports, diagnostics, owners, and git status.
- Intent Detection: Detects common workflows such as fix error, review diff, implement feature, explain architecture, write tests, and optimize performance.
- Relevant File Selection: Ranks workspace files for a query so compressed context can focus on the files most likely to matter.
- Repository Health Commands: Added `glyph-compress doctor` and `glyph-compress benchmark` for repo readiness and trust metrics from the CLI.
🔥 v0.8.0 (Reversible Compression & Source Maps)
- Source Map API: `compressText()` and `compressMessages()` now return a `sourceMap` with file refs, dynamic dictionary entries, diagnostics, code blocks, and replacements.
- Reversible Dictionaries: Added `getReversibleDictionaries()` for file paths, repeated identifiers, diagnostics, and summarized code blocks.
- CLI Source Maps: Added `glyph-compress --source-map` to print source map JSON alongside compressed output.
- Round-Trip Coverage: Added integration tests for source maps, dynamic dictionaries, CommonJS alignment, and CLI source-map output.
🔥 v0.7.0 (Trust & Measurement)
- Benchmark Harness: Added `npm run benchmark` to compare original and compressed payloads across raw text, OpenAI, Anthropic, Gemini-compatible, and ultra-mode fixtures.
- Trust Metrics: The benchmark reports payload ratio, token savings, context fidelity score, edit success proxy, and hallucinated file references.
- CLI Explain Mode: Added `glyph-compress --explain` to show level behavior, indexed file refs, dynamic dictionary entries, and detected compression changes.
- Fixture Coverage: Added CLI trust-feature coverage to the integration suite.
🔥 v0.6.1 (Packaging & VS Code Hardening)
- Root API Alignment: The documented `GlyphCompressor`, `wrapOpenAI`, and `wrapAnthropic` imports are now exported from the package root.
- CommonJS Entry Point: Added the missing CommonJS package entry so `require('glyph-compress')` works for CJS consumers.
- VS Code Proxy Configuration: The extension proxy now respects `glyphCompress.targetApiUrl` instead of using a hardcoded provider URL.
- Opt-In Workspace Rules: Automatic writes to `.cursorrules` and `.github/copilot-instructions.md` are gated behind `glyphCompress.autoUpdateWorkspaceRules`.
For future release planning and repository improvement priorities, see the GlyphCompress Roadmap. For contribution, licensing, and operational guidance, see CONTRIBUTING.md, docs/licensing.md, docs/release.md, docs/architecture.md, SECURITY.md, PRIVACY.md, and ENTERPRISE.md.
📏 v0.7.0 Benchmark Snapshot
npm run benchmark currently reports an aggregate payload compression ratio of 2.1x, 53% token savings, 100% context fidelity score, 100% edit success proxy, and 0 hallucinated file references across representative fixtures.
🔥 v0.6.0 (Project "Rosetta")
- Adaptive Payload Dictionary (APD): Analyzes term frequency in real-time and maps the highest token-consuming strings (classes, functions, variables) to a dynamic Unicode "Rosetta Stone" on the fly.
- Semantic Context Elision (Blackout Algorithm): Intelligently analyzes user intent (e.g., "fix", "deploy"). The new `_elideIrrelevantContext` function strips the bodies of unrelated functions across massive payloads (`[✂]`), keeping structural signatures while slashing token noise.
- Prompt Caching for Anthropic: Automatic injection of `cache_control: { type: 'ephemeral' }` into the heaviest blocks of context (dictionary and files) to minimize repeated token costs and latency for Claude users.
- Indentation Minification: Converts spaces to tabs or strips them automatically to scale down structural byte and token counts before final compression.
⚡ Previous Highlights (v0.5.x & Below)
- Workspace Compression (VS Code & Antigravity): A brand-new command, `GlyphCompress: Compress Entire Workspace`, scans your entire project, removes boilerplate, and generates a single semantic map (Level: Ultra) in an unsaved tab! Perfect for feeding massive architectures to Claude or Antigravity.
- Zero-Command Transparent Proxy: Intercept LLM API calls from your IDE (Continue, Cursor, Cline) automatically. No more shortcuts or copy-pasting: everything happens transparently in the background on `localhost:8080`.
- Universal Syntax Minification: The `aggressive` compression level now actively removes comments and blank lines for C-family (JS, TS, C#, Java, C++, Go, Rust), Python, Ruby, HTML, and CSS, slashing token counts drastically.
- Google Gemini Native Support: The proxy seamlessly reroutes OpenAI-formatted requests to Gemini's official `v1beta/openai`-compatible endpoints.
- Persistent Telemetry: The VS Code extension tracks your Lifetime Savings across all sessions, showing exactly how many millions of tokens (and dollars) you've saved overall.
📊 Benchmarks
| Scenario | Original | Compressed | Ratio | Savings |
|---|---|---|---|---|
| Fix TypeScript error in React | 1,734 chars | 137 chars | 12.7x | 92% |
| Optimize API endpoint | 1,999 chars | 195 chars | 10.3x | 90% |
| Deploy to Kubernetes | 730 chars | 84 chars | 8.7x | 88% |
| Debug Python ML pipeline | 1,925 chars | 249 chars | 7.7x | 87% |
| Create React form | 116 chars | 33 chars | 3.5x | 72% |
| Average | | | 9.3x | 89% |
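The ratio and savings columns follow directly from the character counts. A quick sketch reproducing the first row:

```javascript
// Recompute ratio and savings for "Fix TypeScript error in React"
// from its raw character counts.
const originalChars = 1734;
const compressedChars = 137;

const ratio = originalChars / compressedChars;
const savedPct = Math.round((1 - compressedChars / originalChars) * 100);

console.log(`${ratio.toFixed(1)}x / ${savedPct}%`); // 12.7x / 92%
```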
🚀 Usage: Command Line (CLI)
You can run GlyphCompress directly from your terminal to quickly compress files for ChatGPT or Claude.
# Compress a Python/Rust/JS file and copy it to your clipboard
npx glyph-compress src/app.ts --level ultra --copy
# Check the built-in help
npx glyph-compress --help
# Explain what changed during compression
npx glyph-compress src/app.ts --level ultra --explain
# Print reversible source map metadata
npx glyph-compress src/app.ts --level ultra --source-map
# Redact secrets before printing or copying compressed output
npx glyph-compress .env --privacy --source-map
# Build a persistent workspace codebook and rank relevant files
npx glyph-compress inspect "fix AuthenticationManager error"
# Check repository readiness for GlyphCompress workflows
npx glyph-compress doctor
# Run benchmark metrics through the CLI
npx glyph-compress benchmark

Command Line (CLI): Available Commands

npx glyph-compress [file|command] [options]

| Command | Purpose | Example |
|---|---|---|
| [file] | Compress a single file and print the compressed payload plus the shared codebook. | npx glyph-compress src/app.ts |
| inspect [query] | Build .glyphcompress/codebook.json, detect intent, and rank relevant workspace files. | npx glyph-compress inspect "fix auth error" |
| doctor | Check repository readiness for GlyphCompress workflows. | npx glyph-compress doctor |
| benchmark | Run the benchmark harness from the current repository. | npx glyph-compress benchmark |
Command Line (CLI): Options
| Option | Values | Purpose | Example |
|---|---|---|---|
| -l, --level <level> | light, standard, aggressive, ultra | Select compression aggressiveness. Default: standard. | npx glyph-compress src/app.ts --level ultra |
| -c, --copy | flag | Copy compressed output to the system clipboard. | npx glyph-compress src/app.ts --copy |
| -x, --explain | flag | Print what was compressed, indexed, preserved, or transformed. | npx glyph-compress src/app.ts --explain |
| --source-map | flag | Print reversible source map JSON, including file refs, dynamic entries, diagnostics, symbols, AST/code block metadata, privacy metadata, provider metadata, and trust metadata. | npx glyph-compress src/app.ts --source-map |
| --privacy | flag | Redact common secrets and sensitive identifiers before compression/output. | npx glyph-compress .env --privacy --source-map |
| --provider <provider> | raw, openai, anthropic, gemini, local | Select provider-aware estimates and compression profile. Default: raw. | npx glyph-compress src/app.ts --provider openai --explain |
| --trust <policy> | lossless, reversible, privacy, lossy | Select allowed transformation policy. Default: auto. | npx glyph-compress src/app.ts --trust reversible --source-map |
| --policy <policy> | lossless, reversible, privacy, lossy | Alias for --trust. | npx glyph-compress src/app.ts --policy privacy |
| --json | flag | Print machine-readable JSON for supported commands such as inspect and doctor. | npx glyph-compress inspect "review diff" --json |
| -p, --proxy [port] | optional port | Start the Zero-Command Transparent Proxy. Default port: 8080. | npx glyph-compress --proxy 8080 |
| -h, --help | flag | Show built-in CLI help. | npx glyph-compress --help |
Command Line (CLI): Practical Examples
# Standard file compression
npx glyph-compress README.md
# Maximum compression for a TypeScript source file
npx glyph-compress src/app.ts --level ultra
# Provider-aware compression for OpenAI chat payloads
npx glyph-compress src/app.ts --provider openai --level standard --explain
# Anthropic/cache-stable profile with reversible source map metadata
npx glyph-compress src/app.ts --provider anthropic --trust reversible --source-map
# Exact-preservation mode: useful when you want metadata without transformations
npx glyph-compress src/app.ts --trust lossless --source-map
# Privacy-first mode for files that may contain secrets or customer data
npx glyph-compress .env --privacy --trust privacy --source-map
# JSON workspace inspection for automation or CI scripts
npx glyph-compress inspect "implement billing validation" --json
# Repository readiness check in JSON form
npx glyph-compress doctor --json
# Start the local OpenAI-compatible compression proxy
npx glyph-compress --proxy 8080

Cost savings: ~$200/month at 50 requests/day with Claude Sonnet.
🚀 Quick Start
Standalone (any project)
import { GlyphCompressor } from 'glyph-compress';
const gc = new GlyphCompressor({ level: 'standard' });
const { compressed, stats, sourceMap } = gc.compressText(
"Fix the TypeScript error in src/components/UserProfile.tsx line 42: " +
"Property 'name' does not exist on type 'User'"
);
console.log(compressed);
// → "⺌✗ ◈₍1₎:42 'name'∉User"
console.log(stats);
// → { ratio: '5.5x', savedPct: '82%' }
console.log(sourceMap.files);
// → [{ ref: '◈₍1₎', path: 'src/components/UserProfile.tsx', domain: 'frontend' }]

With OpenAI
import OpenAI from 'openai';
import { wrapOpenAI } from 'glyph-compress';
const client = wrapOpenAI(new OpenAI({ apiKey: process.env.OPENAI_API_KEY }));
// Every call is automatically compressed — the codebook is injected into the system prompt
const response = await client.chat.completions.create({
model: 'gpt-4',
messages: [
{ role: 'system', content: 'You are a senior developer.' },
{ role: 'user', content: 'Fix the error in UserProfile.tsx' },
],
});

With Anthropic Claude
import Anthropic from '@anthropic-ai/sdk';
import { wrapAnthropic } from 'glyph-compress';
const client = wrapAnthropic(new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY }));
const response = await client.messages.create({
model: 'claude-sonnet-4-20250514',
system: 'You are a senior developer.',
messages: [
{ role: 'user', content: 'Fix the error in UserProfile.tsx' },
],
});

With Antigravity (AI Coding Assistant)
For agentic IDEs like Antigravity, you can compress massive context payloads locally before passing them into the AI's prompt:
import { GlyphCompressor } from 'glyph-compress';
// Use "ultra" level to obliterate code bodies and comments into semantic summaries
const gc = new GlyphCompressor({ level: 'ultra' });
// 1. Inject this ONCE into your Antigravity System Prompt:
console.log(gc.getCodebookPrompt());
// 2. Compress and send massive files to Antigravity:
const { compressed, stats } = gc.compressText(massiveProjectContext);
console.log(compressed); // Send this to the LLM
console.log(stats); // → { ratio: '12.7x', savedPct: '92%' }

VS Code Extension
- Install from the VS Code Marketplace with extension id `neolambo.glyph-compress`.
- For the exact latest GitHub release build, download `glyph-compress-<version>.vsix` from GitHub Releases and install it locally:

  code.cmd --install-extension .\glyph-compress-1.9.3.vsix --force
  code.cmd --list-extensions --show-versions | Select-String -Pattern 'neolambo.glyph-compress'

- See live compression stats in the status bar: `⚡ GC: 3.5x | -1200 tok`
The Marketplace listing exists publicly; GitHub Releases are also published for users who need a specific VSIX version immediately after each release.
Zero-Friction Chat Integration (Copilot / Claude / Cursor)
GlyphCompress provides a fluid workflow for native IDE chats. The extension can optionally write workspace rules so Copilot and Cursor understand compressed glyph context.
The Magic Workflow:
- Optional Codebook Injection: Enable `glyphCompress.autoUpdateWorkspaceRules` to let GlyphCompress create/update `.github/copilot-instructions.md` and `.cursorrules` in your project root. Copilot and Cursor can then learn the Glyph dictionary from workspace rules.
- One-Click Ask (`Ctrl+Alt+G`): Highlight a massive chunk of code (or leave unselected to compress the whole file) and press `Ctrl+Alt+G` (or run `GlyphCompress: Ask LLM (Auto-Compress)`).
- Seamless Chat: The extension instantly compresses the code and automatically opens your VS Code Chat with the compressed text pre-filled. Just type your question and hit enter! The AI will parse the `[imp:3 ƒ:2 34L]` glyphs perfectly, saving you 90% of your context window.
Available Commands:
- `GlyphCompress: Ask LLM (Auto-Compress)` (Ctrl+Alt+G) — Instantly compress and open VS Code Chat
- `GlyphCompress: Copy System Codebook` — Instantly copy instructions for any LLM
- `GlyphCompress: Compress Selection` — Compress code and auto-copy to clipboard
- `GlyphCompress: Build Project Codebook` — Index your workspace files
- `GlyphCompress: Toggle Compression On/Off`
- `GlyphCompress: Show Compression Stats` — Dashboard with session statistics
- `GlyphCompress: Start Zero-Command Proxy` — Start the local compression proxy
- `GlyphCompress: Stop Zero-Command Proxy` — Stop the local compression proxy
- `GlyphCompress: Compress Entire Workspace` — Generate a compressed workspace summary
Settings:
{
"glyphCompress.enabled": true,
"glyphCompress.provider": "auto", // "auto" | "raw" | "openai" | "anthropic" | "antigravity" | "gemini" | "local"
"glyphCompress.compressionLevel": "standard", // "light" | "standard" | "aggressive" | "ultra"
"glyphCompress.trustPolicy": "auto", // "auto" | "lossless" | "reversible" | "privacy" | "lossy"
"glyphCompress.showStatusBar": true,
"glyphCompress.autoUpdateWorkspaceRules": false,
"glyphCompress.targetApiUrl": "https://api.openai.com"
}

👻 The Ultimate Magic: Zero-Command Transparent Proxy (v0.5.0+)
If you want 100% automatic, invisible compression without pressing any shortcuts, you can use the GlyphProxy. It intercepts the API calls made by your IDE, compresses the prompt on the fly, and saves your API tokens.
How to use the Proxy:
- Start the proxy server from the terminal:

  npx glyph-compress --proxy 8080

  (Or from the VS Code Command Palette: `GlyphCompress: Start Zero-Command Proxy`.)
- Configure your AI coding assistant to use the custom local endpoint:
  - API Base URL / Override API URL: `http://localhost:8080/v1`
  - API Key: Your real OpenAI/Anthropic key
🛠️ Step-by-Step IDE Integration Guide
Cursor IDE
- Open Cursor Settings (`Ctrl+Shift+J` or `Cmd+Shift+J`).
- Go to Models.
- Under OpenAI API Key, enter your real API key.
- Toggle Override OpenAI Base URL and set it to: `http://localhost:8080/v1`
- Magic! All Chat and Cmd+K requests will now be silently compressed.
Cline / RooCode (VS Code Extensions)
- Open the Cline/RooCode settings panel.
- Select OpenAI Compatible as your API Provider.
- Base URL: `http://localhost:8080/v1`
- API Key: Your real API key
- Model ID: `gpt-4o` (or whichever you prefer).
Continue.dev
- Open `~/.continue/config.json`.
- Add or edit your model configuration:
{
"title": "GPT-4o (Glyph Proxy)",
"provider": "openai",
"model": "gpt-4o",
"apiKey": "YOUR_REAL_API_KEY",
"apiBase": "http://localhost:8080/v1"
}

GitHub Copilot Chat
Note: Microsoft locks the API URL for the official Copilot extension for security reasons. To use GlyphCompress with the official Copilot, please use the Ctrl+Alt+G (One-Click Ask) shortcut provided by the GlyphCompress VS Code Extension.
3. Done!
You don't need to do anything else. When your IDE sends huge blocks of code to the LLM, the proxy intercepts the JSON request, minifies the code blocks, injects the codebook, and forwards the heavily compressed request to the real LLM API.
🔤 The Glyph Protocol
The system is built on 16 base radicals that encode fundamental semantic dimensions:
DOMAINS: ◈ Frontend ◉ AI/ML ◊ DevOps ◆ Database
◇ Language ⊕ Auto ⊗ Arch ⊙ Mobile
⊘ Cloud ⊚ Data ⊛ Testing ⊜ Backend
⊝ Security ⊞ Docs ⊟ Perf ⊠ Network
ACTIONS: ▲ Create ▼ Analyze ► Test ◄ Monitor
■ Document □ Connect ▪ Deploy ▫ Optimize
● Transform ○ Protect
TECH: ᵗ TypeScript ᵖ Python ʳ Rust ℜ React
ℕ Next.js 𝒟 Docker 𝒦 K8s ℙ Postgres
STRUCTURE: ✗ Error ⚠ Warning ∉ Type mismatch ∅ Not found
→ Returns ƒ Function 𝒞 Class ◇ State ⟿ Effect

Compression Levels
| Level | What it compresses | Use case |
|---|---|---|
| light | Prompt patterns, tech names | Low-risk, minimal changes |
| standard | Prompt patterns, tech names, file paths, diagnostics, repeated identifiers | Default coding assistant payloads |
| aggressive | Standard compression plus multi-language syntax minification inside code blocks | Debugging or review where code structure still matters |
| ultra | Aggressive compression plus architectural code summaries and redundancy stripping | Maximum context savings when inner code logic is less important |
Use `sourceMap` or `--source-map` whenever you need to inspect or reverse the compressed references after the payload is sent.
🏗️ Architecture
+------------------+ +--------------------+ +-------------+
| IDE / Tool |---->| GlyphCompress |---->| LLM API |
| | | | | |
| VS Code | | 1. Index files | | OpenAI |
| Antigravity | | 2. Compress ctx | | Claude |
| CLI script | | 3. Inject codebook | | Gemini |
| Custom app | | 4. Track stats | | |
+------------------+     +--------------------+     +-------------+

The codebook (~150 tokens) is injected once into the system prompt. The LLM learns to decode the glyphs from it and responds normally in natural language.
📦 Project Structure
glyph-compress/
├── src/
│ ├── index.js # Library entry point (ESM)
│ ├── index.d.ts # Stable TypeScript declarations
│ ├── workspace-intelligence.js # Workspace codebook, intent detection, and file ranking
│ ├── radical-alphabet.js # 96 symbols: radicals + glyphs
│ ├── compressor.js # Multi-level compression engine
│ └── system-prompt-generator.js# Codebook system prompt generator
├── vscode-ext/
│ ├── package.json # VS Code extension manifest
│ ├── extension.js # Extension activation & commands
│ └── glyph-middleware.js # Core middleware (OpenAI/Claude/Antigravity)
├── test/
│ ├── run-suites.js # Runs focused test suites
│ ├── unit.js # Core compressor and estimator checks
│ ├── cli.js # CLI explain/source-map smoke checks
│ ├── workspace.js # Workspace intelligence smoke checks
│ ├── metadata.js # Package/docs metadata checks
│ ├── benchmark.js # Trust and measurement benchmark harness
│ └── integration.js # 41 legacy integration checks
├── examples/
│ ├── openai-example.js # OpenAI usage example
│ └── claude-example.js # Claude usage example
├── package.json
├── SECURITY.md
├── PRIVACY.md
├── ENTERPRISE.md
├── COMMERCIAL_LICENSE.md
├── NOTICE
├── LICENSE
├── ROADMAP.md
└── README.md

🧪 Tests
# Run all test suites
npm test
# Run focused suites
npm run test:unit
npm run test:cli
npm run test:workspace
npm run test:extension
npm run test:proxy
npm run test:metadata
npm run test:integration
# Run the stable release validation bundle
npm run check
# Check local Markdown links
npm run check:links
# Run trust and measurement benchmark
npm run benchmark
# Run interactive demo
npm run demo

🔬 Theory
GlyphCompress is grounded in information theory:
- Shannon entropy tells us the theoretical compression limit for character-level encoding
- Kolmogorov complexity tells us that compression = understanding
- Semantic compression captures structural redundancy that standard algorithms (GZIP, Brotli) miss
The key insight: development communication is highly structured — the same patterns (fix error, deploy to, create component) repeat thousands of times with different parameters. By encoding these patterns as composable radicals, we achieve compression ratios far beyond what byte-level algorithms can reach.
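The Shannon bound above can be made concrete in a few lines. This is an illustrative sketch, not part of the package: it computes the entropy (bits per character) of a string, the theoretical floor for character-level encoding.

```javascript
// Shannon entropy of a string in bits per character — the theoretical
// lower bound for character-level compression of that string.
function entropy(text) {
  const counts = {};
  for (const ch of text) counts[ch] = (counts[ch] || 0) + 1;
  let h = 0;
  for (const n of Object.values(counts)) {
    const p = n / text.length;
    h -= p * Math.log2(p);
  }
  return h;
}

console.log(entropy('aaaa')); // 0 — no information, maximally compressible
console.log(entropy('abab')); // 1 — one bit per character
```

Semantic compression sidesteps this per-character floor: by moving shared structure into the codebook, the message itself carries fewer symbols than a character-level coder could ever achieve alone.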
Fundamental Law: Perfect compression is equivalent to perfect understanding. Information is redistributed — not lost — among the message, the codebook, and the receiver's context.
📜 Version History (Changelog)
v1.9.3 (Proxy Diagnostics Hotfix)
- Upstream Status Visibility: Proxy forwarding now logs upstream HTTP status codes for successful and failed provider responses.
- Safe Error Diagnostics: Upstream error bodies are logged with bearer tokens and API key fields redacted.
- Stream Completion Logs: Successful responses log completed byte counts, and early client disconnects are flagged for Continue/Cursor-style debugging.
v1.9.0 (Proxy and Packaging Hardening)
- Proxy Options Preserved: CLI and VS Code proxy startup now pass provider, trust policy, privacy, and target API options into the compressor.
- Gemini-Compatible Forwarding: OpenAI-compatible `/v1/*` requests are mapped to Gemini's `/v1beta/openai/*` route when the target is Google Generative Language.
- ESM Export Cleanup: Public ESM middleware imports now go through `src/glyph-middleware.js`, avoiding Node warnings caused by importing ESM from the VS Code package scope.
- Focused npm Tarball: The package allowlist now includes runtime files and essential docs without publishing broad outreach drafts or demo-generation scripts.
- VS Code Hardening: Proxy startup, status-bar toggling, and interval disposal were tightened for a cleaner extension lifecycle.
v1.8.0 (Safe Compression Trust Policies)
- Explicit Trust Policies: Added `TRUST_POLICY_PROFILES` for `lossless`, `reversible`, `privacy`, and `lossy` modes.
- Policy-Enforced Transformations: Lossless mode preserves input text, reversible mode blocks code minification/summaries, privacy mode redacts sensitive values, and lossy mode allows aggressive/ultra transformations.
- Source Map Trust Metadata: Added `sourceMap.trustPolicy` and `sourceMap.trust` for audit-friendly downstream tooling.
- CLI and VS Code Controls: Added the CLI `--trust <policy>` / `--policy <policy>` flags and the VS Code `glyphCompress.trustPolicy` setting.
- Release Metadata: Updated source map, workspace codebook, benchmark, README, roadmap, issue template, npm, and VS Code extension versions to 1.8.0.
v1.7.0 (Provider-Aware Compression Profiles)
- Provider Compression Profiles: Added `PROVIDER_COMPRESSION_PROFILES` for raw, OpenAI, Anthropic, Gemini-compatible, and local-model targets.
- Estimator-Guided Thresholds: Dynamic dictionary selection now uses provider-specific savings thresholds and dictionary caps.
- Source Map Metadata: Added top-level `sourceMap.provider` and `sourceMap.profile`, with provider/profile metadata on dynamic entries.
- CLI Provider Selection: Added `--provider <provider>` and explanation output for the selected provider/profile strategy.
- Release Metadata: Updated source map, workspace codebook, benchmark, README, roadmap, issue template, npm, and VS Code extension versions to 1.7.0.
v1.6.0 (AST-Like Code Block Source Spans)
- Code Block Token Maps: Added `codeBlocks[].tokens` entries for structural tokens inside aggressively minified and ultra-summarized code blocks.
- Top-Level AST Map: Added `sourceMap.ast` with span metadata and block mode for fast inspection by debugging, explain, and editor workflows.
- Language-Aware Coverage: Tracks imports, exports, functions, classes, declarations, return/yield, package/use/using, visibility, and type markers across common language families.
- TypeScript Declarations: Added `GlyphAstTokenSpan` and exposed `ast` through `getReversibleDictionaries()`.
- Release Metadata: Updated source map, workspace codebook, benchmark, README, roadmap, issue template, npm, and VS Code extension versions to 1.6.0.
v1.5.0 (Privacy Firewall Mode)
- Opt-In Redaction: Added `privacyFirewall: true` / `privacy: true` for library consumers and `--privacy` for the CLI.
- Sensitive Pattern Coverage: Redacts common API keys, secret assignments, bearer tokens, JWTs, GitHub tokens, AWS access keys, emails, and IPv4 addresses before compression.
- Safe Source Map Metadata: Added `sourceMap.privacy` with placeholder, kind, label, span, and short SHA-256 hash metadata without retaining raw sensitive values.
- Reversible Dictionary Access: Added privacy redaction entries to `getReversibleDictionaries()` for inspection workflows.
- Release Metadata: Updated source map, workspace codebook, benchmark, README, roadmap, issue template, npm, and VS Code extension versions to 1.5.0.
v1.4.0 (Extension & Proxy Smoke Suites)
- VS Code Activation Coverage: Added a mocked VS Code extension host smoke suite that checks activation, command registration, output logging, and subscription tracking.
- Proxy Coverage: Added a local proxy smoke suite that stubs upstream HTTPS, verifies compressed chat forwarding, preserves OpenAI-compatible paths, and checks the corrected `content-length`.
- Activation Hardening: Updated the extension activation path to require `glyph-middleware.cjs`, keeping VS Code's CommonJS host aligned with the packaged middleware artifact.
- Focused Scripts: Added `npm run test:extension` and `npm run test:proxy`, and included both in the full suite runner.
- Release Metadata: Updated source map, workspace codebook, benchmark, README, roadmap, issue template, npm, and VS Code extension versions to 1.4.0.
v1.3.0 (Semantic Source Map Spans)
- Line/Column Ranges: Added span metadata with line, column, and offset positions for prompt, tech, file, diagnostic, dynamic dictionary, and code block mappings.
- Symbol-Level Source Maps: Added `sourceMap.symbols` to map emitted glyphs back to their original source text and replacement kind.
- Reversible Dictionaries: Added symbol spans to `getReversibleDictionaries()` for downstream inspection and debugging workflows.
- TypeScript Declarations: Added typed source position, source span, and symbol span interfaces.
- Integration Coverage: Expanded integration coverage to 41 checks with multi-line span assertions.
v1.2.0 (Provider Accuracy & Test Suites)
- Provider-Aware Estimates: Added reusable token estimator profiles for raw text, OpenAI, Anthropic, Gemini-compatible, and local-model payloads.
- Public API Exports: Exposed `estimateProviderTokens()`, `compareTokenEstimates()`, `normalizeProvider()`, and `PROVIDER_TOKEN_PROFILES` from ESM, CommonJS, and TypeScript declarations.
- Benchmark Accuracy: Updated the benchmark to use provider-specific estimates for chat-style fixtures.
- Split Test Suites: Added focused unit, CLI, workspace, metadata, and integration suite scripts, with `npm test` orchestrating all suites.
- Integration Coverage: Expanded integration metadata coverage to 40 checks.
v1.1.1 (License Hardening)
- Precise SPDX Metadata: Updated npm and VS Code extension metadata to `AGPL-3.0-only`.
- Commercial Gate: Reworked commercial licensing language to state that proprietary, hosted, SaaS, embedded, OEM, marketplace, or private redistribution rights require a separate written agreement.
- NOTICE and Policy Docs: Added `NOTICE` and `docs/licensing.md`, and included them in npm packaging.
- Contributor Terms: Added contribution licensing terms and a pull request checklist item to protect future dual licensing.
v1.1.0 (Contributor & Release Hygiene)
- Contributor Guide: Added setup, testing, documentation style, public API, and release-process guidance.
- Release and Architecture Docs: Added focused maintainer checklists and architecture notes under `docs/`.
- GitHub Templates: Added bug, feature, provider compatibility, benchmark submission, and pull request templates.
- Link Checking: Added `npm run check:links` and CI coverage for local Markdown links.
- Integration Coverage: Added contributor hygiene metadata checks to the integration suite.
v1.0.0 (Stable Platform)
- Stable API Surface: Documented and typed the public `GlyphCompressor`, provider wrappers, source maps, workspace intelligence helpers, and CLI workflows.
- TypeScript Declarations: Added `src/index.d.ts` and middleware subpath declarations for editor and package consumer support.
- CI Validation: Added GitHub Actions for Node 20/22 tests, benchmarks, npm pack dry-runs, and VS Code extension packaging.
- Formal Docs: Added security, privacy, and enterprise deployment documents for production adoption.
- Packaging Hygiene: Added a package allowlist to avoid publishing generated codebooks, scratch files, historical VSIX files, and unnecessary assets.
v0.9.0 (Workspace Intelligence)
- Persistent Codebook: Added workspace scanning and `.glyphcompress/codebook.json` output with files, symbols, imports, diagnostics, owners, and git context.
- Intent Detection: Added workflow detection for fix error, review diff, implement feature, explain architecture, write tests, and optimize performance.
- Relevant File Ranking: Added query-aware file selection so future compression can focus on relevant files by default.
- CLI Commands: Added `inspect`, `doctor`, and `benchmark` commands with JSON support for automation.
- Integration Coverage: Added workspace intelligence, codebook persistence, doctor, CLI inspect, and intent-detection tests.
v0.8.0 (Reversible Compression & Source Maps)
- Source Map API: `compressText()` and `compressMessages()` return a `sourceMap` object for files, dynamic identifiers, diagnostics, code blocks, and replacements.
- Reversible Dictionaries: Added `getSourceMap()` and `getReversibleDictionaries()` to inspect mappings after compression.
- CLI Source Maps: Added `--source-map` to print reversible source map metadata from the CLI.
- CommonJS Alignment: Regenerated the CommonJS middleware so `require('glyph-compress')` consumers receive the same source map behavior.
- Integration Coverage: Added source map, dynamic dictionary, and CLI source-map tests.
v0.7.0 (Trust & Measurement)
- Benchmark Harness: Added `test/benchmark.js` and `npm run benchmark` for representative raw, OpenAI, Anthropic, Gemini-compatible, and ultra-mode payloads.
- Trust Metrics: Reports payload compression ratio, token savings, context fidelity score, edit success proxy, and hallucinated file references.
- CLI Explain Mode: Added `--explain` / `-x` to show compression behavior, indexed file refs, dynamic dictionary entries, and detected changes.
- Integration Coverage: Added CLI explain coverage to the integration test suite.
v0.6.1 (Packaging & VS Code Hardening)
- Root API Alignment: Exported `GlyphCompressor`, `wrapOpenAI`, `wrapAnthropic`, and `CODEBOOK_PROMPT` from the package root to match the README examples.
- CommonJS Entry Point: Added `src/index.cjs` so the declared `require` entry works for CommonJS consumers.
- VS Code Extension Fixes: The proxy now uses `glyphCompress.targetApiUrl`, workspace rule injection is opt-in, `ultra` is exposed in settings, and the extension test script points to the existing integration suite.
v0.6.0 (Project "Rosetta")
- Adaptive Payload Dictionary (APD): Introduced a real-time frequency analyzer that identifies and maps the heaviest token-consuming strings to a dynamic Unicode dictionary.
- Semantic Context Elision (Blackout Algorithm): Implemented `_elideIrrelevantContext` to strip out unrelated function bodies based on the intent of the user query.
- Anthropic Prompt Caching: Auto-injects `cache_control: { type: 'ephemeral' }` into heavily weighted blocks for Claude optimization.
- Indentation Minification: Added an explicit layer that converts leading spaces to tabs in all structural context blocks.
v0.5.1 (Universal Minification & Gemini Integration)
- Universal Minification: Expanded `aggressive` minification to remove comments (`//`, `/* */`, `<!-- -->`, `#`) and empty lines across all supported languages (C-family, Python, Ruby, web markup, CSS, etc.).
- Gemini Compatibility: Enhanced the zero-command proxy to dynamically route standard OpenAI requests (`/v1/`) to Google Gemini's official OpenAI-compatible endpoint (`/v1beta/openai/`).
v0.5.0 (Zero-Command Transparent Proxy)
- Invisible Proxy Middleware: Added `src/proxy.js`, a local HTTP server that intercepts OpenAI-compatible API requests.
- True Zero Commands: Point your IDE's API Base URL at `localhost:8080`, and GlyphCompress automatically intercepts, parses, and minifies your code blocks before they hit the real API.
- Proxy Controls: Added proxy start/stop commands to both the CLI (`--proxy`) and the VS Code extension.
v0.4.0 (Multi-Language Syntax Minification)
- Intelligent Minification: Upgraded the `aggressive` compression level. Instead of destructively summarizing code blocks, it now applies syntax minification that preserves logic and structure for debugging.
- Broad Language Support: Added targeted regex parsing for C, C++, Python, Java, C#, Rust, Go, JavaScript, and TypeScript.
- Enhanced Codebook: Expanded the glyph dictionary to include universal concepts like variables (`◇`), returns (`→`), and types (`◇t`).
v0.3.6 (Zero-Friction Base)
- True Zero-Friction UX: The extension now automatically creates and updates `.cursorrules` and `.github/copilot-instructions.md` with the dynamic codebook, teaching AI assistants the semantic dictionary entirely in the background.
- One-Click Ask (`Ctrl+Alt+G`): Added a command that compresses the current file/selection and opens the native VS Code Chat sidebar, eliminating all copy-paste steps.
v0.3.4 (Zero-Friction Base)
- Zero-Friction LLM Chat Integration: Added a `GlyphCompress: Copy System Codebook` command. You can now copy the codebook to your clipboard and paste it into Copilot/Claude Chat custom instructions, making GlyphCompress interoperable with any built-in IDE chat.
v0.3.3 (VS Code Selection Fix)
- VS Code Extension Fix: The `Compress Selection` command now automatically detects the editor language and wraps raw text in markdown backticks, ensuring the "Ultra" semantic compressor triggers correctly for code snippets.
v0.3.2 (Monetization & Legal)
- Monetization & Legal: Migrated to Dual Licensing model (AGPL-3.0-only for open source, Enterprise for commercial).
- Marketplace Publishing: Added official support and documentation for the Visual Studio Code Marketplace.
- Funding Support: Enabled GitHub Sponsors and NPM funding links natively.
v0.3.0 & v0.3.1 (Next-Gen Features)
- Global CLI Tool (`npx glyph-compress`): Added the ability to compress and copy code directly from your terminal.
- Multi-Language Ultra Parser: Extended the "Ultra" semantic code-block compressor to natively support Python, Rust, Go, Java, and C#.
- Persistent Telemetry: Added `globalState` tracking in VS Code to calculate lifetime savings across all sessions.
v0.2.0 (Advanced Edition)
- Dynamic Dictionary (Auto-Tuning): Implemented runtime frequency analysis to map repeated long variable/class names to single Greek letters (`α`, `β`).
- "Ultra" Compression Level: Introduced lossy semantic stripping that removes `console.log()` calls and inline/block comments before compression.
- Anthropic Prompt Caching: Added native support for Claude's `cache_control: { type: 'ephemeral' }` to drastically reduce the codebook cost in long chat sessions.
- Antigravity Support: Verified full compatibility with the Google Antigravity platform.
v0.1.0 (Initial Release)
- Glyph Protocol v0.1: Defined the core 96-symbol dictionary mapping tech stacks, domains, and common actions to Unicode symbols.
- Code Block Summarizer: Introduced the `[imp:3 ƒ:2 44L]` structural summary format for code blocks.
- Middleware API: Created wrappers for the OpenAI and Anthropic SDKs to automatically inject the codebook and compress user messages.
⚖️ Dual Licensing Model
GlyphCompress is distributed under a dual-license model:
- Open-source license: AGPL-3.0-only. The public repository and npm package may be used under the AGPL-3.0-only terms in LICENSE. If you modify, integrate, redistribute, or offer GlyphCompress over a network, make sure you can satisfy the AGPL obligations.
- Commercial license: Proprietary, closed-source, private redistribution, SaaS, hosted, embedded, OEM, marketplace, or enterprise use without AGPL obligations requires a separate written commercial agreement. Downloading, installing, forking, importing, or bundling the package does not grant commercial rights.
See COMMERCIAL_LICENSE.md, docs/licensing.md, and NOTICE for the project licensing position. For commercial terms, contact [email protected].
🤝 Contributing
Contributions welcome! Areas of interest:
- New radicals for emerging technologies
- Language support for non-English prompts
- VS Code Marketplace metadata, examples, and compatibility reports
- Benchmark data from real-world IDE sessions
- LLM comprehension tests with different models
By submitting a contribution, you confirm that it can be used under the project dual-license model described in CONTRIBUTING.md.
