yalc-gtm-os
v0.12.0
YALC — open-source, AI-native GTM operating system. CLI-first engine for lead finding, enrichment, qualification, and campaign orchestration.
YALC — The Open-Source GTM Operating System
AI plans your campaigns, qualifies your leads, and learns from every interaction.
YALC is an open-source, AI-native operating system for running any GTM campaign. CLI-first. Intelligence compounds from every interaction.
Quick Start
Node.js 20 or higher required (nodejs.org).
```
npm install -g yalc-gtm-os
yalc-gtm start
```

That is the whole installation. The `start` command writes its config to `~/.gtm-os/`, asks for your company website URL, then opens the browser at `/setup/review` so you can confirm the inferred framework. No long terminal interview by default.
YALC is a CLI; there's no public Node API to import.
Updating
```
npm update -g yalc-gtm-os
```

From source (contributors)
If you want to hack on YALC itself, clone the repo and link locally:
```
git clone https://github.com/Othmane-Khadri/YALC-the-GTM-operating-system.git
cd YALC-the-GTM-operating-system
corepack enable && corepack prepare pnpm@latest --activate
pnpm install
pnpm link --global
```

If `pnpm link --global` fails with `ERR_PNPM_NO_GLOBAL_BIN_DIR` (or you are on Windows), run YALC in-repo with `pnpm cli start` instead.
What `yalc-gtm start` does:
- Prompts you for your company website URL (one question, that is it).
- Scaffolds `~/.gtm-os/` and writes a `.env` template the first time it runs.
- Scrapes the website, runs synthesis, and stages a draft framework into `_preview/`.
- Spawns a local dashboard server on port 3847 and opens `http://localhost:3847/setup/review` in your browser so you can confirm or edit each section before committing.
High-confidence sections auto-commit; low-confidence ones queue at /setup/review for your sign-off. You can re-run any section from the SPA or via yalc-gtm start --regenerate <section>.
If you do not have an ANTHROPIC_API_KEY set, the framework synthesis steps are skipped — you can add the key later and re-run yalc-gtm onboard then yalc-gtm configure. Inside Claude Code, the parent session provides the LLM + WebFetch, so no Anthropic / Firecrawl keys are required.
Prefer a terminal interview?
If you would rather walk through the legacy 4-step terminal interview instead of opening the browser, pass --review-in-chat:
```
yalc-gtm start --review-in-chat
```

After Setup
```
# Easiest: describe what you want in natural language and let YALC plan the work
yalc-gtm orchestrate "find 10 SaaS CTOs matching my ICP and qualify them"

# Create a campaign
yalc-gtm campaign:create --title "Q2 Outbound" --hypothesis "VP Eng responds to pain-point messaging"

# Track campaign progress
yalc-gtm campaign:track --dry-run

# Or qualify a lead list you already have (CSV or JSON)
yalc-gtm leads:qualify --source csv --input ./your-leads.csv --dry-run

# Send via a non-default email provider (e.g. Brevo via the MCP template)
yalc-gtm email:send --provider brevo --to [email protected] --body "Hi there"
```

Non-Interactive Setup
For CI or automation, set your keys in ~/.gtm-os/.env (or .env.local in your project) and run:
```
yalc-gtm start --non-interactive
```

A minimal env file looks like:
```
ANTHROPIC_API_KEY=sk-ant-...
UNIPILE_API_KEY=...
UNIPILE_DSN=https://api{N}.unipile.com:{PORT}
NOTION_API_KEY=secret_...
ENCRYPTION_KEY=$(openssl rand -hex 32)
```

Recommended workflow: drive YALC from your IDE chat
YALC is designed to be driven by an AI assistant — Claude Code, Cursor, Copilot, or whatever you have open. Once it's installed globally, you don't need to remember commands. You just talk to your assistant.
Typical flow inside Cursor or VS Code with Claude Code:
- Install once: `npm i -g yalc-gtm-os`.
- Open your IDE and ask in plain language: "Set up YALC for my company, then find 10 SaaS CTOs and qualify them."
- The assistant runs the commands. Interactive prompts from `start` show up in the same chat panel; you answer them inline.
Every command also works directly in a terminal if you prefer that style. The "Using YALC from Claude Code" section below has the details on how Claude Code integrates with YALC's commands and how the LLM hand-off works when no ANTHROPIC_API_KEY is set.
Features at a Glance
- 20 built-in skills — qualify, scrape, campaign, orchestrate, personalize, competitive-intel, and more
- 7 providers — Unipile, Crustdata, Firecrawl, Notion, FullEnrich, Instantly, Anthropic
- Multi-channel campaigns — LinkedIn + Email with A/B variant testing
- Intelligence store — learns from every campaign outcome (hypothesis → validated → proven)
- Statistical significance — chi-squared testing to pick variant winners
- Campaign dashboard — real-time analytics, funnel views, Claude-powered Q&A
- Rate limiting — DB-backed token bucket on all external sends
- Outbound validation — every message checked before send, hard blocks on violations
- Background agents — launchd-integrated for automated campaign tracking
- Natural language orchestration — describe what you want, YALC plans the workflow
- Swappable email providers — Instantly built in, plus drop-in MCP templates for Brevo, Mailgun, and SendGrid (`provider:add --mcp <name>` then `email:send --provider <name>`)
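The "statistical significance" bullet above refers to a standard 2×2 chi-squared comparison of variant outcomes. As an independent sketch (not YALC's actual implementation; the function name and numbers are hypothetical), the core arithmetic looks like this:

```typescript
// 2x2 chi-squared statistic for comparing two variants' reply rates.
// Inputs: replies and total sends per variant.
function chiSquared2x2(
  aReplies: number, aSends: number,
  bReplies: number, bSends: number
): number {
  const a = aReplies, b = aSends - aReplies; // variant A: replied / not
  const c = bReplies, d = bSends - bReplies; // variant B: replied / not
  const n = a + b + c + d;
  const num = n * (a * d - b * c) ** 2;
  const den = (a + b) * (c + d) * (a + c) * (b + d);
  return den === 0 ? 0 : num / den;
}

// With 1 degree of freedom, a statistic above 3.841 means p < 0.05.
const chi2 = chiSquared2x2(40, 200, 20, 200);
const significant = chi2 > 3.841;
```

A 20% vs 10% reply rate over 200 sends each clears the 0.05 threshold; smaller samples with the same rates may not, which is why the winner is only picked once significance is reached.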
Using YALC from Claude Code (IDE or Terminal)
YALC works the same whether you run it from a coding IDE (VS Code, Cursor) or a standalone terminal. The CLI uses the same interactive prompts in both.
IDE (VS Code / Cursor with Claude Code extension):
You can ask Claude Code to run commands for you. For the initial setup, it's better to run yalc-gtm start yourself in the integrated terminal so you can answer the interactive prompts. After that, Claude Code can run any YALC command on your behalf — qualifying leads, creating campaigns, tracking results.
If your ANTHROPIC_API_KEY is already in your environment (common in Claude Code sessions), the start command detects it automatically and skips the prompt.
Terminal (standalone): Run commands directly. The interactive prompts work as expected in any terminal emulator.
Running YALC inside Claude Code (no extra keys required)
When YALC detects a parent Claude Code session — via CLAUDECODE, CLAUDE_CODE_ENTRYPOINT, or CLAUDE_CODE_SSE_PORT env vars set by Claude Code itself — both the Anthropic and Firecrawl keys become optional:
- The parent CC session already provides LLM reasoning, so a separate Anthropic API key isn't needed for ad-hoc planning, qualification, or personalization (just ask Claude Code).
- Claude Code's built-in `WebFetch` tool covers single-URL scrapes, so Firecrawl is only needed for JS-rendered pages, multi-page crawls, or web search.
- Claude Code's `WebSearch` is also honored when onboarding needs to discover a company URL — if you skip the website prompt, YALC asks the parent CC session to run a `WebSearch` for `"<company> official website"` instead of calling Firecrawl.
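The detection described above amounts to a simple env-var check. A minimal sketch, with hypothetical function names (not YALC's actual API):

```typescript
// YALC treats provider keys as optional when any of the env vars that
// Claude Code sets on child processes is present.
type Env = Record<string, string | undefined>;

function inClaudeCodeSession(env: Env): boolean {
  return Boolean(
    env.CLAUDECODE || env.CLAUDE_CODE_ENTRYPOINT || env.CLAUDE_CODE_SSE_PORT
  );
}

// An Anthropic key is only strictly required outside a parent CC session.
function anthropicKeyRequired(env: Env): boolean {
  return !inClaudeCodeSession(env) && !env.ANTHROPIC_API_KEY;
}
```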
What works in Claude Code mode with zero provider keys:
| Command | Works? | Notes |
|---|---|---|
| start | ✓ | Steps 1–2 complete; Steps 3–4 (framework synth, goals) are skipped with a "come back after adding ANTHROPIC_API_KEY" message |
| leads:import | ✓ | Pure CSV/JSON ingest, no LLM |
| campaign:create (with --title + --hypothesis) | ✓ | LLM is only used for the optional auto-plan path |
| campaign:track, campaign:schedule, campaign:report (data-only) | ✓ | Pure CRUD against Notion / DB |
| notion:sync, notion:bootstrap | ✓ | |
| email:send, email:status | ✓ | Sends pre-written copy via Instantly |
| orchestrate, leads:qualify, personalize, competitive-intel | Redirect | Prints a message and exits cleanly without doing the work. Re-issue the request inside a Claude Code session so the parent LLM runs it, or add an ANTHROPIC_API_KEY and run again standalone. |
When you DO still want an Anthropic key:
- Running YALC standalone (no parent CC session)
- Running YALC under cron, launchd, CI, or any unattended scheduler
- You want the qualifier / personalizer / orchestrator to run autonomously without you babysitting it from a CC chat
Web-fetch provider override — set WEB_FETCH_PROVIDER in .env.local:
- `auto` (default) — use Firecrawl if present, otherwise hand off to Claude Code's `WebFetch`
- `firecrawl` — force Firecrawl, error if no key
- `claude-code` — never call Firecrawl; commands that need a web fetch will emit a "fetch this URL with `WebFetch` and re-run with `--input <file>`" handoff
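The three settings reduce to a small resolution function. This is an illustrative sketch of the semantics above, not YALC's internal code:

```typescript
// Resolve which web-fetch path to use from WEB_FETCH_PROVIDER.
type FetchPath = "firecrawl" | "claude-code";

function resolveWebFetch(
  setting: string | undefined,
  hasFirecrawlKey: boolean
): FetchPath {
  switch (setting ?? "auto") {
    case "firecrawl":
      // Forced Firecrawl: a missing key is a hard error.
      if (!hasFirecrawlKey) throw new Error("FIRECRAWL_API_KEY missing");
      return "firecrawl";
    case "claude-code":
      // Never call Firecrawl; emit the WebFetch handoff instead.
      return "claude-code";
    default: // "auto"
      return hasFirecrawlKey ? "firecrawl" : "claude-code";
  }
}
```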
File Structure — Where Things Live:
```
~/.gtm-os/                      Your GTM brain (persists across projects)
├── config.yaml                 Provider settings, Notion IDs, rate limits
├── framework.yaml              GTM framework — ICP, positioning, signals
├── qualification_rules.md      Lead qualification patterns (auto-generated)
├── campaign_templates.yaml     Outreach copy templates (auto-generated)
├── search_queries.txt          Monitoring keywords (auto-generated)
├── logs/agents/                Background agent run logs (JSON per run)
└── tenants/<slug>/             Per-tenant overrides (multi-company mode)

./data/                         Working data (in your project directory)
├── leads/                      CSV/JSON lead lists for qualification
├── intelligence/               Campaign learnings and insights
└── campaigns/                  Campaign exports and reports
```

When talking to Claude Code, reference these locations directly:
- "Update my qualification rules" → edits `~/.gtm-os/qualification_rules.md`
- "Add a segment to my framework" → edits `~/.gtm-os/framework.yaml`
- "Qualify leads from this CSV" → reads from `./data/leads/`
Architecture
```
┌──────────────────────────────────────────────────────────┐
│                       CLI Layer                          │
│  campaign:track · campaign:create · leads:qualify · ...  │
├──────────────────────────────────────────────────────────┤
│                      Skills Layer                        │
│  qualify · scrape-linkedin · answer-comments · email ·   │
│  orchestrate · visualize · monthly-report                │
├──────────────────────────────────────────────────────────┤
│                     Providers Layer                      │
│  Unipile · Crustdata · Firecrawl · Notion · FullEnrich   │
├──────────────────────────────────────────────────────────┤
│                     Services Layer                       │
│  API wrappers · Rate limiter · Outbound validator        │
├──────────────────────────────────────────────────────────┤
│                       Data Layer                         │
│  Drizzle ORM · SQLite/Turso · Intelligence Store         │
└──────────────────────────────────────────────────────────┘
```

Three-layer pattern: Service (API wrapper) → Provider (StepExecutor) → Skill (user-facing operation). Never skip layers.
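A minimal sketch of that layering, with hypothetical names (YALC's real interfaces differ):

```typescript
// Service layer: thin API wrapper. The real thing would call the Notion
// API; this stand-in just returns a page id.
class NotionService {
  upsertPage(title: string): string {
    return `page:${title}`;
  }
}

// Provider layer: a StepExecutor that wraps the service.
class NotionProvider {
  constructor(private svc: NotionService) {}
  execute(step: { title: string }): string {
    return this.svc.upsertPage(step.title);
  }
}

// Skill layer: user-facing operation. It only talks to the provider and
// never reaches past it into the service — "never skip layers".
function syncCampaignSkill(provider: NotionProvider, title: string): string {
  return provider.execute({ title });
}
```

The payoff of the discipline is that a provider can be swapped (e.g. a declarative manifest instead of a TypeScript adapter) without touching any skill.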
Providers
YALC ships providers in two forms: built-in TypeScript adapters (compiled into the package) and bundled declarative manifests (YAML under configs/adapters/). Both surface through the same capability registry. Run yalc-gtm adapters:list for the live view.
Built-in TypeScript adapters
| Provider | Capabilities | Env Var |
|----------|-------------|---------|
| Unipile | LinkedIn search, connections, DMs, scraping | UNIPILE_API_KEY, UNIPILE_DSN |
| Crustdata | Company/people search, enrichment | CRUSTDATA_API_KEY |
| Firecrawl | Web scraping, search (optional inside Claude Code) | FIRECRAWL_API_KEY |
| Notion | Database sync, page management | NOTION_API_KEY |
| FullEnrich | Email/phone enrichment | FULLENRICH_API_KEY |
| Instantly | Cold email sending, sequence management | INSTANTLY_API_KEY |
| Anthropic | AI planning, qualification, personalization (optional inside Claude Code) | ANTHROPIC_API_KEY |
| Voyage | Embeddings (memory) | VOYAGE_API_KEY |
| Playwright | Asset rendering (PDF/PNG from HTML) — optional dep | pnpm add playwright |
Note on asset-rendering: the `playwright` provider is shipped as an optional dependency. HTML rendering always works; PDF/PNG rendering kicks in once you run `pnpm add playwright && npx playwright install chromium`.
Bundled declarative adapters
| Capability | Provider | Env Var | Manifest |
|---|---|---|---|
| people-enrich | peopledatalabs | PEOPLEDATALABS_API_KEY | configs/adapters/people-enrich-peopledatalabs.yaml |
| crm-contact-upsert | hubspot | HUBSPOT_API_KEY | configs/adapters/crm-contact-upsert-hubspot.yaml |
| email-campaign-create | brevo | BREVO_API_KEY | configs/adapters/email-campaign-create-brevo.yaml |
| landing-page-deploy | vercel | VERCEL_TOKEN (+ optional VERCEL_TEAM_ID) | configs/adapters/landing-page-deploy-vercel.yaml |
Counts at this commit: 20 built-in TypeScript adapters across 18 capabilities; 4 bundled declarative manifests. See docs/providers.md for setup and override semantics.
Skills
| Skill | Category | Description |
|-------|----------|-------------|
| qualify-leads | data | 7-gate lead qualification pipeline |
| scrape-linkedin | data | Scrape post engagers (likers/commenters) |
| answer-comments | outreach | Reply to LinkedIn post comments |
| email-sequence | content | Generate email drip sequences |
| visualize-campaigns | analysis | Campaign dashboards |
| monthly-campaign-report | analysis | Cross-campaign intelligence report |
| orchestrate | integration | Multi-step workflow from natural language |
CLI Commands
```
start                      Guided onboarding — keys, context, framework, goals in one flow
setup                      Check API keys and provider connectivity
onboard                    Build GTM framework from profile/website
campaign:track             Poll Unipile, advance sequences, sync Notion
campaign:create            Create campaign with A/B variant testing
campaign:report            Generate weekly intelligence report
campaign:monthly-report    Cross-campaign monthly report
campaign:dashboard         Open visualization dashboard
leads:qualify              Run 7-gate qualification pipeline
leads:scrape-post          Scrape LinkedIn post engagers
leads:import               Import leads from CSV/JSON/Notion
linkedin:answer-comments   Reply to LinkedIn post comments
email:send                 Send a sequence or single message (pick the email provider with --provider <name>)
email:create-sequence      Generate email drip sequence
notion:sync                Bidirectional SQLite ↔ Notion sync
notion:bootstrap           Import existing Notion data to SQLite
orchestrate                Natural language → phased skill execution
agent:run                  Run background agent immediately
agent:install              Install agent as launchd service
agent:list                 List agents with last run status
```

The listing above covers the common commands. The full surface also includes the `crm:*` (CRM sync and import), `email:*` (send, accounts, status), `provider:*`, `memory:*`, `context:*`, `pipeline:*`, `skills:*`, and `tenant:*` families, plus `configure`, `doctor`, `update`, `personalize`, `competitive-intel`, `test-run`, and `campaign:schedule`. Run `yalc-gtm --help` for the complete list.
All commands that send or write support --dry-run. See Command Reference for full details, flags, and examples.
Documentation
| Guide | What it covers |
|-------|---------------|
| First Run Tutorial | Step-by-step walkthrough of start, plus 3 mini-tutorials |
| Provider Setup | How to get and configure API keys for each provider |
| Command Reference | Every CLI command with flags, examples, and expected output |
| Skills Catalog | All 20 built-in skills with scenarios and decision tree |
| MCP Integration | How MCP works with GTM-OS, current status, and roadmap |
| Troubleshooting | Common errors and fixes, organized by layer |
| Background Agents | Agent architecture, creation, scheduling |
| Intelligence Store | Intelligence schema, categories, confidence lifecycle |
| Architecture | High-level project map |
| Systems Architecture | Deep dive into 8 core systems |
Configuration
YALC uses ~/.gtm-os/config.yaml for persistent configuration:
```yaml
notion:
  campaigns_ds: ""
  leads_ds: ""
  variants_ds: ""
  parent_page: ""
unipile:
  daily_connect_limit: 30
  sequence_timing:
    connect_to_dm1_days: 2
    dm1_to_dm2_days: 3
  rate_limit_ms: 3000
qualification:
  rules_path: ~/.gtm-os/qualification_rules.md
  exclusion_path: ~/.gtm-os/exclusion_list.md
  disqualifiers_path: ~/.gtm-os/company_disqualifiers.md
  cache_ttl_days: 30
crustdata:
  max_results_per_query: 50
fullenrich:
  poll_interval_ms: 2000
  poll_timeout_ms: 300000
memory:
  embeddings:
    provider: voyage   # voyage (default) | openai
```

Env file precedence
YALC loads `~/.gtm-os/.env` automatically on every run (followed by `.env.local` in the current working directory as a fallback). Variables already present in your shell environment win — `~/.gtm-os/.env` only fills in keys that aren't already set. To stop using a provider, remove its line from `~/.gtm-os/.env` rather than unsetting it in your terminal, since the file is reloaded on the next invocation.
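That precedence can be sketched as a merge where higher-priority sources override lower ones (helper name is hypothetical, not YALC's code):

```typescript
// Shell env wins, then ~/.gtm-os/.env, then .env.local fills what's left.
type Env = Record<string, string | undefined>;

function mergeEnv(shell: Env, homeDotEnv: Env, localDotEnv: Env): Env {
  const defined = (e: Env): Env =>
    Object.fromEntries(Object.entries(e).filter(([, v]) => v !== undefined));
  // Later spreads override earlier ones, so spread lowest priority first.
  return { ...defined(localDotEnv), ...defined(homeDotEnv), ...defined(shell) };
}
```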
Key Design Decisions
- Intelligence everywhere: Every campaign outcome feeds the intelligence store. The system learns what works per segment/channel.
- Outbound validation: Every human-facing message passes through `validateMessage()`. Hard violations block sends.
- Rate limiting: DB-backed token bucket rate limiter on all external sends (LinkedIn connects, DMs, emails).
- No silent mocks: Provider registry throws `ProviderNotFoundError` with suggestions instead of silently falling back to mock data.
- Transactions: All campaign tracker DB writes are wrapped in Drizzle transactions.
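The token-bucket idea behind the rate-limiting decision can be sketched in memory (YALC's version is DB-backed; the class name and numbers here are illustrative):

```typescript
// Token bucket: capacity caps bursts, refillPerMs sets the steady rate.
// Timestamps are passed in explicitly to keep the sketch deterministic.
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(
    private capacity: number,
    private refillPerMs: number,
    now = 0
  ) {
    this.tokens = capacity;
    this.last = now;
  }

  tryConsume(now: number): boolean {
    // Refill proportionally to elapsed time, capped at capacity.
    const elapsed = now - this.last;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.refillPerMs);
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1; // one send consumes one token
      return true;
    }
    return false; // caller should wait or queue the send
  }
}
```

Backing the bucket with the DB (as YALC does) makes the limit survive restarts and apply across concurrent processes, which an in-memory bucket cannot.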
Contributing
- Follow the three-layer pattern: Service → Provider → Skill
- Run `pnpm typecheck` after every file change
- Support `--dry-run` on any command that sends or writes
- Never log API keys — use the `sk-...redacted` pattern
- Wire campaign outcomes to the intelligence store
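The `--dry-run` convention amounts to a guard before any external call. A hypothetical handler (names and messages are illustrative, not YALC's code):

```typescript
// When --dry-run is set, report what would happen and return before
// touching any provider.
function sendEmail(
  to: string,
  body: string,
  opts: { dryRun: boolean }
): string {
  if (opts.dryRun) {
    return `[dry-run] would send ${body.length} chars to ${to}`;
  }
  // A real handler would call the configured email provider here.
  return `sent to ${to}`;
}
```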
License
MIT
