jobjourney-claude-plugin (v3.1.33)

FastMCP server for JobJourney - save jobs, track applications, AI job fit evaluation, cover letter generation, interview prep, coffee chat networking, and more.
🚀 JobJourney Claude Plugin
A production-ready MCP server for JobJourney with AI job-search tools, local job discovery, and scheduled scraping from Claude.
✨ What It Does
- 🤖 AI job-search workflows for resume fit scoring, cover letters, CV generation, interview prep, and career chat
- 🗂️ Application tracking with saved jobs, notes, status changes, starring, search, and dashboard analytics
- 🔍 Local job discovery with a canonical discovery engine that stores results in local SQLite
- 🌐 Mixed scraping strategy: LinkedIn uses direct HTTP guest scraping, while sites that block direct HTTP, like SEEK, use Playwright
- 🏢 ATS expansion for supported providers like Greenhouse and Lever after discovery
- ⏰ Scheduled discovery through the background agent and MCP tools
- 💾 Local storage for jobs, runs, schedules, and discovery reports in `~/.jobjourney/jobs.db`
📸 Demo
Use it naturally from Claude:
"Use
discover_jobswith keywordfull stack, locationSydney, sourcesseek, pages1."
"Use
search_jobsand show me the latest LinkedIn roles in Sydney."
"Use
schedule_discoveryto run every day at 9am for backend jobs in Melbourne."
"Evaluate how well my resume matches this job and draft a cover letter."
If you want product screenshots or GIFs later, this is the right place to add them.
📦 Installation
Option A: Claude Code
```bash
claude mcp add jobjourney \
  -e JOBJOURNEY_API_URL=https://server.jobjourney.me \
  -e JOBJOURNEY_API_KEY=jj_your_api_key_here \
  -e TRANSPORT=stdio \
  -- npx -y jobjourney-claude-plugin@latest
```

Option B: Claude Desktop
Add this to your Claude Desktop config file (claude_desktop_config.json):
```json
{
  "mcpServers": {
    "jobjourney": {
      "command": "npx",
      "args": ["-y", "jobjourney-claude-plugin@latest"],
      "env": {
        "JOBJOURNEY_API_URL": "https://server.jobjourney.me",
        "JOBJOURNEY_API_KEY": "jj_your_api_key_here",
        "TRANSPORT": "stdio"
      }
    }
  }
}
```

Playwright prerequisite
For local browser-backed sources like SEEK, install a browser once:
```bash
npx playwright install chromium
```

🚀 Quick Start
1. Connect the plugin
```bash
claude mcp add jobjourney \
  -e JOBJOURNEY_API_URL=https://server.jobjourney.me \
  -e JOBJOURNEY_API_KEY=jj_your_api_key_here \
  -e TRANSPORT=stdio \
  -- npx -y jobjourney-claude-plugin@latest
```

2. Log in to browser-backed sites when needed

From Claude:

```
Use login_jobsite with site "seek"
```

3. Run discovery

From Claude:

```
Use discover_jobs with keyword "full stack", location "Sydney", sources ["linkedin", "seek"], pages 1
```

4. Query the stored results

```
Use search_jobs with source "linkedin" and limit 5
```

5. Schedule it

```
Use schedule_discovery with keyword "full stack", location "Sydney", time "09:00", sources ["linkedin", "seek"]
```

🔍 Source Support
| Source | Status | Transport | Notes |
|---|---|---|---|
| linkedin | Active | HTTP guest scraping | Primary supported LinkedIn path |
| seek | Active | Playwright | Local browser session support |
| indeed | Planned | Playwright | Not implemented yet |
| jora | Planned | Playwright | Not implemented yet |

| ATS | Support |
|---|---|
| greenhouse | Detect + expand |
| lever | Detect + expand |
| workday | Detect only |
| smartrecruiters | Detect only |
| ashby | Detect only |
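The detect step above can be sketched as a hostname match on the external apply URL. This is a hypothetical illustration, not the plugin's actual implementation (which lives under `src/discovery/ats`); the host patterns below are assumptions based on each provider's public job-board domains.

```typescript
// Hypothetical ATS detection from an external apply URL.
// Host patterns are assumptions; the real logic lives in src/discovery/ats.
type AtsProvider = "greenhouse" | "lever" | "workday" | "smartrecruiters" | "ashby" | null;

const ATS_HOST_PATTERNS: Array<[RegExp, Exclude<AtsProvider, null>]> = [
  [/(^|\.)greenhouse\.io$/, "greenhouse"],
  [/(^|\.)lever\.co$/, "lever"],
  [/(^|\.)myworkdayjobs\.com$/, "workday"],
  [/(^|\.)smartrecruiters\.com$/, "smartrecruiters"],
  [/(^|\.)ashbyhq\.com$/, "ashby"],
];

function detectAts(applyUrl: string): AtsProvider {
  let host: string;
  try {
    host = new URL(applyUrl).hostname.toLowerCase();
  } catch {
    return null; // not a parseable URL
  }
  for (const [pattern, provider] of ATS_HOST_PATTERNS) {
    if (pattern.test(host)) return provider;
  }
  return null;
}
```

A provider returned here would map to "Detect + expand" or "Detect only" in the table above.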
🧠 How Local Discovery Works
The local discovery engine lives under src/discovery and uses one canonical job model across all sources.
LinkedIn
- Fetch guest search results
- Fetch guest job detail HTML for each posting
- Extract description, metadata, and external apply URL
- Detect ATS from the external URL
- Expand supported ATS companies
SEEK
- Launch Playwright
- Use the browser-backed source flow
- Normalize results into the same canonical job schema
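The "one canonical job model" idea can be sketched as a shared interface plus a per-source normalizer. The field names here (both the canonical shape and the raw SEEK shape) are assumptions for illustration; the real types live under `src/discovery/core`.

```typescript
// Sketch of a canonical job model shared by all sources (field names assumed).
interface CanonicalJob {
  id: string;
  title: string;
  company: string;
  location: string;
  source: "linkedin" | "seek";
  url: string;
  description?: string;
}

// Hypothetical raw result shape from the SEEK browser-backed source.
interface RawSeekJob {
  jobId: number;
  jobTitle: string;
  advertiser: string;
  suburb: string;
  shareUrl: string;
}

// Normalize one raw SEEK result into the canonical schema.
function normalizeSeekJob(raw: RawSeekJob): CanonicalJob {
  return {
    id: `seek-${raw.jobId}`,
    title: raw.jobTitle.trim(),
    company: raw.advertiser.trim(),
    location: raw.suburb,
    source: "seek",
    url: raw.shareUrl,
  };
}
```

Because every source funnels into the same `CanonicalJob` shape, downstream steps (ATS detection, storage, search) only deal with one schema.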
Storage
Local runs are stored in:
- jobs DB: `~/.jobjourney/jobs.db`
- agent heartbeat: `~/.jobjourney/agent-heartbeat.json`
The database stores:
- discovered jobs
- scrape/discovery runs
- schedules
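Resolving those two paths is straightforward with Node's standard library; a minimal sketch (the constant names are mine, not the plugin's):

```typescript
// Resolve the local storage paths described above:
// ~/.jobjourney/jobs.db and ~/.jobjourney/agent-heartbeat.json
import { homedir } from "node:os";
import { join } from "node:path";

const DATA_DIR = join(homedir(), ".jobjourney");
const JOBS_DB_PATH = join(DATA_DIR, "jobs.db");
const HEARTBEAT_PATH = join(DATA_DIR, "agent-heartbeat.json");
```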
🛠 Key Tools
This MCP exposes a broad JobJourney toolset. For local discovery, these are the most important ones:
| Tool | What it does |
|---|---|
| discover_jobs | Run the canonical multi-source discovery engine and store results locally |
| search_jobs | Query jobs already stored in local SQLite |
| schedule_discovery | Schedule recurring local discovery runs |
| get_latest_discovery_report | Show the latest discovery batch summary |
| scrape_jobs | Legacy one-off local scrape path |
| login_jobsite | Save browser login state for supported sites |
| check_login_status | Confirm browser login state |
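To make the `schedule_discovery` time parameter concrete: a daily `"HH:MM"` schedule maps to a next-run timestamp something like the sketch below. This is an assumption about how the background agent could interpret the value, not its actual implementation.

```typescript
// Hypothetical sketch: compute the next run time for a daily "HH:MM" schedule,
// as passed to schedule_discovery (e.g. "09:00").
function nextRunAt(time: string, now: Date = new Date()): Date {
  const [hours, minutes] = time.split(":").map(Number);
  const next = new Date(now);
  next.setHours(hours, minutes, 0, 0);
  if (next <= now) next.setDate(next.getDate() + 1); // slot already passed today
  return next;
}
```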
And the broader platform also includes:
- job tracking
- AI fit evaluation
- cover letter and CV generation
- mock interviews
- dashboard analytics
- coffee chat networking
- profile and document management
🏗 Architecture
```
src/
  index.ts          # FastMCP server entrypoint
  tools/            # MCP tool registration
  discovery/        # Canonical local discovery engine
    core/           # orchestration and job types
    sources/        # linkedin guest, seek browser, planned sources
    ats/            # ATS detection and supported crawlers
    analysis/       # salary, tech stack, PR, experience enrichment
    fallback/       # optional company career-page probing
    storage/        # discovery persistence adapters
    parity/         # TS vs Python parity harness
  scraper/          # legacy browser scraper layer, being phased down
  storage/sqlite/   # SQLite repos and migrations
  agent/            # background scheduling agent
  config/           # path and runtime config
```

Built with FastMCP, TypeScript, Zod, Playwright, and SQLite.
⚙️ Environment Variables
| Variable | Description | Default |
|---|---|---|
| JOBJOURNEY_API_URL | JobJourney backend base URL | https://server.jobjourney.me |
| JOBJOURNEY_API_KEY | API key for backend-authenticated features | - |
| TRANSPORT | MCP transport: stdio or httpStream | stdio |
| PORT | HTTP port when TRANSPORT=httpStream | 8080 |
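Applying the defaults from this table could look like the following sketch. The variable names and defaults come from the table; the config shape and parsing are assumptions.

```typescript
// Sketch: read the documented environment variables with their defaults.
interface PluginConfig {
  apiUrl: string;
  apiKey?: string;
  transport: "stdio" | "httpStream";
  port: number;
}

function loadConfig(env: Record<string, string | undefined> = process.env): PluginConfig {
  // TRANSPORT accepts "stdio" or "httpStream"; anything else falls back to stdio.
  const transport = env.TRANSPORT === "httpStream" ? "httpStream" : "stdio";
  return {
    apiUrl: env.JOBJOURNEY_API_URL ?? "https://server.jobjourney.me",
    apiKey: env.JOBJOURNEY_API_KEY, // no default; backend features need it
    transport,
    port: Number(env.PORT ?? 8080), // only used when transport is httpStream
  };
}
```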
🧪 Development
```bash
git clone https://github.com/Rorogogogo/jobjourney-claude-plugin.git
cd jobjourney-claude-plugin
npm install
npx playwright install chromium
npm run build
npm test
npm run typecheck
```

Useful local commands:

```bash
npm run start
npm run agent
npm run parity:discovery
npm run parity:live-smoke
```

🤝 Contributing
Contributions are welcome. Useful contribution areas right now:
- tightening the canonical `src/discovery` architecture
- implementing `indeed` and `jora`
- improving live parity coverage
- reducing remaining legacy surface in `src/scraper`
Standard flow:
```bash
git checkout -b feature/my-change
npm test
npm run typecheck
git commit -m "feat: my change"
```

🔗 Links
📄 License
MIT © JobJourney
