# @jiffylabs/jiffy-chat-mcp
MCP server that lets Claude Desktop / Cursor / Claude Code ask natural-language questions about your Jiffy org and get cited answers.
Three tools:
- `jiffy_query({ question, time_range?, tier_filter? })` — answers questions like "what are the riskiest artifacts on my fleet in the last 24 hours". Returns an answer plus typed citations (inventory items, approvals, audit rows).
- `jiffy_recommend({ finding_id })` — given an inventory or approval UUID, proposes the next admin action (approve / quarantine / escalate / remediate) with a deep link back into the Jiffy UI.
- `jiffy_next_step({ context })` — open-ended "now what?". Maps the current UI context (page, artifact id) to the most likely next action.
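The tool inputs above can be written out as TypeScript shapes. These interface names are ours, sketched from the descriptions — the package may publish different types, and the `time_range` format is a guess:

```typescript
// Sketch of the three tool input shapes as described above.
// Interface names are ours, not the package's published types.
interface JiffyQueryInput {
  question: string;
  time_range?: string;  // format not specified in the README; "24h" is a guess
  tier_filter?: string;
}

interface JiffyRecommendInput {
  finding_id: string;   // inventory or approval UUID
}

interface JiffyNextStepInput {
  context: { page: string; artifact_id?: string };
}

// Example call payload for jiffy_query:
const payload: JiffyQueryInput = {
  question: "what are the riskiest artifacts on my fleet in the last 24 hours",
  time_range: "24h",
};
```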
Answers come from a bounded, templated intent router on the server side — no free-form LLM-to-SQL, no prompt-injection surface into your database.
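A bounded router of that kind might look like the sketch below. The patterns and function names are illustrative only, not the package's actual implementation — the point is that unmatched questions fall through to a fixed fallback, so user text never reaches a query language directly:

```typescript
// Illustrative sketch of a bounded, templated intent router: each intent is a
// hand-written pattern; anything unmatched falls through to fallback_general.
const ROUTES: Array<[RegExp, string]> = [
  [/riskiest artifacts/i, "top_risky_artifacts_last_24h"],
  [/approvals.*pending|pending.*approvals/i, "pending_approvals_count"],
  [/drift/i, "endpoints_with_drift"],
  [/changed.*(7|seven) days/i, "what_changed_last_7d"],
];

function routeIntent(question: string): string {
  for (const [pattern, intent] of ROUTES) {
    if (pattern.test(question)) return intent;
  }
  return "fallback_general"; // canned "here's what I can answer" response
}
```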
## Install
```shell
pnpm add -g @jiffylabs/jiffy-chat-mcp
```

Or run without installing:

```shell
pnpm dlx @jiffylabs/jiffy-chat-mcp
```

## Configure
The server reads an API key from `JIFFY_API_KEY` or `~/.jiffy/config.json` — the same shape the Jiffy Intake CLI uses, so one key covers both.
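The documented lookup order (environment variable first, then the config file) could be sketched as follows; the helper name and error text are ours, not the package's:

```typescript
// Sketch of the documented key lookup: JIFFY_API_KEY wins, then
// ~/.jiffy/config.json. Helper name is ours, not the package's.
import { readFileSync } from "node:fs";
import { join } from "node:path";
import { homedir } from "node:os";

function resolveApiKey(): string {
  if (process.env.JIFFY_API_KEY) return process.env.JIFFY_API_KEY;
  const configPath = join(homedir(), ".jiffy", "config.json");
  const { apiKey } = JSON.parse(readFileSync(configPath, "utf8"));
  if (!apiKey) throw new Error(`no apiKey in ${configPath}`);
  return apiKey;
}
```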
```shell
export JIFFY_API_KEY=jtp_live_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
export JIFFY_API_URL=https://jiffylabs.app  # optional; this is the default
```

Or drop a file at `~/.jiffy/config.json`:
```json
{
  "apiKey": "jtp_live_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
  "apiUrl": "https://jiffylabs.app"
}
```

## Wire into Claude Desktop
Edit `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows):
```json
{
  "mcpServers": {
    "jiffy-chat": {
      "command": "jiffy-chat",
      "env": {
        "JIFFY_API_KEY": "jtp_live_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
      }
    }
  }
}
```

Restart Claude Desktop. The three `jiffy_*` tools appear in the tool picker.
## Wire into Cursor
Cursor uses the same MCP shape. Edit `~/.cursor/mcp.json`:
```json
{
  "mcpServers": {
    "jiffy-chat": {
      "command": "jiffy-chat"
    }
  }
}
```

## Wire into Claude Code
Claude Code picks up MCP servers from `~/.claude/settings.json` (or the project-local equivalent):
```json
{
  "mcpServers": {
    "jiffy-chat": {
      "command": "jiffy-chat",
      "env": { "JIFFY_API_KEY": "jtp_live_..." }
    }
  }
}
```

## Example prompts
- "Jiffy, what are the riskiest artifacts on my fleet in the last 24 hours?"
- "How many approvals are pending?"
- "Which endpoints are showing drift?"
- "Which publishers have the lowest score?"
- "Is `clawdbot` installed anywhere?"
- "Who installed `@acme/web-search`?"
- "What changed in the last 7 days?"
## Supported intents (V.12a)
The server routes questions to one of ~15 bounded intents. Unknown questions return a canned "here are the questions I can answer" response.
- `top_risky_artifacts_last_24h`
- `pending_approvals_count`
- `endpoints_with_drift`
- `publishers_with_lowest_score`
- `find_artifact_by_name`
- `what_changed_last_7d`
- `who_installed_artifact`
- `compliance_coverage_by_framework`
- `recent_vet_denies`
- `agents_by_tier`
- `artifacts_missing_attestation`
- `runtime_invocation_rate`
- `critical_findings_open`
- `clawdbot_installed`
- `fallback_general`
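The canned fallback described above might render its help text like this. The wording and function name are ours, sketched to show the behavior, not lifted from the package:

```typescript
// Sketch: fallback_general enumerates the bounded intents so the caller can
// re-ask within the supported set. Wording is illustrative.
const SUPPORTED_INTENTS = [
  "top_risky_artifacts_last_24h",
  "pending_approvals_count",
  "endpoints_with_drift",
  "publishers_with_lowest_score",
  "what_changed_last_7d",
  // ...remaining intents from the list above omitted for brevity
];

function cannedFallback(): string {
  const lines = SUPPORTED_INTENTS.map((i) => `- ${i.replace(/_/g, " ")}`);
  return ["Here are the questions I can answer:", ...lines].join("\n");
}
```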
## Non-goals (V.12a)
- No free-form LLM-backed SQL. Every query is a hand-written Supabase filter.
- No streaming responses.
- `jiffy_recommend` and `jiffy_next_step` are read-only. `execute()` exists on the server with `dry_run: true` as the default — wiring it to live mutations is a follow-up.
- No conversation memory across tool calls.
- No Slack surface — that's V.12b.
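The dry-run default mentioned in the non-goals could look roughly like this. The types and signature are our guess at the shape, not the server's actual code:

```typescript
// Sketch of a dry-run-by-default execute(): it plans the action but only
// applies it when dry_run is explicitly false. Shape is ours; the server's
// real signature may differ, and live mutations are a stated follow-up.
type AdminAction = {
  kind: "approve" | "quarantine" | "escalate" | "remediate";
  target: string; // inventory or approval UUID
};

function execute(action: AdminAction, opts: { dry_run?: boolean } = {}) {
  const dryRun = opts.dry_run ?? true; // default: never mutate
  if (dryRun) {
    return { applied: false, plan: `${action.kind} ${action.target}` };
  }
  // Wiring live mutations is a follow-up (see Non-goals); refuse for now.
  throw new Error("live execute() not wired up in V.12a");
}
```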
## License
Apache-2.0.
