@hanna84/mcp-writing

v2.12.15

Published

MCP service for AI-assisted reasoning and editing on long-form fiction projects

Downloads

10,978

mcp-writing

An MCP service for AI-assisted reasoning and editing on long-form fiction projects.

Designed to work with OpenClaw but compatible with any MCP-capable AI gateway.

Quick launch

For local stdio MCP clients, run the published package directly:

WRITING_SYNC_DIR=/path/to/sync-dir DB_PATH=./writing.db npx -y @hanna84/mcp-writing

The CLI wrapper defaults to stdio transport and adds the Node 22 SQLite flag automatically when needed.
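For MCP clients that register servers through the common `mcpServers` configuration convention (used by several desktop MCP clients), an equivalent entry might look like this. The server key `writing` is arbitrary and the paths are placeholders:

```json
{
  "mcpServers": {
    "writing": {
      "command": "npx",
      "args": ["-y", "@hanna84/mcp-writing"],
      "env": {
        "WRITING_SYNC_DIR": "/path/to/sync-dir",
        "DB_PATH": "./writing.db"
      }
    }
  }
}
```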

What it does

Instead of feeding an entire manuscript to an AI and hoping it fits in the context window, mcp-writing builds a structured index from your scene files. The AI queries that index first — finding relevant characters, beats, and loglines — then loads only the specific prose it needs.

Current status:

  • Phases 1-3 complete: metadata-first analysis, sidecar-backed metadata maintenance, and AI-assisted prose editing with confirmation and git history.
  • Phase 4 partially delivered: review bundles and Scrivener Direct extraction are complete; embedding search and reference-doc querying are intentionally deferred.
  • Phase 5 in progress: OpenClaw runtime integration is active.

Who it is for

  • Novelists and writing teams working on long manuscripts with many scenes, characters, and continuity constraints.
  • AI-assisted editing workflows where you want targeted context retrieval instead of full-manuscript prompting.
  • Projects that need traceable, reversible edits with metadata that stays synchronized as drafts evolve.

Documentation

| Guide | Description |
|---|---|
| docs/setup.md | Prerequisites, first-time setup, Scrivener import, native sync format |
| docs/docker.md | Docker Compose, OpenClaw integration, SSH hardening |
| docs/data-ownership.md | Which tools write which files, import safety rules |
| docs/tools.md | Full tool reference (auto-generated from source) |
| docs/development.md | Running locally, tests, environment variables, troubleshooting |

Usage scenarios

1) Continuity pass before sending chapters to beta readers

Goal: catch inconsistencies before sharing pages.

  1. Run sync after your latest writing session.
  2. Ask find_scenes for scenes involving a specific character or tag (for example, all scenes tagged injury or promise).
  3. Use get_arc to review that character's ordered progression across the manuscript.
  4. Load only the suspect scenes with get_scene_prose.
  5. Attach follow-up notes with flag_scene where continuity needs a fix.

Outcome: you review one narrative thread at a time instead of rereading the entire novel to find contradictions.
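Under the hood, each step is an MCP `tools/call` request. A sketch of step 2, assuming `find_scenes` accepts a `tags` argument (the argument names here are illustrative; see docs/tools.md for the authoritative schema):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "find_scenes",
    "arguments": { "tags": ["injury"] }
  }
}
```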

2) Planning and tracking subplot beats during revisions

Goal: make sure subplot threads progress intentionally and resolve on time.

  1. Run list_threads for the project.
  2. Use get_thread_arc to inspect scene order and beat labels for each thread.
  3. When a beat is missing, call upsert_thread_link to add or update it on the right scene.
  4. Re-run get_thread_arc to confirm pacing and coverage.

Outcome: subplot structure stays visible and auditable, which reduces dropped threads in late drafts.
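As an MCP request, step 3 might look like the following. The `thread_id`, `scene_id`, and `beat` argument names (and their values) are illustrative, not the authoritative schema:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "upsert_thread_link",
    "arguments": {
      "thread_id": "subplot-heist",
      "scene_id": "ch07-s02",
      "beat": "midpoint-reversal"
    }
  }
}
```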

3) Tightening scene metadata after heavy prose edits

Goal: keep indexes accurate without manually re-tagging everything.

  1. After rewriting scenes, call enrich_scene to re-derive lightweight metadata from current prose.
  2. Use update_scene_metadata for intentional editorial fields (for example, beat, POV, timeline position, and tags).
  3. Use search_metadata and find_scenes to verify scenes are discoverable under the expected filters.

Outcome: your AI assistant can reliably find the right scenes without drifting from the manuscript.
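A sketch of step 2 as a `tools/call` request, with hypothetical field names for the editorial metadata (consult docs/tools.md for the real parameters):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "update_scene_metadata",
    "arguments": {
      "scene_id": "ch03-s01",
      "pov": "Mara",
      "beat": "inciting-incident",
      "tags": ["promise", "river"]
    }
  }
}
```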

4) Safe AI-assisted line edits with rollback

Goal: let AI propose prose edits without losing control of your draft.

  1. Ask the AI to call propose_edit for a specific scene.
  2. Review the staged diff.
  3. Accept with commit_edit or reject with discard_edit.
  4. Use list_snapshots (and optional snapshot_scene) to inspect or preserve revision history.

Outcome: you get AI speed with explicit approval and recoverable history for every applied change.
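The propose/commit handshake can be sketched as two consecutive `tools/call` requests; the `instruction` and `scene_id` argument names are assumptions for illustration:

```json
[
  {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "tools/call",
    "params": {
      "name": "propose_edit",
      "arguments": {
        "scene_id": "ch05-s03",
        "instruction": "Tighten the dialogue in the opening exchange"
      }
    }
  },
  {
    "jsonrpc": "2.0",
    "id": 5,
    "method": "tools/call",
    "params": {
      "name": "commit_edit",
      "arguments": { "scene_id": "ch05-s03" }
    }
  }
]
```

The staged diff returned by the first call is what you review in step 2; only the second call applies it.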

5) Refreshing scene-character links after imports or major rewrites

Goal: rebuild scene-to-character links in a controlled way after imported prose changes or metadata drift.

  1. Start with enrich_scene_characters_batch using the default dry_run=true to preview inferred links for a project, chapter, or explicit scene list.
  2. Poll get_async_job_status until the batch job completes, then review job.result.results for changed scenes, ambiguous matches, and partial failures.
  3. Spot-check a few affected scenes with get_scene_prose if the changes touch important continuity or cast-heavy chapters.
  4. Re-run enrich_scene_characters_batch with dry_run=false once the preview looks correct.
  5. If you want a destructive overwrite instead of additive merge behavior, use replace_mode=replace with confirm_replace=true deliberately.

Outcome: character-link maintenance becomes a preview-first batch operation instead of a one-off regex script or manual sidecar cleanup.
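A preview run of step 1 as a `tools/call` request. The `dry_run` flag comes from the tool's documented behavior above; the `chapter` scoping argument is a hypothetical name for the chapter filter:

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "enrich_scene_characters_batch",
    "arguments": {
      "chapter": "ch07",
      "dry_run": true
    }
  }
}
```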

License

AGPL-3.0-only