
@cmertdalli/polisci-review

v0.1.2

LLM-agnostic political science manuscript review framework with Claude and Codex adapters.

PoliSci Review

A structured, LLM-agnostic pre-submission audit for political science manuscripts. Use it for a systematic review pass before circulation or submission.

PoliSci Review uses journal-aware personas, stage-aware standards, and evidence-grounded issue reporting to check your manuscript across nine modules — from contribution and theory to design, transparency, and journal fit. It works with Claude, Codex, or any LLM that can follow structured instructions.

Who This Is For

  • PhD students preparing articles for submission
  • Postdocs and early-career faculty wanting a structured pre-submission check
  • Anyone who wants a more systematic review pass before circulating a working paper
  • Pre-submission workshops looking for a systematic review framework

What It Checks

The review battery runs nine modules:

  1. Contribution and literature positioning — is the contribution real, specific, and proportional?
  2. Writing, structure, and framing — does the paper read well and reach its point quickly?
  3. Internal consistency and citation integrity — do claims, numbers, and references match throughout?
  4. Theory, concepts, and scope conditions — is the theory coherent, are mechanisms argued, and are scope conditions bounded?
  5. Measurement and data construction — do operationalizations match concepts, and is data provenance clear?
  6. Design and identification — does the design support the inferential claim?
  7. Track-specific technical audit — are methods, quant, formal, qual, or mixed-methods specifics handled correctly?
  8. Transparency, reproducibility, ethics, and AI-use policy — is the paper audit-ready and policy-compliant?
  9. Journal fit and Reviewer #2 assessment — would this survive review at the target journal?

Every issue includes a module label, location anchor, evidence status, journal-policy reference, and a recommended fix.
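As a rough illustration of those five fields, an issue record might look like the sketch below. The key names and values here are hypothetical, not the package's actual schema; consult the contract files under core/ for the real shape.

```json
{
  "_note": "hypothetical example, not the shipped issue-contract schema",
  "module": "6-design-identification",
  "location": "Section 4.2, Table 3",
  "evidence": "verified-in-text",
  "policy_ref": "AJPS data availability policy",
  "fix": "Report the first-stage F statistic and justify the exclusion restriction."
}
```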

Supported Journals

The initial release supports eight journals with verified policy profiles:

| Journal | Role |
|---------|------|
| APSR | Discipline-wide flagship |
| AJPS | Methods-forward generalist |
| JOP | Generalist, methodologically diverse |
| BJPS | Pluralistic generalist |
| PA | Political methodology |
| International Security | IR and security flagship |
| JCR | Conflict studies flagship |
| CPS | Comparative politics flagship |

Each journal profile includes verified submission guidelines, word limits, review model, data policy, and AI-use policy where available. Source URLs and verification dates live in core/journal-manifest.json.
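A manifest entry might look roughly like the following. The key names, URL, and date are assumptions made for illustration; check core/journal-manifest.json in the installed package for the actual structure.

```json
{
  "_note": "hypothetical entry shape, not the shipped manifest",
  "journal": "AJPS",
  "role": "Methods-forward generalist",
  "source_url": "https://example.org/ajps-submission-guidelines",
  "verified": "2025-01-15"
}
```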

Journal-Agnostic Mode

You do not need to target a specific journal. Running without a journal argument applies a top-field standard — discipline-wide best practices without journal-specific policy enforcement. This is recommended for early drafts and working papers.

Quick Start

Install the Claude adapter:

npx @cmertdalli/polisci-review install claude

Install the Codex adapter:

npx @cmertdalli/polisci-review install codex

Install both:

npx @cmertdalli/polisci-review install all

Inspect detected install paths:

npx @cmertdalli/polisci-review doctor

How to Use

polisci-review [JOURNAL] [TRACK] [STAGE] [PATH]

Tracks: methods, quant, formal, qual, mixed (default: auto)

Stages: article, proposal, dissertation (default: auto)

Defaults: journal=top-field, track=auto, stage=auto

Examples:

polisci-review AJPS quant article manuscript.tex
polisci-review CPS qual dissertation.tex
polisci-review manuscript.tex                    # journal-agnostic, auto-detect track and stage

Supported Manuscript Formats

The workflow is LaTeX-first: if a .tex source exists, that is the preferred review path. It can also review:

  • .docx
  • .md
  • .txt
  • .pdf with readable text or OCR

For .docx and .pdf, support is best-effort because some runtimes cannot reliably extract text from those formats. When extraction is weak, the recommended fallback is an exported .txt or .md version. If both .tex and a rendered format exist, prefer .tex.

Output Contract

When the runtime allows file writes, the installed skill should write two output files by default, placed next to the main manuscript. If the environment cannot write files, the skill should return the same content in the chat and note that file output was unavailable.

Sample outputs live in examples/sample-outputs/.

Choosing a Model

The framework is model-agnostic. Claude is the default recommendation because it currently gives the strongest balance of instruction-following and long-context reading for this workflow.

That said, model performance changes fast. Before choosing a model for production use:

  1. Check current benchmark rankings. Models with high "Clear Pushback" rates are better at identifying real problems than at inventing issues.
  2. Prefer models with strong instruction-following and long-context capabilities; specific model names go stale quickly.
  3. Test with your own manuscripts. No benchmark replaces domain-specific evaluation.

Do not treat any model recommendation as permanent.

Repository Layout

core/        shared review contract, journal manifest, personas, and modules
adapters/    runtime-specific source files
examples/    installation docs, sample manuscripts, and sample outputs
templates/   packaged Claude and Codex adapter templates
tests/       contract, installer, and fixture validation

Why This Exists

This project grew out of Claes Bäckman's econ-focused AI-research-feedback, which uses six parallel agents to review economics papers. I adapted the idea for political science and organized it around a 9-module review battery with journal-specific personas, stage-aware standards, and a machine-readable issue contract. The aim is a structured review framework for political science rather than a generic manuscript-commenting prompt.

Limitations

This is structured AI review, not editorial guidance from any journal. It can:

  • miss obvious problems or overstate weak ones
  • lag behind journal policy changes
  • hallucinate issues that do not exist in the manuscript
  • fail to catch subtle methodological flaws that a human reviewer would spot

Always verify citations, formulas, design claims, and current journal rules before acting on the output. Each report includes a limitations note for that reason.

Installation Details

See examples/installation.md for adapter-specific install and overwrite behavior.

Contact

Suggestions, bug reports, and journal-profile contributions are welcome. Reach me at [email protected].

License

MIT