@cmertdalli/polisci-review
v0.1.2
LLM-agnostic political science manuscript review framework with Claude and Codex adapters.
PoliSci Review
A structured, LLM-agnostic pre-submission audit for political science manuscripts. Use it for a systematic review pass before circulation or submission.
PoliSci Review uses journal-aware personas, stage-aware standards, and evidence-grounded issue reporting to check your manuscript across nine modules — from contribution and theory to design, transparency, and journal fit. It works with Claude, Codex, or any LLM that can follow structured instructions.
Who This Is For
- PhD students preparing articles for submission
- Postdocs and early-career faculty wanting a structured pre-submission check
- Anyone who wants a more systematic review of a working paper before circulating it
- Pre-submission workshops looking for a systematic review framework
What It Checks
The review battery runs nine modules:
- Contribution and literature positioning — is the contribution real, specific, and proportional?
- Writing, structure, and framing — does the paper read well and reach its point quickly?
- Internal consistency and citation integrity — do claims, numbers, and references match throughout?
- Theory, concepts, and scope conditions — is the theory coherent, are mechanisms argued, and are scope conditions bounded?
- Measurement and data construction — do operationalizations match concepts, and is data provenance clear?
- Design and identification — does the design support the inferential claim?
- Track-specific technical audit — are methods, quant, formal, qual, or mixed-methods specifics handled correctly?
- Transparency, reproducibility, ethics, and AI-use policy — is the paper audit-ready and policy-compliant?
- Journal fit and Reviewer #2 assessment — would this survive review at the target journal?
Every issue includes a module label, location anchor, evidence status, journal-policy reference, and a recommended fix.
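As an illustration, an issue record carrying those five fields might look like the sketch below. The field names and values here are hypothetical; the authoritative shape is defined in core/issue-schema.json.

```python
# Hypothetical issue record; the real schema lives in core/issue-schema.json.
issue = {
    "module": "design-and-identification",
    "location": "Section 4.2, paragraph 3",
    "evidence_status": "quoted",
    "policy_reference": "AJPS data and replication policy",
    "recommended_fix": "State the identifying assumption explicitly.",
}

REQUIRED_FIELDS = {"module", "location", "evidence_status",
                   "policy_reference", "recommended_fix"}

def is_complete(issue: dict) -> bool:
    """Check that an issue record carries every required field."""
    return REQUIRED_FIELDS <= issue.keys()

print(is_complete(issue))  # → True
```

A check like this is also roughly what the machine-readable report format makes possible downstream: filtering or validating issues by module, location, or policy reference.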
Supported Journals
The initial release supports eight journals with verified policy profiles:
| Journal | Role |
|---------|------|
| APSR | Discipline-wide flagship |
| AJPS | Methods-forward generalist |
| JOP | Generalist, methodologically diverse |
| BJPS | Pluralistic generalist |
| PA | Political methodology |
| InternationalSecurity | IR and security flagship |
| JCR | Conflict studies flagship |
| CPS | Comparative politics flagship |
Each journal profile includes verified submission guidelines, word limits, review model, data policy, and AI-use policy where available. Source URLs and verification dates live in core/journal-manifest.json.
Journal-Agnostic Mode
You do not need to target a specific journal. Running without a journal argument applies a top-field standard — discipline-wide best practices without journal-specific policy enforcement. This is recommended for early drafts and working papers.
Quick Start
Install the Claude adapter:
```sh
npx @cmertdalli/polisci-review install claude
```

Install the Codex adapter:

```sh
npx @cmertdalli/polisci-review install codex
```

Install both:

```sh
npx @cmertdalli/polisci-review install all
```

Inspect detected install paths:

```sh
npx @cmertdalli/polisci-review doctor
```

How to Use
```sh
polisci-review [JOURNAL] [TRACK] [STAGE] [PATH]
```

Tracks: methods, quant, formal, qual, mixed (default: auto)
Stages: article, proposal, dissertation (default: auto)
Defaults: journal=top-field, track=auto, stage=auto
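A minimal sketch of how those defaults resolve (illustrative only; the actual CLI argument parsing may differ):

```python
# Illustrative default resolution for polisci-review arguments.
# The real CLI may differ; this only mirrors the documented defaults.
def resolve_args(journal=None, track=None, stage=None):
    return {
        "journal": journal or "top-field",  # journal-agnostic mode
        "track": track or "auto",
        "stage": stage or "auto",
    }

print(resolve_args())  # all three fall back to documented defaults
print(resolve_args("AJPS", "quant", "article"))
```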
Examples:

```sh
polisci-review AJPS quant article manuscript.tex
polisci-review CPS qual dissertation.tex
polisci-review manuscript.tex   # journal-agnostic, auto-detect track and stage
```

Supported Manuscript Formats
The workflow is LaTeX-first. If .tex source exists, that is the preferred review path. It can also review:
- .docx
- .md
- .txt
- .pdf with readable text or OCR
For .docx and .pdf, support is best-effort because some runtimes cannot reliably extract text from those formats. When extraction is weak, the recommended fallback is an exported .txt or .md version. If both .tex and a rendered format exist, prefer .tex.
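The preference order above can be sketched as follows. Only ".tex first" is stated in the docs; the ordering among the fallback formats is an assumption for illustration.

```python
# Preference order for manuscript formats; .tex always wins when present.
# The order among the fallbacks is an assumption for illustration.
PREFERENCE = [".tex", ".md", ".txt", ".docx", ".pdf"]

def pick_manuscript(paths):
    """Return the most review-friendly file from a list of candidates."""
    def rank(path):
        ext = "." + path.rsplit(".", 1)[-1].lower()
        return PREFERENCE.index(ext) if ext in PREFERENCE else len(PREFERENCE)
    return min(paths, key=rank)

print(pick_manuscript(["paper.pdf", "paper.tex"]))  # → paper.tex
```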
Output Contract
When the runtime allows file writes, the installed skill should write two files by default:
- review-report.md following core/report-template.md
- review-report.json following core/issue-schema.json
By default these files should be written next to the main manuscript. If the environment cannot write files, the skill should return the same content in the chat and say that file output was unavailable.
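For instance, "next to the main manuscript" could be computed like this (a sketch, not the skill's actual code):

```python
from pathlib import Path

# Sketch: place both report files beside the manuscript file.
def report_paths(manuscript: str) -> tuple[Path, Path]:
    ms = Path(manuscript)
    return (ms.with_name("review-report.md"),
            ms.with_name("review-report.json"))

md, js = report_paths("papers/manuscript.tex")
print(md)  # → papers/review-report.md
```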
Sample outputs live in examples/sample-outputs/.
Choosing a Model
The framework is model-agnostic. Claude is the default recommendation because it currently gives the strongest balance of instruction-following and long-context reading for this workflow.
That said, model performance changes fast. Before choosing a model for production use:
- Check the benchmark viewer for current rankings. Models with high "Clear Pushback" rates are better at identifying real problems rather than inventing issues.
- Prefer models with strong instruction-following and long-context capabilities. Consult current rankings rather than hard-coding a choice; specific model names go stale quickly.
- Test with your own manuscripts. No benchmark replaces domain-specific evaluation.
Do not treat any model recommendation as permanent.
Repository Layout
core/ shared review contract, journal manifest, personas, and modules
adapters/ runtime-specific source files
examples/ installation docs, sample manuscripts, and sample outputs
templates/ packaged Claude and Codex adapter templates
tests/ contract, installer, and fixture validation

Why This Exists
This project grew out of Claes Bäckman's econ-focused AI-research-feedback, which uses six parallel agents to review economics papers. I adapted the idea for political science and organized it around a 9-module review battery with journal-specific personas, stage-aware standards, and a machine-readable issue contract. The aim is a structured review framework for political science rather than a generic manuscript-commenting prompt.
Limitations
This is structured AI review, not editorial guidance from any journal. It can:
- miss obvious problems or overstate weak ones
- lag behind journal policy changes
- hallucinate issues that do not exist in the manuscript
- fail to catch subtle methodological flaws that a human reviewer would spot
Always verify citations, formulas, design claims, and current journal rules before acting on the output. Each report includes a limitations note for that reason.
Installation Details
See examples/installation.md for adapter-specific install and overwrite behavior.
Contact
Suggestions, bug reports, and journal-profile contributions are welcome. Reach me at [email protected].
License
MIT
