If you read scientific papers for work, study, or curiosity, you’ve probably felt the pain: dense writing, missing context, unclear methods, and “so what?” conclusions. These prompts turn ChatGPT into a structured scientific research article reviewer, helping you quickly assess quality, extract insights, spot weaknesses, and translate findings into action.
What this prompt set will do
- Help you evaluate quality and credibility instead of just summarizing.
- Turn complex papers into structured reviews with clear verdicts and action items.
- Surface methodological, statistical, and reporting weaknesses fast.
- Convert research into practical insights for real decisions and next steps.
1) The “Peer-Review Style” Full Report (journal reviewer mode)
Prompt:
You are an expert scientific peer reviewer. Review the paper I provide (title + abstract + key sections, or full text). Produce a structured review with:
Summary (150–250 words): research question, approach, key results, conclusion.
Novelty & significance: what’s new vs prior work; why it matters; who benefits.
Methodology evaluation: design appropriateness, sampling, controls, randomization, blinding (if relevant), measurement validity, confounders.
Statistics & analysis: suitability of tests/models, assumptions, effect sizes, confidence intervals, multiple comparisons, power, missing data handling.
Results integrity: do results support claims; alternative explanations; robustness; sensitivity checks that should exist.
Clarity & reproducibility: missing details, data/code availability, protocol transparency.
Major concerns (ranked): top 5 issues with evidence and why each matters.
Minor concerns: clarity, formatting, missing citations, figure readability.
Actionable recommendations: exactly what authors should change/add.
Decision recommendation: Accept / Minor / Major / Reject + justification.
Ask up to 5 targeted questions only if essential info is missing; otherwise proceed with best-effort review based on what I provide.
Paper content: [PASTE HERE]
2) Rapid “Is This Paper Trustworthy?” Credibility Audit (10-minute scan)
Prompt:
You are a scientific credibility auditor. I’ll paste a paper’s abstract + methods + results highlights. Your job is to produce a credibility scorecard with:
Study type (RCT, cohort, case-control, cross-sectional, meta-analysis, modeling, etc.) and what that implies about causality.
Top 7 red flags (p-hacking risk, weak controls, selection bias, HARKing, small N, unclear outcomes, questionable metrics, overclaiming).
Top 5 trust signals (preregistration, power calc, robust stats, replication, transparent data, strong controls).
Claim-to-evidence mapping: list each major claim → the exact evidence that supports it → confidence level (High/Med/Low).
What would change your confidence: 3 missing analyses or details.
Final verdict: “Likely reliable / mixed / questionable” with a 1–10 confidence rating.
Paper excerpt: [PASTE HERE]
3) “Methods Stress Test” (tear the methods apart constructively)
Prompt:
You are a methods specialist. Critique ONLY the methods of the paper below.
Output:
Design fit: is the design appropriate for the question?
Population & sampling: inclusion/exclusion, representativeness, selection bias risks.
Variables & measurement: operational definitions, validity, reliability, measurement error.
Controls & confounding: what is controlled; what should be controlled; likely confounders.
Procedural clarity: what’s missing that prevents replication (list missing details explicitly).
Improved method: propose a stronger alternative design (and why), within realistic constraints.
Include a “Replicability Checklist” at the end with yes/no + missing items.
Methods section: [PASTE HERE]
4) “Stats & Results Sanity Check” (numbers, assumptions, and overclaims)
Prompt:
You are a biostatistician / quantitative reviewer. Evaluate the statistics and results below.
Deliver:
Whether chosen tests/models match the data and design.
Any assumption violations likely (normality, independence, multicollinearity, proportional hazards, etc.).
Whether they reported: effect sizes, CIs, corrections for multiple testing, robustness checks.
Any signs of: overfitting, leakage (ML), cherry-picking, selective reporting.
A “What I would re-run” section: the exact analyses to repeat, including alternatives.
A plain-language summary: “What the numbers actually say” vs “what the authors claim.”
Stats/results excerpt: [PASTE HERE]
5) “Figure & Table Interrogation” (extract the truth from visuals)
Prompt:
You are a scientific figure detective. I will paste figure captions, table data, and/or results text.
For each figure/table:
Identify the main takeaway in one sentence.
Note what’s missing (axes, units, error bars, n, statistical annotations, baselines).
Determine whether it supports the stated claim (Yes/Partially/No) and why.
List 2–3 alternative interpretations.
End with: “If I could only keep 3 visuals, they would be…” and explain.
Figures/tables: [PASTE HERE]
6) “Literature Positioning & Novelty Check” (what’s actually new?)
Prompt:
You are a domain literature strategist. Using only the paper text I provide (and not browsing), assess:
What the authors claim is novel vs what seems incremental.
The implied prior art: list the 5–10 “missing but likely relevant” prior work themes to cite (by topic, not specific papers).
Whether the framing is fair or biased (straw-manning, ignored counterevidence).
A rewritten “Related Work / Background” paragraph that is more balanced and precise.
Paper excerpt: [PASTE HERE]
7) “Reproducibility & Open Science Checklist” (can someone replicate this?)
Prompt:
You are a reproducibility reviewer. Score the paper on replicability (0–100) using:
Protocol clarity
Data availability & structure
Code availability & environment
Parameter reporting
Materials/instruments
Ethical approvals (if human/animal)
Reporting completeness (CONSORT/STROBE/PRISMA-style expectations depending on study type)
For each category: give score + what’s missing + how to fix.
End with a “Minimal Replication Packet” list of exactly what the authors should provide.
Paper content: [PASTE HERE]
8) “Peer Review Comments That Authors Can Actually Use” (copy-paste ready)
Prompt:
You are writing reviewer comments for authors. Make them specific, actionable, and polite.
Output sections:
Major issues (max 6): Each must include (a) what’s wrong, (b) why it matters, (c) exactly what to do, (d) what evidence would satisfy you.
Minor issues (max 10): clarity, definitions, missing details.
Suggested experiments/analyses (max 5): feasible and high value.
Rewrite suggestions: 3 key sentences that should be rewritten, with improved versions.
Paper excerpt: [PASTE HERE]
9) “Reviewer #2 Mode (but fair)” (harsh scrutiny without being toxic)
Prompt:
Be a skeptical but fair “Reviewer #2.” Your goal is to pressure-test the paper’s claims.
Deliver:
10 sharp questions the paper must answer to be convincing.
The strongest alternative explanation for the results.
The single biggest fatal flaw (if any) and what would fix it.
The most convincing part of the paper (so the critique is balanced).
A final judgment: “Convincing / not yet / not convincing” with reasons.
Paper excerpt: [PASTE HERE]
10) “Translate Paper → Practical Use” (for clinicians, engineers, policy, etc.)
Prompt:
You are a science-to-practice translator. Summarize the paper for a real-world decision-maker.
Output:
What problem does this solve?
What did they do? (simple but accurate)
What is the strongest evidence?
What are the limitations and risks of applying it?
Where it works / where it won’t (boundary conditions).
Action checklist: what someone should do next if they want to use these findings.
Also include a “What would I need to see before I bet money on this?” section.
Paper excerpt: [PASTE HERE]
How to use this
- Paste title + abstract first, then add methods/results if you want deeper critique.
- Use prompt #2 for quick filtering, then #1 or #8 for a full review.
- If you have tables/figures, use prompt #5 to extract “what the data truly says.”
- Ask ChatGPT to quote the exact sentence it’s critiquing (from your pasted text) to keep feedback grounded.
Tips to get best results
- Include: study design, sample size, primary outcome, and statistical approach (even in shorthand).
- If the paper is long, paste methods + results + limitations (highest ROI sections).
- Tell ChatGPT your context: “I’m a PhD student,” “I’m a clinician,” “I’m doing a systematic review,” etc.
- Request output format constraints: “max 300 words,” “bullet points only,” or “rank issues by severity.”
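If you review papers regularly, the paste-and-prompt workflow above is easy to script. Here is a minimal Python sketch that fills a prompt’s [PASTE HERE] slot and trims over-long excerpts before you paste the result into ChatGPT (or send it through an API client). The template wording, the 12,000-character limit, and the `build_prompt` helper are all illustrative assumptions, not part of any official API:

```python
# Minimal sketch: fill a review-prompt template with a paper excerpt.
# The template text, character limit, and function name are illustrative
# assumptions -- adapt them to whichever prompt from this list you use.

CREDIBILITY_AUDIT = (
    "You are a scientific credibility auditor. I'll paste a paper's "
    "abstract + methods + results highlights. Produce a credibility "
    "scorecard with red flags, trust signals, claim-to-evidence mapping, "
    "and a final verdict with a 1-10 confidence rating.\n\n"
    "Paper excerpt:\n{excerpt}"
)

def build_prompt(template: str, excerpt: str, max_chars: int = 12_000) -> str:
    """Insert the excerpt and truncate it so the final prompt stays
    within a typical context window (the 12k limit is a rough guess)."""
    excerpt = excerpt.strip()
    if len(excerpt) > max_chars:
        excerpt = excerpt[:max_chars] + "\n[...excerpt truncated...]"
    return template.format(excerpt=excerpt)

if __name__ == "__main__":
    print(build_prompt(CREDIBILITY_AUDIT, "Abstract: We ran an RCT (n=40)..."))
```

From there, paste the resulting string into ChatGPT, or pass it as the user message to whatever chat API you use.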