
Cross-Study Evidence Comparator

For analysts comparing multiple studies to find consensus, contradictions, and evidence quality fast.

🔬
Rating: 4.8
Difficulty: Intermediate
Format: User Prompt
Variables: 4

Best for these models

โ— Claude Opus 4.6 โ— Gemini 3.1 Pro โ— ChatGPT (GPT-5.4)

📋 The Prompt

User Prompt .txt

🔒 Prompt available in download

Get the full prompt text in a downloadable .txt file. Free, no signup required.


Variables to fill in

{{SOURCE_SET}} - the studies or sources you want compared
{{COMPARISON_DIMENSIONS}} - the criteria to compare them on (e.g. sample, method, outcome)
{{OUTPUT_DEPTH}} - quick verdict or detailed matrix
{{EVIDENCE_THRESHOLD}} - the minimum evidence standard for a "strong" conclusion

About this prompt

Cross-Study Evidence Comparator is built for fast, disciplined comparison across multiple research sources. Instead of producing isolated summaries, it forces the model to line up studies side by side, compare claims, and identify where results converge or diverge. This is useful when you need a source comparison before writing a report, memo, or academic synthesis.

The template is especially helpful for consultants, researchers, and strategy teams who must evaluate whether evidence is consistent enough to support a recommendation. It can compare sample sizes, methods, populations, timeframes, and conclusions, then explain why two papers may disagree. That makes it valuable for evidence-based decision-making, due diligence, and technical research briefs where precision matters more than volume.

Customize the prompt by listing your sources in {{SOURCE_SET}} and naming the comparison criteria in {{COMPARISON_DIMENSIONS}}. Use {{OUTPUT_DEPTH}} to control whether you want a quick verdict or a detailed matrix. If you need a stricter standard, add {{EVIDENCE_THRESHOLD}} so the model separates strong conclusions from weak or speculative ones. The output is designed to be easy to scan, cite, and reuse in downstream writing.
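As a sketch, filling the four variables works like any placeholder substitution. The template string below is a stand-in (the full prompt text ships in the download), and the study names, dimensions, and threshold are invented placeholders:

```python
# Hypothetical illustration of filling the template's four variables.
# The template text and all values below are invented placeholders.
template = (
    "Compare these sources: {{SOURCE_SET}}\n"
    "Dimensions: {{COMPARISON_DIMENSIONS}}\n"
    "Depth: {{OUTPUT_DEPTH}}\n"
    "Evidence threshold: {{EVIDENCE_THRESHOLD}}\n"
)

variables = {
    "SOURCE_SET": "Smith 2021 (RCT, n=400); Lee 2023 (cohort, n=2,100)",
    "COMPARISON_DIMENSIONS": "sample size, method, population, main outcome",
    "OUTPUT_DEPTH": "detailed matrix",
    "EVIDENCE_THRESHOLD": "peer-reviewed, n >= 100",
}

# Replace each {{NAME}} marker with its value.
prompt = template
for name, value in variables.items():
    prompt = prompt.replace("{{" + name + "}}", value)

print(prompt)
```

The same substitution can of course be done by hand in any text editor; the point is simply that every `{{...}}` marker should be replaced before the prompt is sent to the model.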

Key features

  • Compares sources side by side across methods, claims, and populations
  • Ranks evidence quality instead of treating all studies equally
  • Explains contradictions with methodological context
  • Creates a clear comparison matrix for reports and memos
  • Useful for evidence-based decisions and literature triage

Best for

  • → Consultants validating claims across competing studies
  • → Policy teams comparing public health or social science evidence
  • → Researchers deciding which papers deserve deeper reading

Tips

  • 💡 Use explicit comparison dimensions like sample, method, and outcome
  • 💡 Ask for confidence levels when evidence quality matters
  • 💡 Include only studies relevant to the same question or population

What you'll get

A comparison matrix showing each study's methods, sample, main claim, and confidence level, followed by consensus points, disagreements, and a final evidence verdict. The result helps you decide what is well-supported, what is uncertain, and what should be excluded from decision-making.

