Understanding manuscript rejection reasons is one of the most valuable investments a researcher can make before submitting to a journal. Rejection rates at high-impact journals range from 60% to 95%, and even mid-tier journals reject more than half of all submissions. The reasons behind these rejections follow predictable patterns, and most are preventable once you know what editors and reviewers are looking for.
Most researchers experience rejection at some point in their career. The difference between those who publish consistently and those who struggle is not innate talent but systematic awareness of what causes manuscripts to fail. Peer review evaluates manuscript quality across methodology, analysis, interpretation, and presentation. When any of these dimensions falls below the journal's threshold, rejection follows. This guide breaks down the most common rejection reasons at both the desk review and peer review stages, with specific attention to systematic reviews and meta-analyses, and provides actionable strategies to reduce your rejection risk.
Desk Rejection vs Peer Review Rejection
Before a manuscript reaches external reviewers, it must pass through the editor's initial screening. This gatekeeping step, known as desk rejection, eliminates manuscripts that clearly do not meet the journal's requirements. Understanding the distinction between desk rejection and peer review rejection is critical because the causes, timelines, and remedies differ substantially.
Desk rejection occurs within days of submission. The handling editor evaluates whether the manuscript falls within the journal's scope, meets basic formatting and reporting requirements, and demonstrates sufficient quality to warrant external review. At high-impact journals, desk rejection rates can exceed 50%. The editor makes this decision without sending the manuscript to reviewers, which means no detailed feedback is provided, only a brief reason for the decision.
Peer review rejection occurs after external experts have evaluated the manuscript in detail. This process takes weeks to months and produces specific, actionable feedback. Peer review rejection reasons are typically methodological: flawed study design, inadequate statistical analysis, unsupported conclusions, or insufficient novelty. While peer review rejection is more time-consuming, the feedback it generates is invaluable for strengthening the manuscript before resubmission elsewhere.
The following table summarizes the key differences:
| Dimension | Desk Rejection | Peer Review Rejection |
|---|---|---|
| Timeline | 1-7 days | 4-16 weeks |
| Decision maker | Editor | External reviewers + editor |
| Feedback detail | Minimal (1-2 sentences) | Detailed (multiple pages) |
| Common causes | Scope, formatting, language | Methodology, statistics, novelty |
| Resubmission | Different journal | Same journal (if invited) or different |
| Prevention | Pre-submission checklist | Rigorous methodology + reporting |
Both types of rejection are part of the publishing process, and both can be minimized through careful preparation. The sections below detail the specific causes of each.
Top Reasons for Desk Rejection
Desk rejection is frustrating because it happens quickly and provides little feedback. However, the causes are well-documented and almost entirely preventable. Here are the most common reasons editors reject manuscripts without external review.
Scope mismatch is the single most preventable reason for desk rejection. Every journal publishes an aims-and-scope statement that defines the topics, methods, and populations it covers. Submitting a qualitative nursing study to a quantitative epidemiology journal wastes everyone's time. Before submitting, read the journal's aims and scope carefully, review the last two years of published articles, and, if uncertain, send a pre-submission inquiry to the editor. This five-minute step prevents weeks of wasted effort.
Poor English and readability causes immediate desk rejection at many international journals. Editors are not language teachers. If the manuscript contains frequent grammatical errors, unclear sentence structure, or awkward phrasing that obscures the scientific content, editors will reject it rather than attempt to decode the meaning. Non-native English speakers should invest in professional language editing before submission. This is not about accent or style; it is about whether the science can be understood.
Formatting non-compliance seems trivial but signals a lack of attention to detail. Each journal publishes detailed author guidelines covering reference style, word count limits, figure formats, and manuscript structure. Submitting a manuscript formatted for a different journal, or ignoring the word count limit by 30%, tells the editor you did not read the guidelines. Some journals have dedicated editorial assistants who check compliance before the editor even sees the manuscript.
Incomplete reporting is increasingly grounds for desk rejection, especially in clinical and health research. Many journals now require completed reporting checklists at submission: CONSORT for randomized controlled trials, STROBE for observational studies, and PRISMA 2020 for systematic reviews (Page et al., 2021). Submitting without the required checklist, or submitting a checklist with missing items, results in immediate return. Understanding how to structure a medical manuscript according to the IMRAD framework helps prevent structural reporting failures.
Ethical concerns trigger desk rejection when the manuscript lacks required ethics approvals, informed consent documentation, or trial registration. For clinical research, editors verify that the study was approved by an institutional review board and registered on a recognized platform (e.g., ClinicalTrials.gov). For systematic reviews, PROSPERO registration is increasingly expected. Missing ethics documentation is a non-negotiable rejection.
Duplicate or overlapping publication, also known as self-plagiarism, occurs when substantial portions of the manuscript have been published previously. Journals use plagiarism detection software (iThenticate, Turnitin) to screen submissions. If the similarity index exceeds the journal's threshold, the manuscript is desk-rejected and the authors may be flagged for investigation.
Top Reasons for Peer Review Rejection
When a manuscript survives desk screening and reaches external reviewers, a different set of concerns emerges. Peer review rejection reasons center on the scientific rigor of the work rather than formatting or scope.
Methodological weaknesses are the leading cause of peer review rejection across disciplines. Reviewers evaluate whether the study design is appropriate for the research question, whether the methods are described with sufficient detail for replication, and whether potential confounders have been addressed. Common methodological failures include inadequate control groups, selection bias in participant recruitment, and failure to blind assessors. When reviewers identify a fundamental design flaw, the manuscript is typically rejected rather than sent for revision, because design flaws cannot be fixed after data collection.
Statistical errors and inappropriate analyses rank among the most cited rejection reasons. Reviewers with statistical expertise (and many journals assign at least one statistical reviewer) scrutinize sample size calculations, the choice of statistical tests, handling of missing data, and the interpretation of results. Common failures include using parametric tests on non-normal data, failing to adjust for multiple comparisons, reporting p-values without effect sizes and confidence intervals, and inadequate power analysis. Matching your statistical analysis to your study design is non-negotiable. Resources on statistical analysis best practices, including our medical writing services guide, can help you prepare a manuscript that withstands statistical scrutiny.
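As a concrete illustration of reporting an effect size with a confidence interval rather than a p-value alone, here is a minimal sketch using only the Python standard library. The data and group names are hypothetical, and the interval uses a normal approximation (a Welch t-interval would be more precise for small samples):

```python
from statistics import mean, stdev, NormalDist
import math

def mean_diff_ci(group_a, group_b, alpha=0.05):
    """Mean difference between two groups with a (1 - alpha) confidence
    interval, using a Welch-style standard error and a normal approximation."""
    na, nb = len(group_a), len(group_b)
    diff = mean(group_a) - mean(group_b)
    se = math.sqrt(stdev(group_a) ** 2 / na + stdev(group_b) ** 2 / nb)
    z = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for a 95% CI
    return diff, (diff - z * se, diff + z * se)

# hypothetical outcome scores for two study arms
treated = [5.1, 4.8, 6.0, 5.5, 4.9, 5.7, 5.2, 5.8]
control = [4.2, 4.5, 4.0, 4.8, 4.3, 4.6, 4.1, 4.4]
diff, (lo, hi) = mean_diff_ci(treated, control)
print(f"Mean difference: {diff:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Reporting the interval alongside the point estimate lets reviewers judge both the size and the precision of the effect, which a bare p-value cannot convey.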
Overstated conclusions occur when the Discussion section claims more than the data support. Reviewers are trained to identify overclaiming: conclusions that extend beyond the study population, causal claims from observational data, or clinical recommendations from preliminary findings. The fix is precise language: "suggests" rather than "proves," "associated with" rather than "causes," and explicit acknowledgment of limitations that constrain interpretation.
Insufficient novelty or contribution leads to rejection when reviewers determine that the manuscript does not advance knowledge beyond what is already published. This is particularly common in fields with extensive existing literature. To demonstrate novelty, your Introduction must clearly articulate the gap in existing knowledge, your methods must address that gap, and your Discussion must explain how your findings change the current understanding.
Inadequate literature review signals to reviewers that the authors are unfamiliar with their field. Missing key references, especially recent, high-impact studies, suggests the research was conducted in a vacuum. Reviewers expect the Introduction to situate the study within current knowledge and the Discussion to compare findings with relevant prior work. A comprehensive literature search strategy is essential.
Poor presentation and organization encompasses unclear writing, illogical flow, missing methods details, and figures or tables that fail to communicate results effectively. Even methodologically sound research can be rejected if reviewers cannot follow the argument. The IMRAD structure (Introduction, Methods, Results, and Discussion) exists for a reason: it provides a predictable framework that reviewers can navigate efficiently.
Manuscript Rejection Reasons in Systematic Reviews
Systematic reviews and meta-analyses face all the general rejection reasons above, plus a set of discipline-specific concerns. In our manuscript revision work, incomplete PRISMA 2020 compliance and missing PROSPERO registration account for nearly half of SR rejections we see. Understanding these SR-specific rejection reasons is essential for researchers in evidence synthesis.
Non-compliance with PRISMA 2020 reporting guidelines is the most common SR-specific rejection reason. The PRISMA 2020 statement (Page et al., 2021) retained the 27-item checklist but expanded many items with sub-items, added requirements for reporting certainty assessments, and introduced a new flow diagram template. Journals increasingly require a completed PRISMA checklist at submission, and reviewers verify compliance item by item. Our PRISMA 2020 flow diagram tool helps researchers generate compliant flow diagrams before submission. Following PRISMA 2020 is the methodological standard for systematic reviews; deviating from it is grounds for rejection at most evidence-synthesis journals.
Inadequate search strategy undermines the foundation of any systematic review. Reviewers expect a comprehensive, reproducible search across multiple databases (minimum: MEDLINE, Embase, and Cochrane Library for health reviews), with the full search syntax reported for at least one database. Searches limited to a single database, missing grey literature, or using overly narrow terms raise concerns about selection bias in study identification. The search underpins the evidence base: if it missed relevant studies, the conclusions cannot be trusted.
Flawed risk of bias assessment occurs when reviewers find that the quality assessment of included studies was conducted improperly. Using an inappropriate risk of bias tool (e.g., the Newcastle-Ottawa Scale for randomized trials instead of RoB 2), failing to have two independent assessors, or not incorporating bias findings into the synthesis all trigger rejection. Risk of bias informs evidence quality, and reviewers examine whether the authors adequately considered how study quality affects pooled estimates.
Missing or inappropriate meta-analysis is flagged when reviewers identify statistical issues in the quantitative synthesis. Common problems include pooling clinically heterogeneous studies, using fixed-effects models when significant heterogeneity is present, failing to conduct sensitivity analyses, and not investigating sources of heterogeneity through subgroup analyses or meta-regression. Reviewers scrutinize whether the statistical approach matches the data.
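To make the heterogeneity and model-choice issues concrete, here is a minimal sketch of DerSimonian-Laird random-effects pooling with Cochran's Q and the I² statistic, using only the Python standard library. The effect sizes and variances are hypothetical log odds ratios, not data from any real review:

```python
import math

def pool_random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling.

    Returns the pooled effect, its standard error, and I^2 (the percentage
    of total variability attributable to between-study heterogeneity).
    """
    w = [1 / v for v in variances]                    # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                     # between-study variance
    w_re = [1 / (v + tau2) for v in variances]        # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, se, i2

# hypothetical log odds ratios and within-study variances from five studies
effects = [-0.4, -0.2, -0.6, 0.1, -0.3]
variances = [0.04, 0.06, 0.05, 0.08, 0.03]
pooled, se, i2 = pool_random_effects(effects, variances)
print(f"Pooled effect {pooled:.2f} (SE {se:.2f}), I^2 = {i2:.0f}%")
```

When τ² is zero the weights collapse to the fixed-effect weights, which is the intuition behind the reviewer expectation: a fixed-effects model is only defensible when heterogeneity is negligible.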
Absent certainty-of-evidence assessment is increasingly a rejection trigger. The GRADE framework is the standard for assessing certainty of evidence in systematic reviews; it evaluates risk of bias, imprecision, inconsistency, indirectness, and publication bias. Reviewers and editors at evidence-synthesis journals now expect a Summary of Findings table with GRADE ratings. Omitting this assessment signals unfamiliarity with current methodological standards (Cochrane Handbook for Systematic Reviews of Interventions, 2024).
No protocol registration weakens the manuscript's credibility. Registration on PROSPERO or another recognized platform (e.g., OSF) before conducting the review demonstrates that the review was planned prospectively. Reviewers scrutinize unregistered reviews for selective reporting: the concern that results may have influenced which outcomes or analyses were reported. PRISMA 2020 reinforces this transparency by requiring disclosure of protocol registration status.
| SR-Specific Issue | Why Reviewers Flag It | How to Prevent It |
|---|---|---|
| PRISMA non-compliance | Missing checklist items | Complete checklist before submission |
| Inadequate search | Risk of missing studies | Multi-database, reproducible syntax |
| Poor risk of bias | Wrong tool or single assessor | RoB 2 / ROBINS-I, dual assessment |
| Statistical issues | Inappropriate pooling | Test heterogeneity, justify model |
| No GRADE assessment | Missing certainty evaluation | Summary of Findings table |
| No PROSPERO registration | Selective reporting concern | Register before screening |
How to Reduce Your Manuscript Rejection Risk
Prevention is more efficient than revision. The following strategies address the most common rejection reasons and can be implemented before you submit.
Choose the right journal first. Scope mismatch is the most preventable rejection reason. Create a shortlist of 3-5 target journals by reading their aims and scope statements, reviewing recent publications, and checking impact factor and acceptance rates. Use journal finder tools (Elsevier Journal Finder, Springer Journal Suggester) to match your manuscript to appropriate outlets. If uncertain, send a pre-submission inquiry with your title and abstract; editors will tell you within days whether your topic fits.
Complete the relevant reporting checklist. Before submitting, work through every item on the applicable reporting guideline: PRISMA 2020 for systematic reviews, CONSORT for randomized trials, STROBE for observational studies, ARRIVE for animal research. These checklists were designed by methodologists to prevent common reporting failures. If you cannot complete an item, address the gap in the manuscript or explain why the item is not applicable.
Conduct a pre-submission peer review. Have a colleague outside your immediate team read the manuscript critically. Fresh eyes catch scope mismatches, logical gaps, unclear methods descriptions, and overclaimed conclusions that the authors, who have been immersed in the project for months, no longer notice. Ideally, find someone with expertise in your methods and someone in an adjacent field who can assess accessibility.
Verify your statistical analysis. Ensure every statistical test is appropriate for your data type and study design. Report effect sizes with confidence intervals, not just p-values. Justify your sample size with a prospective power analysis. If your analysis is complex, consider having a biostatistician review it before submission. Statistical errors are among the hardest to fix after peer review because they may require new data collection.
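As a sketch of the prospective power analysis mentioned above, the following computes an approximate per-group sample size for a two-sample comparison using the normal approximation and the Python standard library. The effect size is an assumed value; dedicated tools (e.g., G*Power) give exact t-based answers and should be preferred for the manuscript itself:

```python
from statistics import NormalDist
import math

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sample comparison,
    given a standardized effect size (Cohen's d), two-sided alpha,
    and target power, via the normal approximation."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for two-sided test
    z_beta = z.inv_cdf(power)            # quantile for the desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# assumed medium effect (d = 0.5), 5% two-sided alpha, 80% power
print(n_per_group(0.5))
```

Note how sensitive the answer is to the assumed effect size: halving d roughly quadruples the required n, which is why reviewers ask for the justification behind the chosen effect size, not just the final number.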
Proofread and format meticulously. Use the journal's author guidelines as a checklist: reference style, word count, figure resolution, supplementary material format. Have a native English speaker proofread the manuscript if English is not your first language. Run the manuscript through grammar-checking software as a baseline, but do not rely on it for scientific terminology.
Write a compelling cover letter. The cover letter is your first communication with the editor. State the research question, the key finding, why the work is important, and why this journal is the right venue. A good cover letter demonstrates that you understand the journal's audience and have chosen it deliberately, not as one of twenty simultaneous submissions.
Register your study protocol. For systematic reviews, register on PROSPERO before beginning screening. For clinical trials, register on ClinicalTrials.gov or an equivalent registry before enrollment. Protocol registration protects against accusations of selective reporting and is required by most journals.
What to Do After Rejection
Rejection is not the end of a manuscript's life; it is a redirection. How you respond to rejection determines whether your research ultimately reaches publication. Learning how to respond to peer reviewers with a structured point-by-point approach is essential when rejection includes an invitation to revise and resubmit.
Read the rejection feedback carefully. If the rejection came after peer review, you have received detailed, expert feedback on your manuscript. Read every comment without defensiveness. Distinguish between comments that identify genuine weaknesses (which must be addressed) and comments that reflect reviewer preferences (which may be addressed at your discretion). Categorize comments by severity: fundamental design concerns, analytical issues, presentation problems, and minor points.
Determine whether revision is needed before resubmission. If the reviewers identified real methodological problems, submit the manuscript elsewhere only after fixing those problems. Sending a flawed manuscript to another journal wastes time and risks accumulating a reputation for poor-quality submissions. If the rejection was based on scope mismatch or insufficient novelty for that specific journal, the manuscript may be ready for a more appropriate venue without major changes.
Select a new target journal strategically. Do not simply work down your list from highest to lowest impact factor. Consider which journal's scope, audience, and recent publications best match your revised manuscript. A well-targeted submission to a mid-tier journal is far more efficient than serial rejections at journals that are unlikely to accept the work.
Revise thoroughly. Address every substantive concern raised by the previous reviewers, even though you are submitting to a different journal. The peer review community is small; the same reviewers may evaluate your manuscript again. If they see the same problems they flagged previously, rejection is virtually certain. Use the reviewer feedback as a roadmap for strengthening the manuscript.
Consider professional revision support. When reviewer feedback identifies issues outside your expertise (statistical reanalysis, reporting compliance, methodology strengthening), professional support can fill the gap efficiently. Research Gold's reviewer response service helps researchers address complex reviewer demands and prepare manuscripts for successful resubmission. Whether the issue is PRISMA compliance, risk of bias methodology, or statistical modeling, targeted expert input often makes the difference between another rejection and acceptance.
Respond to "major revision" decisions with urgency and thoroughness. A major revision decision is not a rejection; it is an invitation. The editor and reviewers see potential in your work and are willing to invest additional review time. Studies suggest that 40-60% of manuscripts receiving major revision decisions are ultimately accepted when authors address all comments comprehensively. Treat every reviewer comment with respect, even those you disagree with. Provide point-by-point responses, reference exact page and line numbers for changes, and include a cover letter that summarizes the key revisions.
Learn from the experience. Track which journals rejected your manuscript, what reasons were given, and how long the process took. Over time, this data reveals patterns: perhaps your methods need a specific type of justification, or your Discussion sections tend toward overclaiming. Pattern recognition transforms rejection from a random setback into a systematic improvement process.
Manuscript rejection is a universal experience in academic publishing. The researchers who publish successfully are not those who avoid rejection; they are those who understand why manuscripts get rejected, take systematic steps to prevent the most common failures, and respond to rejection with strategic revision rather than discouragement. By addressing the patterns described in this guide, from scope matching and reporting compliance to statistical rigor and precise conclusions, you position your manuscript for the strongest possible outcome at every submission.