Risk ratio (also called relative risk) is the ratio of the probability of an event in an exposed group to the probability of the same event in a comparison group, computed as the cumulative incidence in the exposed group divided by the cumulative incidence in the unexposed group. It is the effect-size estimator used in cohort studies, randomized controlled trials, and any analysis where the underlying study design produces incidence data rather than odds. This guide covers the definition with notation, the worked computation from a 2x2 table, the confidence interval on the log scale, the interpretation of values above and below 1, the critical distinction between risk ratio and odds ratio, the connection to hazard ratio and risk difference, the regression methods for adjusted risk ratios, and the most common reporting errors that peer reviewers flag in 2026.
Risk ratio is one of three closely related effect-size estimators reviewers encounter in binary-outcome analyses: risk ratio, odds ratio, and hazard ratio. The three are not interchangeable. The choice among them is determined by study design, by the timing of outcome assessment, and by whether the outcome is rare or common. A risk ratio reported when an odds ratio was the correct estimator, or vice versa, leads to misinterpretation of the magnitude of effect. The distinction is one of the most-tested concepts in epidemiology examinations and one of the most-corrected errors in peer review.
Risk Ratio Versus Odds Ratio: When Each Estimator Is the Right Choice
The single most important distinction reviewers must understand is when to use a risk ratio and when to use an odds ratio. The two estimators are mathematically related and numerically similar when the outcome is rare, but they diverge when the outcome is common, and they are produced by different study designs.
Risk ratio requires incidence data. A risk ratio can only be computed when the analyst knows the size of the at-risk population (the denominator of the risk calculation). This is the case in cohort studies, where a known group is followed forward in time, and in randomized controlled trials, where the trial protocol fixes the group size at randomization. In these designs, dividing event counts by group totals gives valid probabilities (the cumulative incidence), and the ratio of those probabilities is the risk ratio.
Odds ratio is required when only sampled cases and controls are available. In a case-control study, the investigator samples a fixed number of cases and a fixed number of controls and asks about exposure status retrospectively. The group denominators are imposed by the design, not by the underlying population, so probabilities (and therefore risks) cannot be computed directly. The odds ratio, which is the ratio of the odds of exposure among cases to the odds of exposure among controls, is the estimator that the design supports. The odds ratio from a case-control study approximates the risk ratio that would have been computed from the equivalent cohort study, but only when the rare-disease assumption holds (the outcome is rare in both exposed and unexposed groups, typically below 10 percent).
When the outcome is common, the odds ratio is more extreme than the risk ratio. A randomized controlled trial reporting a 50 percent event rate in the control group and a 25 percent event rate in the treatment group has a risk ratio of 0.5 (the treatment halves the risk) but an odds ratio of 0.33 (the odds in the treated group are one third of the odds in the control group). The odds ratio is farther from the null than the risk ratio in both directions when outcomes are common. Reporting the odds ratio as if it were the risk ratio, or interpreting the magnitude of an odds ratio as if it were a risk ratio, exaggerates the perceived effect. The default in trials with binary outcomes should be the risk ratio; the odds ratio should be reported only when a specific reason demands it (e.g., logistic regression was used for covariate adjustment and a log-binomial model failed to converge).
The two estimators are related by the identity OR = RR × (1 - R0) / (1 - R1), where R0 and R1 are the risks in the two groups. The identity tells you that when R0 and R1 are both small, OR and RR are similar; when they are large, OR is more extreme than RR.
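The identity can be checked numerically with the common-outcome trial from the paragraph above (control risk 50 percent, treatment risk 25 percent). This is a minimal sketch; the helper name `odds` is mine, not from any library.

```python
# Check the identity OR = RR * (1 - R0) / (1 - R1) against a direct
# odds-ratio calculation, using the common-outcome example in the text.

def odds(risk: float) -> float:
    """Convert a probability to odds."""
    return risk / (1.0 - risk)

R1, R0 = 0.25, 0.50                          # treatment risk, control risk
rr = R1 / R0                                 # risk ratio: 0.5
or_direct = odds(R1) / odds(R0)              # odds ratio computed directly
or_identity = rr * (1.0 - R0) / (1.0 - R1)   # odds ratio via the identity

print(round(rr, 3), round(or_direct, 3), round(or_identity, 3))
```

Both routes give the same odds ratio of about 0.33, visibly more extreme than the risk ratio of 0.5 because neither risk is small.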
Worked Computation From a 2x2 Table
Consider a randomized trial of 2,000 participants assigned 1:1 to treatment and control. After follow-up, the outcome (a specific adverse event) occurred in 50 of 1,000 treated participants and 25 of 1,000 control participants. The 2x2 table is:
| | Event | No event | Total |
|---|---|---|---|
| Treatment | 50 | 950 | 1,000 |
| Control | 25 | 975 | 1,000 |
Using the conventional notation a, b, c, d for the four cell counts (a = 50, b = 950, c = 25, d = 975):
Risk in the treatment group: R1 = a / (a + b) = 50 / 1,000 = 0.050 (5.0 percent).
Risk in the control group: R0 = c / (c + d) = 25 / 1,000 = 0.025 (2.5 percent).
Risk ratio: RR = R1 / R0 = 0.050 / 0.025 = 2.0.
The treatment doubles the risk of the adverse event relative to control. Note that the risk difference (a separate but related effect size) is 0.050 - 0.025 = 0.025, or 2.5 percentage points; the risk ratio and risk difference together give the complete picture (the relative and the absolute change). The connection to effect-size calculation for meta-analysis is direct: the log risk ratio and its sampling variance are the standard inputs to a random-effects meta-analysis of binary outcomes.
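The computation above can be sketched in a few lines; the cell counts are the ones from the table (a = 50, b = 950, c = 25, d = 975), and the variable names follow the a, b, c, d convention used in the text.

```python
# Risk ratio and risk difference from the 2x2 table in the worked example.
a, b, c, d = 50, 950, 25, 975

R1 = a / (a + b)   # risk in the treatment group: 0.050
R0 = c / (c + d)   # risk in the control group:   0.025

rr = R1 / R0       # risk ratio: 2.0
rd = R1 - R0       # risk difference: 0.025 (2.5 percentage points)

print(f"RR = {rr:.1f}, RD = {rd:.3f}")
```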
Confidence Interval on the Log Scale
The risk ratio is asymmetric on the original (linear) scale: it can range from 0 to infinity, with 1 as the null. To construct a symmetric confidence interval, the standard approach is to compute the log risk ratio, build a symmetric confidence interval around the log value, and exponentiate back to the original scale. The standard error of the log risk ratio is:
SE(log RR) = sqrt[ 1/a - 1/(a+b) + 1/c - 1/(c+d) ]
For the worked example: SE(log RR) = sqrt(1/50 - 1/1000 + 1/25 - 1/1000) = sqrt(0.020 - 0.001 + 0.040 - 0.001) = sqrt(0.058) = 0.241.
The log RR is ln(2.0) = 0.693. The 95 percent confidence interval on the log scale is 0.693 plus or minus 1.96 times 0.241, giving (0.221, 1.165). Exponentiating: exp(0.221) = 1.247 and exp(1.165) = 3.206. So the 95 percent confidence interval on the risk ratio is (1.25, 3.21), and because the interval excludes 1.0, the difference is statistically significant at conventional thresholds. A full derivation of confidence intervals on the log scale covers why the log transform is necessary and how to read the resulting interval on a forest plot.
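The three steps above (log transform, symmetric interval, back-transform) can be sketched with the standard library only, again using the worked-example counts:

```python
# 95% confidence interval for the risk ratio, built on the log scale.
import math

a, b, c, d = 50, 950, 25, 975
R1, R0 = a / (a + b), c / (c + d)
rr = R1 / R0

log_rr = math.log(rr)                              # ln(2.0) = 0.693
se = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))  # SE(log RR) = 0.241
lo = log_rr - 1.96 * se                            # symmetric on the log scale
hi = log_rr + 1.96 * se
ci = (math.exp(lo), math.exp(hi))                  # back-transform to RR scale

print(f"RR = {rr:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```

The interval is symmetric around 0.693 on the log scale but asymmetric around 2.0 on the RR scale, which is exactly what the log-scale construction is for.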
Interpretation: RR = 1, RR > 1, RR < 1
The interpretation of the risk ratio depends on its value relative to the null:
RR = 1.0 (null value). The event probability in the exposed group equals the event probability in the unexposed group. The exposure has no association with the outcome at the population level.
RR > 1.0 (positive association). The exposed group has a higher event probability than the unexposed group. The exposure is a risk factor if the outcome is undesirable. For RR = 1.5 the exposure is associated with a 50 percent relative increase in risk; for RR = 2.0 the risk is doubled; for RR = 3.0 the risk is tripled. The clinically important threshold depends on the baseline risk: doubling a 1 percent risk to 2 percent (RR = 2.0) is a 1 percentage-point absolute increase, while doubling a 30 percent risk to 60 percent is a 30 percentage-point absolute increase. Always report the absolute risk alongside the relative risk for clinical interpretation.
RR < 1.0 (negative association or protective effect). The exposed group has a lower event probability than the unexposed group. For RR = 0.5 the exposure is associated with a 50 percent relative reduction in risk; for RR = 0.8 with a 20 percent reduction. If the exposure is a treatment in a trial and the outcome is undesirable (death, recurrence, adverse event), RR less than 1 indicates a beneficial treatment effect.
Risk ratios above and below 1 are asymmetric on the linear scale: a risk ratio of 2.0 is the reciprocal of 0.5, so an exposure with RR = 2.0 for the outcome of interest has RR = 0.5 in the mirror-image comparison (swapping the exposed and unexposed groups). When meta-analyzing studies that report RRs in different directions, always re-orient them so the reference direction is consistent across studies.
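The re-orientation is a reciprocal on the RR scale and a sign flip on the log scale, which is why orientation must be made consistent before pooling log RRs. A minimal sketch, using the 2.0 versus 0.5 pair from the text:

```python
# Re-orienting a risk ratio: swapping the reference group inverts the RR
# and negates the log RR.
import math

rr_exposed_vs_unexposed = 2.0
rr_unexposed_vs_exposed = 1.0 / rr_exposed_vs_unexposed   # 0.5

# On the log scale the flip is a pure sign change, so a mis-oriented
# study would pull a pooled log RR in the wrong direction.
assert math.isclose(math.log(rr_unexposed_vs_exposed),
                    -math.log(rr_exposed_vs_unexposed))
```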