# How to interpret the sample forest plot

Interpreting the forest plot involves two steps:

- Determine the effect size, and
- Assess the level of difference (or heterogeneity) among the trials included in the meta-analysis.

##### Determine the effect size

In the example, all of the lines fall on the left-hand side of the graph (labelled ‘Favours experimental’), which tells us that, in each of the trials, the participants who received the intervention showed or reported bigger changes than the participants who received the control condition (the control condition may have been another intervention or no intervention at all).

The **black diamond** sits about halfway between 0 and -1, which means that the average **effect size** across the three trials is about -0.5.
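The centre of the diamond is typically an inverse-variance-weighted average of the individual trial effect sizes: more precise trials (smaller standard errors) count for more. The sketch below illustrates this with invented per-trial numbers; they are not the data behind the example plot.

```python
# Hypothetical sketch: the diamond's centre as an inverse-variance-weighted
# average of per-trial standardized mean differences. All numbers invented.
smds = [-0.35, -0.50, -0.42]   # per-trial standardized mean differences
ses = [0.22, 0.25, 0.20]       # their standard errors

weights = [1 / se**2 for se in ses]           # inverse-variance weights
pooled = sum(w * d for w, d in zip(weights, smds)) / sum(weights)
print(round(pooled, 2))  # → -0.42
```

Note that the pooled value lands closest to the trials with the smallest standard errors, which is exactly why a large, precise trial can dominate a meta-analysis.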

For a more precise idea of the average effect size of the three trials, the actual number is reported in the table in boldface type, under the ‘Std. Mean Difference’ column. In this case, the actual average effect size is -0.42. According to a common interpretation of effect sizes, this would suggest that the intervention being tested in these three studies had a small to medium effect size – in other words, ‘it worked’ and had a moderate effect.
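A standardized mean difference such as the -0.42 reported here is computed by dividing the difference between group means by a pooled standard deviation (Cohen's d is the most common form). The sketch below uses invented trial data, chosen only so the result comes out near -0.42; it is not the actual data from these studies.

```python
# Hypothetical example: computing a standardized mean difference (Cohen's d)
# for a single trial. All numbers below are invented for illustration.
import math

def cohens_d(mean_exp, mean_ctrl, sd_exp, sd_ctrl, n_exp, n_ctrl):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n_exp - 1) * sd_exp**2 + (n_ctrl - 1) * sd_ctrl**2)
        / (n_exp + n_ctrl - 2)
    )
    return (mean_exp - mean_ctrl) / pooled_sd

# Invented trial: lower scores mean more improvement, so a negative d
# favours the experimental group, as in the example plot.
d = cohens_d(mean_exp=12.0, mean_ctrl=15.0, sd_exp=7.0, sd_ctrl=7.4,
             n_exp=40, n_ctrl=40)
print(round(d, 2))  # → -0.42
```

By Cohen's commonly cited benchmarks (roughly 0.2 = small, 0.5 = medium, 0.8 = large in absolute value), a value of -0.42 sits between small and medium, matching the interpretation above.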

In addition to the effect size, it is also important to consider the level of heterogeneity in a meta-analysis, which is captured by the **I² statistic** (found at the bottom of the table in the example forest plot).

##### Assess heterogeneity (or difference) among the studies

Systematic reviews and meta-analyses aim to capture the overall effects of an intervention or treatment when it has been tested in multiple trials. Ideally, if multiple trials are testing the same intervention, the effects of the intervention should be consistent across all of the studies. Unfortunately, this is rarely the case, because many things can affect the results of a trial, such as researcher bias, problems with data collection, or any number of other things.

So a systematic review and meta-analysis are designed to ask the question: If these studies are all testing the same intervention, why don’t they get the same results? Are the differences caused by chance, or is there something else involved? If it is the former, then we can have confidence in the results of the meta-analysis. If the differences are not the result of chance, then we need to be cautious in interpreting the results of the meta-analysis.

Fortunately, it is easy to tell whether heterogeneity is due to chance by interpreting the **I² statistic**, which can be found at the bottom of the table in a forest plot. An I² statistic of more than 50% is considered high.
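I² is usually derived from Cochran's Q statistic: it expresses the share of the observed variation between studies that exceeds what chance alone would produce, truncated at 0%. A minimal sketch, with hypothetical Q values:

```python
# Sketch of how I² is derived from Cochran's Q (hypothetical numbers).
def i_squared(q, k):
    """I² as a percentage, given Cochran's Q and the number of studies k."""
    df = k - 1
    if q <= df:
        return 0.0  # negative values are truncated to 0%
    return 100.0 * (q - df) / q

# If Q is no larger than its degrees of freedom (here, 3 studies),
# I² = 0%, as in the example forest plot.
print(i_squared(q=1.4, k=3))  # → 0.0
print(i_squared(q=8.0, k=3))  # → 75.0, i.e. high heterogeneity
```

This makes the 50% rule of thumb concrete: I² above 50% means more than half of the between-study variation cannot be put down to chance.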

In our example forest plot, I² = 0%, so we can have confidence that the effects of the intervention being tested – which have a moderate effect size (-0.42) – are accurate and can be trusted. If the I² statistic were more than 50%, we would be less sure that the intervention consistently has a moderate effect, and we might want to read the rest of the study to see whether the authors explain why the effects differ so much across studies. This can help you to determine, for example, with whom the intervention worked (e.g. who were the participants?) and whether the intervention has been tested with people or in places that are similar to your own population, clients or context.