Misuses and misconceptions about statistical testing are widespread in public health. In particular, the dichotomous use of the P-value (e.g., results deemed significant if P<.05 and non-significant if P≥.05), coupled with i) nullism (a fixation on the null hypothesis to the exclusion of other hypotheses), ii) failure to validate the adopted statistical model, iii) failure to distinguish between statistical significance and effect size, and iv) failure to distinguish between the statistical and empirical levels, creates fertile ground for overestimating the strength of the evidence and drawing scientifically unfounded or incorrect conclusions. For these reasons, widely acknowledged and discussed in the statistical literature, this article proposes a framework that aims both to help the reader understand the epistemological boundaries of the statistical approach and to provide a structured workflow for conducting statistical analyses capable of appropriately informing public health decisions. In this regard, the novel concepts of multiple compatibility intervals and multiple surprisal intervals are discussed in detail through straightforward examples.
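The ideas of compatibility and surprisal mentioned above can be made concrete with a small numerical sketch. Under the usual Wald/normal approximation, a compatibility interval at level 1−α is the familiar interval estimate, read as the set of parameter values most compatible with the data rather than as a significance verdict, and the surprisal (S-value) is −log2(P), the bits of information the data supply against the tested hypothesis. The function names and the example numbers (a hypothetical log risk ratio of 0.405 with standard error 0.20) are illustrative assumptions, not values from the article:

```python
import math
from statistics import NormalDist

def compatibility_intervals(estimate, se, levels=(0.99, 0.95, 0.90)):
    """Wald-type intervals at several levels: {level: (lower, upper)}.

    Assumes approximate normality of the estimator; reporting several
    levels at once avoids privileging any single cutoff such as 95%.
    """
    nd = NormalDist()
    out = {}
    for level in levels:
        z = nd.inv_cdf(0.5 + level / 2)  # two-sided critical value
        out[level] = (estimate - z * se, estimate + z * se)
    return out

def s_value(estimate, se, null=0.0):
    """Surprisal against a test hypothesis, in bits: -log2(two-sided P)."""
    z = abs(estimate - null) / se
    p = 2 * (1 - NormalDist().cdf(z))
    return -math.log2(p)

# Hypothetical example: log risk ratio 0.405 (RR ~ 1.5), SE 0.20
cis = compatibility_intervals(0.405, 0.20)
s = s_value(0.405, 0.20)  # about 4.5 bits against the null
```

Here the 95% interval is roughly (0.01, 0.80) on the log scale, and the S-value of about 4.5 bits says the data are only about as surprising under the null as seeing four to five heads in a row from a fair coin, a far less categorical statement than "P<.05, significant".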