ABSTRACT: Any result can be generated randomly, and any random result is useless. Traditional methods define uncertainty as a measure of the dispersion around the true value and rest on the hypothesis that any divergence from uniformity is the result of a deterministic event. The problem with this approach is that even non-uniform distributions can be generated randomly, and the probability of this event rises as the number of hypotheses tested increases. Consequently, there is a risk of treating a random, and therefore non-repeatable, hypothesis as deterministic. Indeed, we believe that this practice is the cause of the high number of non-reproducible results. We therefore propose that the probability of obtaining an equal or better result randomly is the true uncertainty of the statistical data: since it represents the probability that the result is merely random, and hence whether the data are useful at all, the validity of any other analysis depends on this parameter.
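The quantity the abstract proposes, the probability of obtaining an equal or better result purely by chance, can be estimated by Monte Carlo simulation. The sketch below is only an illustration of that idea, not the paper's method; the function names and the die-roll example are our own assumptions.

```python
import random

def prob_random_equal_or_better(observed_stat, statistic, sampler,
                                n_trials=10_000, seed=0):
    """Monte Carlo estimate of the probability that a purely random
    dataset yields a statistic at least as extreme as the observed one.
    `statistic` maps a dataset to a number; `sampler(rng)` draws one
    dataset under a chance-only (null) model.  All names are
    illustrative, not taken from the paper."""
    rng = random.Random(seed)
    hits = sum(statistic(sampler(rng)) >= observed_stat
               for _ in range(n_trials))
    return hits / n_trials

# Hypothetical example: is a run of 60 die rolls suspiciously non-uniform?
# The chosen statistic is the count of the most frequent face.
def max_face_count(rolls):
    return max(rolls.count(face) for face in range(1, 7))

def fair_rolls(rng, n=60):
    return [rng.randint(1, 6) for _ in range(n)]

# Observed data: one face appears 15 times instead of the expected 10.
observed = [1] * 15 + [2] * 9 + [3] * 9 + [4] * 9 + [5] * 9 + [6] * 9
p = prob_random_equal_or_better(max_face_count(observed),
                                max_face_count, fair_rolls)
# If p is not small, the apparent non-uniformity is plausibly random
# and, in the abstract's terms, the result may be useless.
```

Under this reading, a small `p` means the observed deviation from uniformity is unlikely to be a random fluke; a large `p` means the non-uniform distribution could easily have been generated randomly, which is exactly the risk the abstract warns about.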