In recent years, there has been prominent discussion in the literature about the potential for overestimation of the treatment effect when a clinical trial stops at an interim analysis because the experimental treatment shows a benefit over the control. Much less attention has been paid to the converse issue: sequentially monitored clinical trials that do not stop early for benefit tend to underestimate the treatment effect. In a meta-analysis combining many studies, these two sources of bias tend to balance each other, producing an unbiased estimate of the treatment effect. For the interpretation of a single study in isolation, however, underestimation due to interim analysis may be an important consideration. In this paper, we discuss the nature of this underestimation, including theoretical and simulation results demonstrating that it can be substantial in some contexts. Furthermore, we show how a conditional approach to estimation, in which we condition on the study reaching its final analysis, can be used to validly inflate the observed treatment difference from a sequentially monitored clinical trial. Expressions for the conditional bias and information are derived, and these are used in supplied R code that computes the bias-adjusted estimate and an associated confidence interval. Alongside simulation results demonstrating the validity of the methods, we present a data analysis example from a pivotal clinical trial in cardiovascular disease. The methods will be most useful when an unbiased treatment effect estimate is critical, such as in cost-effectiveness analysis or risk prediction.
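The underestimation described above can be illustrated with a small simulation. The paper's supplied code is in R; the sketch below is an independent Python illustration under assumed values, not the paper's method or data: a hypothetical two-stage group sequential design with one interim efficacy look at an illustrative boundary, where conditioning on continuation to the final analysis pulls the mean estimate below the true effect.

```python
import numpy as np

# Illustrative two-stage group sequential trial. All parameter values are
# hypothetical, chosen only to demonstrate the conditional bias.
rng = np.random.default_rng(1)
theta = 0.3            # assumed true treatment effect
info1 = 100.0          # Fisher information at the interim analysis
info2 = 200.0          # Fisher information at the final analysis
c_interim = 2.8        # illustrative efficacy boundary on the z scale

n_sim = 200_000
# Score statistic at the interim: S1 ~ N(theta * info1, info1)
s1 = rng.normal(theta * info1, np.sqrt(info1), n_sim)
# Independent score increment accrued between interim and final analysis
s2 = rng.normal(theta * (info2 - info1), np.sqrt(info2 - info1), n_sim)

z1 = s1 / np.sqrt(info1)          # interim z statistic
theta_hat = (s1 + s2) / info2     # MLE at the final analysis

# Trials that did NOT cross the efficacy boundary, i.e. reached the end
continued = z1 < c_interim
print("unconditional mean estimate:", theta_hat.mean())           # close to theta
print("mean estimate given continuation:", theta_hat[continued].mean())  # below theta
```

Averaged over all simulated trials the final-analysis estimate is centered on the true effect, but among the trials that continue past the interim it is noticeably smaller, which is the selection effect the conditional bias adjustment is designed to remove.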