Distributed lag models (DLMs) express the cumulative and delayed dependence between pairs of time-indexed response and explanatory variables. In practical applications, users of DLMs examine the estimated influence of a series of lagged covariates to assess patterns of dependence. Much recent methodological work has sought to develop flexible parameterisations for smoothing the associated lag parameters that avoid overfitting. However, this paper finds that some widely used DLMs introduce bias in the estimated lag influence and are sensitive to the maximum lag, which is typically chosen in advance of model fitting. Simulations show that bias and misspecification are dramatically reduced by generalising the smoothing model to allow varying penalisation of the lag influence estimates. The resulting model is shown to have substantially fewer effective parameters and lower bias, providing the user with confidence that the estimates are robust to prior model choice.
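To make the setting concrete, the following is a minimal sketch of a distributed lag regression with a single global ridge penalty; this illustrates the basic DLM structure only, not the varying-penalisation model proposed here. The simulated exposure series, the decaying lag curve, and the maximum lag `L` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an exposure series and a response driven by a known lag curve.
T, L = 500, 10                                  # series length, maximum lag (fixed in advance)
x = rng.normal(size=T)
true_theta = np.exp(-np.arange(L + 1) / 3.0)    # smoothly decaying lag influence (assumed)

# Lagged design matrix: column l holds x shifted back by l time steps.
X = np.column_stack(
    [np.concatenate([np.zeros(l), x[: T - l]]) for l in range(L + 1)]
)
y = X @ true_theta + rng.normal(scale=0.5, size=T)

# Penalised least squares with one global ridge penalty lam:
#   theta_hat = (X'X + lam * I)^{-1} X'y
# A varying-penalisation model would replace lam * I with a
# lag-specific penalty matrix instead of a single scalar.
lam = 1.0
theta_hat = np.linalg.solve(X.T @ X + lam * np.eye(L + 1), X.T @ y)

print(np.round(theta_hat, 2))
```

With ample data relative to the number of lags, the penalised estimates track the true lag curve; the sensitivity discussed above arises when `L` is misspecified or the single penalty over- or under-smooths parts of the lag profile.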