The new field of adaptive data analysis seeks to provide algorithms and provable guarantees for models of machine learning that allow researchers to reuse their data, a practice that falls outside the usual statistical paradigm of static data analysis. In 2014, Dwork, Feldman, Hardt, Pitassi, Reingold, and Roth introduced one potential model and proposed several solutions based on differential privacy. In previous work in 2016, we described a problem with this model and instead proposed a Bayesian variant, but also found that the analogous Bayesian methods cannot achieve the same statistical guarantees as in the static case. In this paper, we prove the first positive results for the Bayesian model, showing that with a Dirichlet prior, the posterior mean algorithm indeed matches the statistical guarantees of the static case. We conjecture that this is true for any conjugate prior from the exponential family, but can only prove this in special cases. The main ingredient, Theorem 4, shows that the \(\text{Beta}(\alpha,\beta)\) distribution is subgaussian with variance proxy \(O(1/(\alpha+\beta+1))\), a concentration result also of independent interest. Unlike most moment-based concentration techniques, which bound the centered moments, our proof uses a simple condition on the raw moments of a positive random variable.
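Recall that a random variable \(X\) with mean \(\mu\) is subgaussian with variance proxy \(\sigma^2\) if \(\mathbb{E}[e^{\lambda(X-\mu)}] \le e^{\lambda^2\sigma^2/2}\) for all real \(\lambda\). The sketch below (not part of the paper) numerically spot-checks this condition for a few \(\text{Beta}(\alpha,\beta)\) parameter choices, using the illustrative assumption that the constant hidden in the \(O(\cdot)\) is 1, i.e. variance proxy \(1/(\alpha+\beta+1)\); the MGF \(\mathbb{E}[e^{\lambda X}] = {}_1F_1(\alpha;\,\alpha+\beta;\,\lambda)\) is evaluated by its power series.

```python
import math

def beta_mgf(a, b, lam, terms=200):
    """E[exp(lam * X)] for X ~ Beta(a, b), via the power series of
    Kummer's confluent hypergeometric function 1F1(a; a+b; lam)."""
    total, term = 1.0, 1.0
    for n in range(terms):
        term *= (a + n) / (a + b + n) * lam / (n + 1)
        total += term
    return total

def is_subgaussian(a, b, proxy, lams, tol=1e-9):
    """Check E[exp(lam*(X - mu))] <= exp(lam^2 * proxy / 2) on a grid of lam."""
    mu = a / (a + b)  # mean of Beta(a, b)
    return all(
        beta_mgf(a, b, lam) * math.exp(-lam * mu)
        <= math.exp(lam * lam * proxy / 2) + tol
        for lam in lams
    )

# Spot-check a few parameter choices with proxy 1/(alpha + beta + 1)
# (taking the constant inside the O(.) to be 1 -- an assumption made
# here for illustration, not a constant stated in the abstract).
grid = [l / 2 for l in range(-20, 21)]  # lam in [-10, 10]
for a, b in [(1.0, 1.0), (0.5, 0.5), (2.0, 5.0), (10.0, 3.0)]:
    assert is_subgaussian(a, b, 1.0 / (a + b + 1.0), grid)
```

The series converges for every \(\lambda\) because its terms eventually decay factorially, so the check needs no external numerical libraries; a finite grid of \(\lambda\) values is of course only evidence, not a proof.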