Consider nonparametric function estimation under \(L^p\)-loss. The minimax rate for estimation of the regression function over a H\"older ball with smoothness index \(\beta\) is \(n^{-\beta/(2\beta+1)}\) if \(1\leq p<\infty\) and \((n/\log n)^{-\beta/(2\beta+1)}\) if \(p=\infty\). Many known procedures attain the optimal rate for \(p=\infty\) but are suboptimal by a \(\log n\) factor for \(p<\infty\), or vice versa. In this article, we construct an estimator that simultaneously achieves the optimal rates under \(L^p\)-risk for all \(1\leq p\leq \infty\) without prior knowledge of \(\beta\). In contrast to classical wavelet thresholding methods, which kill small empirical wavelet coefficients and keep large ones, simultaneous adaptation requires that on each resolution level the largest empirical wavelet coefficients be truncated. This leads to a completely different point of view on wavelet thresholding. The crucial ingredient in the construction of the estimator is the size of the truncation level, which is linked to the unknown smoothness index. Although estimation of the smoothness index is known to be a difficult task, there is a data-driven choice of the truncation level that is sufficiently precise for our purpose.
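The contrast between classical thresholding and the truncation described above can be sketched as follows; the notation (\(\widehat d_{j,k}\) for the empirical wavelet coefficients on level \(j\), \(t_j\) for a threshold, and \(\tau_j\) for the truncation level) is illustrative and not taken from the text:

```latex
% Classical hard thresholding: small coefficients are killed, large ones kept.
\widehat d_{j,k}^{\,\mathrm{thr}}
  = \widehat d_{j,k}\,\mathbf{1}\bigl\{|\widehat d_{j,k}| > t_j\bigr\},
% Truncation: the largest coefficients on level j are capped at \tau_j.
\widehat d_{j,k}^{\,\mathrm{trunc}}
  = \operatorname{sign}(\widehat d_{j,k})\,
    \min\bigl(|\widehat d_{j,k}|,\, \tau_j\bigr).
```

In the first rule large coefficients pass through unchanged; in the second they are clipped at \(\tau_j\), which is the level whose data-driven choice is discussed in the text.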