In this paper we develop flexible Krylov methods for efficiently computing regularized solutions to large-scale linear inverse problems with an \(\ell_2\) fit-to-data term and an \(\ell_p\) penalization term, for \(p\geq 1\). First, we approximate the \(p\)-norm penalization term by a sequence of \(2\)-norm penalization terms using adaptive regularization matrices in an iteratively reweighted norm fashion, and then we exploit flexible preconditioning techniques to efficiently incorporate the weight updates. To handle general (non-square) \(\ell_p\)-regularized least-squares problems, we introduce a flexible Golub-Kahan approach and exploit it within a Krylov-Tikhonov hybrid framework. The key benefits of our approach over existing optimization methods for \(\ell_p\) regularization are that efficient projection methods replace inner-outer iteration schemes and that expensive regularization parameter selection techniques can be avoided. Theoretical insights are provided, and numerical results from image deblurring and tomographic reconstruction illustrate the benefits of this approach compared to well-established methods. Furthermore, we show that our approach for \(p=1\) can be used to efficiently compute solutions that are sparse with respect to some transformations.
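To make the iteratively reweighted norm idea concrete, the following is a minimal dense sketch of how a \(p\)-norm penalty can be approximated by a sequence of weighted \(2\)-norm (Tikhonov) penalties with adaptive diagonal regularization matrices. The function name `irn_lp`, the smoothing parameter `tau`, and the direct linear solves are illustrative assumptions for a toy problem; they are not the flexible Krylov implementation described here, which instead incorporates the weight updates through flexible preconditioning and never forms these systems explicitly.

```python
import numpy as np

def irn_lp(A, b, lam=1e-3, p=1.0, iters=10, tau=1e-8):
    """Toy iteratively reweighted norm (IRN) sketch for
        min_x ||A x - b||_2^2 + lam * ||x||_p^p,  p >= 1,
    solved as a sequence of Tikhonov problems with adaptive
    regularization matrices L_k = diag(w_k).
    (Illustrative dense solver, not the paper's Krylov method.)"""
    # initial iterate: plain 2-norm least-squares solution
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        # adaptive weights chosen so that ||L_k x||_2^2 =
        # sum_i (x_i^2 + tau^2)^{(p-2)/2} x_i^2  ~  ||x||_p^p
        # (tau is a smoothing parameter avoiding division by zero)
        w = (x**2 + tau**2) ** ((p - 2) / 4)
        # weighted 2-norm (Tikhonov) subproblem via normal equations
        x = np.linalg.solve(A.T @ A + lam * np.diag(w**2), A.T @ b)
    return x
```

For \(p=1\) this reweighting drives small entries toward zero, so on a toy problem with a sparse exact solution the iterates recover that sparsity, which is the behavior the flexible preconditioning machinery reproduces at scale.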