Feature learning and deep learning have drawn great attention in recent years as a way of transforming input data into more effective representations using learning algorithms. Such interest has also grown in the area of music information retrieval (MIR), particularly in music classification tasks such as auto-tagging. While a number of promising results have been reported, it remains poorly understood which acoustic properties the learned feature representations capture and how they relate to the semantic meaning of music. In this paper, we attempt to demystify the learned audio features using a bag-of-features model with two learning stages. The first stage learns to project local acoustic patterns of musical signals onto a high-dimensional sparse space in an unsupervised manner and summarizes an audio track as a bag of features. The second stage maps the bag of features to semantic tags using deep neural networks in a supervised manner. For the first stage, we focus on analyzing the learned local audio features by quantitatively measuring their acoustic properties and interpreting the statistics in a semantic context. For the second stage, we examine training choices and tuning parameters for the neural networks and show how the deep representations of the bag of features become more discriminative. Through this analysis, we not only provide a better understanding of the learned local audio features but also demonstrate the effectiveness of the deep bag-of-features model on the music auto-tagging task.
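The two-stage pipeline described above can be sketched in miniature. This is a hedged illustration, not the paper's implementation: the dictionary here is simply sampled from random frames (standing in for an unsupervised learner such as sparse coding or K-means), the soft-threshold encoder, the max-pooling summary, the synthetic tracks and tags, and all dimensions (`frame_dim`, `n_bases`, the hidden width) are assumptions made for the sake of a self-contained example.

```python
import numpy as np

rng = np.random.default_rng(0)

# ---- Stage 1: unsupervised sparse encoding + bag-of-features pooling ----
# Hypothetical setup: each "track" is a sequence of local spectral frames.
n_tracks, frames_per_track, frame_dim, n_bases = 20, 50, 40, 128
tracks = rng.standard_normal((n_tracks, frames_per_track, frame_dim))

# Dictionary of L2-normalised frames sampled at random -- a stand-in for
# a dictionary learned without supervision (e.g. sparse coding, K-means).
idx = rng.choice(n_tracks * frames_per_track, n_bases, replace=False)
D = tracks.reshape(-1, frame_dim)[idx]
D /= np.linalg.norm(D, axis=1, keepdims=True)

def encode(frames, D, alpha=0.5):
    """Soft-threshold encoding: project frames onto the dictionary and
    keep only activations above alpha, yielding sparse codes."""
    return np.maximum(frames @ D.T - alpha, 0.0)

def bag_of_features(track, D):
    """Max-pool the sparse codes over time to summarise one track."""
    return encode(track, D).max(axis=0)

X = np.stack([bag_of_features(t, D) for t in tracks])  # (n_tracks, n_bases)

# ---- Stage 2: supervised mapping to tags with a small neural network ----
n_tags = 5
Y = (rng.random((n_tracks, n_tags)) < 0.3).astype(float)  # synthetic tags

W1 = 0.1 * rng.standard_normal((n_bases, 32)); b1 = np.zeros(32)
W2 = 0.1 * rng.standard_normal((32, n_tags)); b2 = np.zeros(n_tags)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses, lr = [], 0.05
for _ in range(300):  # plain gradient descent on binary cross-entropy
    H = np.maximum(X @ W1 + b1, 0.0)   # ReLU hidden layer
    P = sigmoid(H @ W2 + b2)           # independent per-tag probabilities
    losses.append(-np.mean(Y * np.log(P + 1e-9)
                           + (1 - Y) * np.log(1 - P + 1e-9)))
    dZ2 = (P - Y) / n_tracks
    dW2, db2 = H.T @ dZ2, dZ2.sum(0)
    dH = (dZ2 @ W2.T) * (H > 0)
    dW1, db1 = X.T @ dH, dH.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

A deeper network, a learned (rather than sampled) dictionary, and real audio frames would replace the toy ingredients here, but the flow is the same: sparse local codes are pooled into a fixed-length bag of features per track, which a supervised network then maps to tag probabilities.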