Unsupervised machine learning is the task of modeling the underlying structure of ``unlabeled data''. As learning algorithms are incorporated into more and more real-world applications, validating the implementations of those algorithms becomes increasingly important for software quality assurance. However, validating unsupervised machine learning programs is challenging because of the lack of a priori knowledge. In this paper, we present a metamorphic-testing-based method for validating and characterizing unsupervised machine learning programs, and conduct an empirical study on a real-world machine learning tool. The results demonstrate to what extent a program fits the profile of a specific scenario, which helps end-users and software practitioners comprehend its performance in a vivid and lightweight way. The experimental findings also reveal gaps between theory and implementation in a software artifact that could easily be overlooked by people without much practical experience. In our method, metamorphic relations serve both as a type of quality measure and as a type of guideline for selecting suitable programs.
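To illustrate the idea of metamorphic testing in this setting, the sketch below checks one plausible metamorphic relation for a clustering program: translating every data point by the same constant vector should not change the resulting partition. The `kmeans` function here is a minimal, hypothetical implementation standing in for the program under test; the specific relation and data are illustrative assumptions, not the relations or subject programs studied in the paper.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    # Minimal Lloyd's algorithm: stands in for the "program under test".
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        # Recompute each center as the mean of its assigned points.
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

# Metamorphic relation (illustrative): translating all points by a constant
# vector preserves pairwise distances, so the partition must not change.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (30, 2)),   # blob around (0, 0)
               rng.normal(3, 0.3, (30, 2))])  # blob around (3, 3)
labels_src = kmeans(X, k=2)          # source execution
labels_fol = kmeans(X + 10.0, k=2)   # follow-up execution on shifted data

# A violated relation would signal a defect without any labeled "oracle".
assert (labels_src == labels_fol).all()
```

No expected output of the clustering itself is needed: the test oracle is the relation between the two executions, which is exactly what makes the approach suitable for programs lacking a priori knowledge.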