This paper revisits the concept of composite likelihood from the perspective of probabilistic inference, and proposes a generalization, called super composite likelihood, for sharper inference in multiclass problems. It is argued that, besides providing a new interpretation and a general justification of na\"ive Bayes procedures, super composite likelihood yields a much wider class of discriminative models suitable for unsupervised and weakly supervised learning.