We revisit data fusion, i.e., the problem of integrating noisy data from multiple sources by estimating the accuracies of those sources, and show that the simple model of logistic regression captures most existing approaches to data fusion. This puts data fusion on a solid statistical footing and yields solutions with rigorous theoretical guarantees. Expanding on logistic regression, we introduce \model, a framework that converts data fusion into a learning and inference problem over discriminative probabilistic models. In contrast to previous approaches, which rely on complex generative models, discriminative models allow us to decouple the specification of a data fusion model from the algorithm used to learn its parameters. This decoupling lets us extend data fusion to incorporate domain-specific features that are indicative of source accuracy, and to design data fusion approaches whose source accuracy estimates exhibit \(5\times\) lower error than those of competing baselines. We also design an optimizer that automatically selects the best algorithm for learning the model's parameters. We validate this optimizer on multiple real-world datasets and show that it selects the best learning algorithm in almost all cases.
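As a concrete (if simplified) illustration of this connection, suppose each source \(s\) reports a vote \(x_s \in \{-1, +1\}\) on the truth of a fact and errs independently of the other sources with accuracy \(\alpha_s\); the notation here is ours, introduced only for exposition. Under this standard independent-errors assumption, the posterior probability that the fact is true takes the logistic form
\[
  P(y = 1 \mid \mathbf{x}) \;=\; \sigma\!\Big(\sum_{s} w_s x_s + b\Big),
  \qquad
  \sigma(z) = \frac{1}{1 + e^{-z}},
  \qquad
  w_s = \log\frac{\alpha_s}{1 - \alpha_s},
\]
where \(b\) is the prior log-odds of the fact being true. Each source's weight is exactly its accuracy expressed in log-odds, so estimating source accuracies reduces to fitting the parameters of a logistic regression.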