Platt scaling


In machine learning, Platt scaling or Platt calibration is a way of transforming the outputs of a classification model into a probability distribution over classes. The method was invented by John Platt in the context of support vector machines,^{[1]} replacing an earlier method by Vapnik, but can be applied to other classification models.^{[2]} Platt scaling works by fitting a logistic regression model to a classifier's scores.
Description
Consider the problem of binary classification: for inputs x, we want to determine whether they belong to one of two classes, arbitrarily labeled +1 and −1. We assume that the classification problem will be solved by a real-valued function f, by predicting a class label y = sign(f(x)).^{[lower-alpha 1]} For many problems, it is convenient to get a probability P(y = 1 | x), i.e. a classification that not only gives an answer, but also a degree of certainty about the answer. Some classification models do not provide such a probability, or give poor probability estimates.
Platt scaling is an algorithm to solve the aforementioned problem. It produces probability estimates

P(y = 1 | x) = 1 / (1 + exp(A·f(x) + B)),

i.e., a logistic transformation of the classifier scores f(x), where A and B are two scalar parameters that are learned by the algorithm. Note that predictions can now be made according to y = 1 iff P(y = 1 | x) > ½; if B ≠ 0, the probability estimates contain a correction compared to the old decision function y = sign(f(x)).^{[3]}
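As a minimal sketch, the logistic transformation above can be written as a one-line function; the parameter values shown are illustrative only, not fitted ones:

```python
import math

def platt_probability(score, a, b):
    """Map a raw classifier score f(x) to P(y = 1 | x) = 1 / (1 + exp(A*f(x) + B)).

    A and B are the scalar parameters learned during calibration; A is
    typically negative so that larger scores yield higher probabilities.
    """
    return 1.0 / (1.0 + math.exp(a * score + b))

# With illustrative parameters A = -1, B = 0 this reduces to the standard
# sigmoid: a score of 0 maps to probability 1/2.
p = platt_probability(0.0, a=-1.0, b=0.0)
```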
The parameters A and B are estimated using a maximum likelihood method that optimizes on the same training set as that for the original classifier f. To avoid overfitting to this set, a held-out calibration set or cross-validation can be used, but Platt additionally suggests transforming the labels y to target probabilities

t₊ = (N₊ + 1) / (N₊ + 2) for positive samples (y = 1), and

t₋ = 1 / (N₋ + 2) for negative samples (y = −1).

Here, N₊ and N₋ are the number of positive and negative samples, respectively. This transformation follows by applying Bayes' rule to a model of out-of-sample data that has a uniform prior over the labels.^{[1]}
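The fitting step can be sketched as follows, assuming plain gradient descent on the cross-entropy between the calibrated outputs and the smoothed targets (a simplification for brevity; Platt's paper uses a model-trust variant of Levenberg–Marquardt, and later work a Newton method):

```python
import math

def fit_platt(scores, labels, iters=2000, lr=0.01):
    """Fit A and B so that 1 / (1 + exp(A*f + B)) matches the smoothed
    targets. Labels are coded +1 / -1. Plain gradient descent is an
    assumption made here for illustration, not Platt's own optimizer.
    """
    n_pos = sum(1 for y in labels if y == 1)
    n_neg = len(labels) - n_pos
    # Smoothed target probabilities (Bayes' rule with a uniform prior).
    t_pos = (n_pos + 1.0) / (n_pos + 2.0)
    t_neg = 1.0 / (n_neg + 2.0)
    targets = [t_pos if y == 1 else t_neg for y in labels]
    # Platt's suggested starting point: A = 0, B = log((N- + 1) / (N+ + 1)).
    a, b = 0.0, math.log((n_neg + 1.0) / (n_pos + 1.0))
    for _ in range(iters):
        grad_a = grad_b = 0.0
        for f, t in zip(scores, targets):
            p = 1.0 / (1.0 + math.exp(a * f + b))
            # Derivative of the cross-entropy w.r.t. (A*f + B) is t - p.
            grad_a += (t - p) * f
            grad_b += t - p
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b
```

On a toy set where positive samples receive higher scores, the fitted A comes out negative, so the calibrated probability increases with the score.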
Platt himself suggested using the Levenberg–Marquardt algorithm to optimize the parameters, but a Newton algorithm was later proposed that should be more numerically stable.^{[4]}
Analysis
Platt scaling has been shown to be effective for SVMs as well as other types of classification models, including boosted models and even naive Bayes classifiers, which produce distorted probability distributions. It is particularly effective for max-margin methods such as SVMs and boosted trees, which show sigmoidal distortions in their predicted probabilities, but has less of an effect with well-calibrated models such as logistic regression, multilayer perceptrons and random forests.^{[2]}
An alternative approach to probability calibration is to fit an isotonic regression model to an ill-calibrated probability model. This has been shown to work better than Platt scaling, in particular when enough training data is available.^{[2]}
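The isotonic alternative can be sketched with the classic pool-adjacent-violators algorithm; this pure-Python version (the function name and details are this sketch's own, not from a library) fits a nondecreasing map from classifier scores to probabilities, with labels coded 0/1:

```python
def pava_calibrate(scores, labels):
    """Pool Adjacent Violators: fit a nondecreasing sequence of
    calibrated probabilities to 0/1 labels ordered by classifier score.
    Returns the sorted scores and their fitted probabilities.
    """
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    # Each block holds [sum of labels, sample count]; adjacent blocks are
    # merged whenever their means would violate monotonicity.
    merged = []
    for i in order:
        merged.append([float(labels[i]), 1])
        while len(merged) > 1 and merged[-2][0] * merged[-1][1] > merged[-1][0] * merged[-2][1]:
            s, c = merged.pop()
            merged[-1][0] += s
            merged[-1][1] += c
    # Expand block means back to per-sample fitted values, in score order.
    fitted = []
    for s, c in merged:
        fitted.extend([s / c] * c)
    return [scores[i] for i in order], fitted
```

In practice a library implementation such as scikit-learn's `IsotonicRegression` would typically be used instead of hand-rolling the algorithm.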
See also
 Relevance vector machine: probabilistic alternative to the support vector machine
Notes
 ↑ See sign function. The label for f(x) = 0 is arbitrarily chosen to be either zero or one.
References
 ↑ ^{1.0} ^{1.1} Platt, John (1999). "Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods" (PDF). Advances in Large Margin Classifiers. 10 (3): 61–74.
 ↑ ^{2.0} ^{2.1} ^{2.2} Niculescu-Mizil, Alexandru; Caruana, Rich (2005). Predicting good probabilities with supervised learning (PDF). ICML. doi:10.1145/1102351.1102430.
 ↑ Olivier Chapelle; Vladimir Vapnik; Olivier Bousquet; Sayan Mukherjee (2002). "Choosing multiple parameters for support vector machines" (PDF). Machine Learning. 46: 131–159. doi:10.1023/a:1012450327387.
 ↑ Lin, Hsuan-Tien; Lin, Chih-Jen; Weng, Ruby C. (2007). "A note on Platt's probabilistic outputs for support vector machines" (PDF). Machine Learning. 68 (3): 267–276. doi:10.1007/s10994-007-5018-6.