F1 score


In the statistical analysis of binary classification, the F1 score (also F-score or F-measure) is a measure of a test's accuracy. It considers both the precision p and the recall r of the test: p is the number of correct positive results divided by the number of all positive results returned by the test, and r is the number of correct positive results divided by the number of positive results that should have been returned. The F1 score is the harmonic mean of precision and recall, reaching its best value at 1 (perfect precision and recall) and its worst at 0.

The traditional F-measure or balanced F-score (F1 score) is the harmonic mean of precision and recall:

F_1 = 2 \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}.
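
As a concrete illustration, all three quantities can be computed directly from the counts of true positives, false positives, and false negatives. The following is a minimal Python sketch; the function name and counts are illustrative, not from any particular library:

    # F1 from raw counts: tp = true positives, fp = false positives,
    # fn = false negatives (illustrative names, not a library API).
    def f1_score(tp, fp, fn):
        precision = tp / (tp + fp)   # correct positives / all returned positives
        recall = tp / (tp + fn)      # correct positives / all actual positives
        return 2 * precision * recall / (precision + recall)

    # Example: 8 TP, 2 FP, 4 FN gives precision 0.8, recall 2/3, F1 ~ 0.727.
    print(f1_score(8, 2, 4))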

The general formula for positive real β is:

F_\beta = (1 + \beta^2) \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{(\beta^2 \cdot \mathrm{precision}) + \mathrm{recall}}.

In terms of type I and type II errors, the formula becomes:

F_\beta = \frac {(1 + \beta^2) \cdot \mathrm{true\ positive} }{(1 + \beta^2) \cdot \mathrm{true\ positive} + \beta^2 \cdot \mathrm{false\ negative} + \mathrm{false\ positive}}\,.

Two other commonly used F measures are the F_{2} measure, which weights recall higher than precision, and the F_{0.5} measure, which puts more emphasis on precision than recall.
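
A short sketch of the general F_\beta, computed both from precision and recall and from the raw counts, using the same illustrative numbers as above (hypothetical function names, plain Python):

    # General F-beta in both formulations; they agree algebraically.
    def f_beta(precision, recall, beta):
        b2 = beta ** 2
        return (1 + b2) * precision * recall / (b2 * precision + recall)

    def f_beta_from_counts(tp, fp, fn, beta):
        b2 = beta ** 2
        return (1 + b2) * tp / ((1 + b2) * tp + b2 * fn + fp)

    p, r = 0.8, 2 / 3  # from tp=8, fp=2, fn=4
    for beta in (0.5, 1.0, 2.0):
        assert abs(f_beta(p, r, beta) - f_beta_from_counts(8, 2, 4, beta)) < 1e-9
        print(beta, f_beta(p, r, beta))
    # beta=0.5 leans toward precision (higher here), beta=2 toward recall.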

The F-measure was derived so that F_\beta "measures the effectiveness of retrieval with respect to a user who attaches β times as much importance to recall as precision".[1] It is based on Van Rijsbergen's effectiveness measure

E = 1 - \left(\frac{\alpha}{P} + \frac{1-\alpha}{R}\right)^{-1}.

Their relationship is F_\beta = 1 - E where \alpha=\frac{1}{1 + \beta^2}.
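
The relationship is a routine algebraic check (worked here for completeness, not quoted from the source): substituting \alpha=\frac{1}{1+\beta^2} and 1-\alpha=\frac{\beta^2}{1+\beta^2} into E gives

1 - E = \left(\frac{1}{(1+\beta^2)P} + \frac{\beta^2}{(1+\beta^2)R}\right)^{-1} = \left(\frac{R + \beta^2 P}{(1+\beta^2) P R}\right)^{-1} = \frac{(1+\beta^2) \cdot P \cdot R}{\beta^2 \cdot P + R} = F_\beta,

with P = precision and R = recall.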

Diagnostic testing

Diagnostic testing is closely related to binary classification; in this setting recall is often termed sensitivity. There are several reasons the F1 score can be criticized in particular circumstances.[2]


The underlying confusion matrix:

                            Predicted condition positive        Predicted condition negative
    Condition positive      True positive (TP)                  False negative (FN, Type II error)
    Condition negative      False positive (FP, Type I error)   True negative (TN)

Measures derived from the matrix:

Prevalence = Σ condition positive / Σ total population
True positive rate (TPR), sensitivity, recall = Σ true positive / Σ condition positive
False negative rate (FNR), miss rate = Σ false negative / Σ condition positive
False positive rate (FPR), fall-out = Σ false positive / Σ condition negative
True negative rate (TNR), specificity (SPC) = Σ true negative / Σ condition negative
Accuracy (ACC) = (Σ true positive + Σ true negative) / Σ total population
Positive predictive value (PPV), precision = Σ true positive / Σ test outcome positive
False discovery rate (FDR) = Σ false positive / Σ test outcome positive
False omission rate (FOR) = Σ false negative / Σ test outcome negative
Negative predictive value (NPV) = Σ true negative / Σ test outcome negative
Positive likelihood ratio (LR+) = TPR / FPR
Negative likelihood ratio (LR−) = FNR / TNR
Diagnostic odds ratio (DOR) = LR+ / LR−
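
These derived measures can all be read off a 2×2 confusion matrix; below is a minimal Python sketch under assumed raw counts (the function name and the example numbers are illustrative):

    # Diagnostic measures from a 2x2 confusion matrix.
    def diagnostic_measures(tp, fn, fp, tn):
        cond_pos = tp + fn                  # condition positive
        cond_neg = fp + tn                  # condition negative
        total = cond_pos + cond_neg
        tpr = tp / cond_pos                 # sensitivity, recall
        fnr = fn / cond_pos                 # miss rate
        fpr = fp / cond_neg                 # fall-out
        tnr = tn / cond_neg                 # specificity
        return {
            "prevalence": cond_pos / total,
            "accuracy": (tp + tn) / total,
            "recall (TPR)": tpr,
            "specificity (TNR)": tnr,
            "precision (PPV)": tp / (tp + fp),
            "NPV": tn / (tn + fn),
            "LR+": tpr / fpr,
            "LR-": fnr / tnr,
            "DOR": (tpr / fpr) / (fnr / tnr),
        }

    # Example: 8 TP, 4 FN, 2 FP, 86 TN out of 100 samples.
    for name, value in diagnostic_measures(8, 4, 2, 86).items():
        print(name, round(value, 3))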

Applications

The F-score is often used in the field of information retrieval for measuring search, document classification, and query classification performance.[3] Earlier works focused primarily on the F1 score, but with the proliferation of large-scale search engines, performance goals shifted to place more emphasis on either precision or recall,[4] and so F_\beta is seen in wide application.

The F-score is also used in machine learning.[5] Note, however, that the F-measures do not take the true negatives into account, and that measures such as the Phi coefficient, Matthews correlation coefficient, Informedness or Cohen's kappa may be preferable to assess the performance of a binary classifier.[2]
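
To make the true-negative point concrete, the sketch below holds TP, FP, and FN fixed while varying TN: F1 does not move, while the Matthews correlation coefficient does. The formulas are the standard ones; the counts are invented:

    import math

    def f1(tp, fp, fn):
        # Equivalent to the count-based formula above with beta = 1.
        return 2 * tp / (2 * tp + fp + fn)

    def mcc(tp, fp, fn, tn):
        num = tp * tn - fp * fn
        den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
        return num / den

    for tn in (5, 500):
        print(tn, f1(8, 2, 4), mcc(8, 2, 4, tn))
    # F1 stays at ~0.727 for both; MCC moves from ~0.37 to ~0.72.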

The F-score has been widely used in the natural language processing literature, such as the evaluation of named entity recognition and word segmentation.

G-measure

While the F-measure is the harmonic mean of recall and precision, the G-measure is the geometric mean:[2]

G =  \sqrt{\mathrm{precision} \cdot \mathrm{recall}}.
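
Because the harmonic mean never exceeds the geometric mean of the same two numbers, F1 ≤ G for any precision/recall pair; a quick numeric check with the illustrative values used earlier:

    import math

    p, r = 0.8, 2 / 3
    f1 = 2 * p * r / (p + r)      # harmonic mean, ~0.727
    g = math.sqrt(p * r)          # geometric mean, ~0.730
    assert f1 <= g
    print(f1, g)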

References

  1. Van Rijsbergen, C. J. (1979). Information Retrieval (2nd ed.). London: Butterworths.
  2. Powers, David M. W. (2011). "Evaluation: From Precision, Recall and F-Measure to ROC, Informedness, Markedness & Correlation". Journal of Machine Learning Technologies 2 (1): 37–63.
  3. Beitzel, Steven M. (2006). On Understanding and Classifying Web Queries (Ph.D. thesis). Illinois Institute of Technology.
  4. Li, X.; Wang, Y.-Y.; Acero, A. (July 2008). Learning query intent from regularized click graphs. Proceedings of the 31st SIGIR Conference.
  5. See, e.g., the evaluation of the CoNLL 2002 shared task.
