Basics of the Receiver Operating Characteristic (ROC) Curve

A Receiver Operating Characteristic (ROC) curve is used to describe the trade-off between correct classifications and wrong classifications.

The ROC curve is a plot of the true positive rate (TPR) against the false positive rate (FPR).

The performance of a classifier at a given threshold is represented as a single point on the curve.

The total performance of a classifier is summarized over all possible thresholds. This overall performance is given by the area under the curve (AUC).

A high-performing model will have an ROC curve that passes close to the upper-left corner of the plot and therefore encloses a large area under it. This is shown in Figure 1.

Figure 1: ROC Curve

So we can see that the larger the AUC, the better the classifier will be.

For the classifier in Figure 1, the AUC is about 0.85, which is close to 1, so it would be considered a very good classifier.

A classifier with an AUC of 0.5 (the blue line in Figure 1) is considered a 'no-information' classifier: it performs no better than random guessing.
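The AUC has a useful probabilistic interpretation: it is the probability that a randomly chosen positive observation is scored higher than a randomly chosen negative one. A minimal sketch in plain Python, using made-up labels and scores (not the data behind Figure 1):

```python
# Pairwise computation of AUC: count how often a positive observation
# outscores a negative one. Labels and scores are assumed for illustration.
y_true  = [1, 1, 0, 1, 0, 0, 1, 0]                          # actual classes
y_score = [0.90, 0.80, 0.70, 0.60, 0.55, 0.40, 0.35, 0.20]  # P(+|A)

pos = [s for y, s in zip(y_true, y_score) if y == 1]
neg = [s for y, s in zip(y_true, y_score) if y == 0]

# Count concordant (positive, negative) pairs; ties count as half.
wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
           for p in pos for n in neg)
auc = wins / (len(pos) * len(neg))
print(auc)  # 0.75 here; 0.5 would indicate a no-information classifier
```

An AUC of 0.75 for these assumed scores means the classifier ranks a positive above a negative 75% of the time.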


Specificity and Sensitivity

Adjusting the classifier threshold also changes the true positive rate (TPR) and the false positive rate (FPR). The true positive rate is known as the sensitivity of the classifier. It measures the proportion of actual positives that the classifier correctly identifies.

The specificity is the true negative rate: the proportion of actual negatives that the classifier correctly identifies. One minus the specificity (1 − specificity) equals the false positive rate, which is why the ROC curve is sometimes described as sensitivity plotted against 1 − specificity.
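These quantities follow directly from the confusion-matrix counts. A small sketch with assumed counts (not taken from any table in this post):

```python
# Sensitivity and specificity from assumed confusion-matrix counts.
TP, FN = 40, 10   # actual positives: correctly vs wrongly classified
TN, FP = 45, 5    # actual negatives: correctly vs wrongly classified

sensitivity = TP / (TP + FN)   # true positive rate
specificity = TN / (TN + FP)   # true negative rate
fpr = FP / (FP + TN)           # false positive rate = 1 - specificity

print(sensitivity, specificity, fpr)
```

With these counts the sensitivity is 0.8 and the specificity is 0.9, so the classifier's point on the ROC curve would sit at (FPR, TPR) = (0.1, 0.8).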

Let us now work through an example.


How to Construct an ROC Curve

We will use a classifier that classifies 10 observations as either positive or negative. This is shown in Table 1.

It performs the classification by producing the posterior probability of the positive class given the observation, that is, P(+ | A), where A is the observation.


Table 1: ROC Output for 10 Observations

From Table 1:

For observation 1, P(+|A) is 0.95, so the classifier correctly classifies it as positive. The same holds for observation 2. Observation 3, however, is wrongly classified: the classifier assigns it to + with a P(+|A) of 0.85 even though it is actually negative. The same applies to observations 4 and 5, and so on.

Observation 10, with a P(+|A) of 0.25, is classified as negative, which is correct.
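At a fixed threshold, classification is just a comparison of P(+|A) against that threshold. A quick sketch with assumed probabilities (in the style of Table 1, but not the actual table entries):

```python
# Turning posterior probabilities into class labels at a threshold of 0.5.
# The probabilities below are assumed for illustration.
probs = [0.95, 0.93, 0.85, 0.76, 0.53, 0.43, 0.25]
threshold = 0.5

labels = ['+' if p >= threshold else '-' for p in probs]
print(labels)
```

Lowering the threshold turns more observations into predicted positives, which raises both the TPR and the FPR; sweeping it across all values traces out the ROC curve.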


Steps in constructing an ROC curve

  • Begin with a classifier that outputs the posterior probability P(+|A) for each observation
  • Sort the observations in descending order of P(+|A)
  • Apply a threshold at each unique value of P(+|A)
  • Count the number of TP, FP, TN and FN at each threshold
  • Calculate the True Positive Rate: TPR = TP/(TP + FN)
  • Calculate the False Positive Rate: FPR = FP/(FP + TN)
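The steps above can be sketched in plain Python. The labels and probabilities below are assumed for illustration, not the actual entries of Table 1:

```python
# Constructing ROC points by sweeping the threshold through the sorted
# probabilities. Each (probability, true label) pair is assumed data.
data = [(0.95, '+'), (0.93, '+'), (0.85, '-'), (0.76, '-'),
        (0.53, '+'), (0.43, '-'), (0.25, '-')]

# Sort in descending order of P(+|A).
data.sort(key=lambda t: t[0], reverse=True)

P = sum(1 for _, y in data if y == '+')   # total actual positives
N = len(data) - P                         # total actual negatives

roc_points = [(0.0, 0.0)]                 # (FPR, TPR), starting at the origin
tp = fp = 0
# Lower the threshold past each observation and recount TP and FP;
# TN and FN follow implicitly from the totals.
for prob, y in data:
    if y == '+':
        tp += 1
    else:
        fp += 1
    roc_points.append((fp / N, tp / P))

print(roc_points)
```

Plotting these (FPR, TPR) pairs from (0, 0) to (1, 1) gives the ROC curve; note that if several observations shared the same P(+|A), they would have to be moved past the threshold together rather than one at a time.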


Kindson Munonye is currently completing his doctoral program in Software Engineering at the Budapest University of Technology and Economics.

