Thursday, April 26, 2012

HIV tests are 99% accurate?

When they say the tests are '99% accurate', what they mean is that 99 times out of 100, the SAME TEST given to the same person will return the same result. It does not mean the test is 99% accurate at detecting HIV, which, as you pointed out, the test does not actually do.
But what do most people think when they go to the testing site and are told the test is 99% accurate? You and I both know what they think, and it's one of the dirtiest mind tricks of the HIV shell game.
When they use the term "accuracy" with regard to HIV tests, they're referring to overall accuracy. The dirty trick is that they then use the overall accuracy to imply that a positive test is over 99% accurate, when this is far from the case. Indeed, even if there were no such thing as HIV (meaning that a positive test would have 0% accuracy), the overall accuracy of an HIV test would still be >99%, because >99% of all individuals tested would get a true negative reading.
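
To put a number on that, here is a quick back-of-the-envelope sketch in Python (the 99.5% specificity is just an assumed figure for illustration, not a published one):

    # Hypothetical scenario: a population with NO actual infections,
    # tested with an assumed specificity of 99.5%.
    population = 100_000
    specificity = 0.995

    true_negatives = population * specificity       # 99,500 correct (negative) results
    false_positives = population - true_negatives   # 500 wrong (positive) results

    print(f"Overall accuracy: {true_negatives / population:.1%}")        # 99.5%
    print(f"Accuracy of a positive result: 0 / {false_positives:.0f}")   # 0 / 500 -- i.e. 0%

Every single positive result in that scenario is false, yet the "overall accuracy" still comes out at 99.5%.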


Prior to doing any actual mathematical analysis of the subject, I'd never really thought much about "accuracy" in HIV testing, because "accuracy" is not really an accepted term in HIV testing: the accuracy of positive tests and the accuracy of negative tests will typically not be the same. Thus, in order to truly gauge the accuracy of an HIV test (or, for that matter, any test), two terms are required -- one describing the accuracy of positive results and one describing the accuracy of negative results -- and the proper terms are "sensitivity" and "specificity".

The concepts of sensitivity and specificity may seem hard to understand at first -- I know, because I struggled with them myself -- but they are actually very easy to grasp once you wrap your head around them. The real difficulty is that most explanations of them are more confusing than the concepts themselves. In the interest of keeping the explanation simple, I shall give one definition and one simple example for each:

Sensitivity: The ability of the test to correctly identify a "positive" condition, expressed as a percentage. EXAMPLE: If you tested 1000 people who actually do have HIV, using a test with 99% sensitivity, then 990 would get a "positive" reading (i.e. a true positive), and the other 10 would get a "negative" (i.e. false negative) reading.

Specificity: The ability of the test to correctly identify a "negative" condition, expressed as a percentage. EXAMPLE: If you tested 1000 people who actually don't have HIV, using a test with 99% specificity, then 990 would get a "negative" reading (i.e. a true negative), and the other 10 would get a "positive" (i.e. false positive) reading.
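
If it helps to see those two thought experiments as arithmetic, here is the same pair of 1,000-person examples as a short Python sketch (nothing here is specific to any real HIV test; the 99% figures are just the ones from the definitions above):

    # Sensitivity: test 1,000 people who really DO have the condition.
    have_condition = 1000
    sensitivity = 0.99
    true_positives = have_condition * sensitivity      # 990 correct "positive" results
    false_negatives = have_condition - true_positives  # 10 missed cases (false negatives)

    # Specificity: test 1,000 people who really do NOT have the condition.
    no_condition = 1000
    specificity = 0.99
    true_negatives = no_condition * specificity        # 990 correct "negative" results
    false_positives = no_condition - true_negatives    # 10 false alarms (false positives)

    print(true_positives, false_negatives)   # 990.0 10.0
    print(true_negatives, false_positives)   # 990.0 10.0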

Still confused? I know I was by this point, so don't sweat it. To further explore the concepts, let's take a look at how they apply in the real world.

The law of conditional probability that governs tests which return a "yes-or-no" result is known as Bayes' theorem. Don't let the name intimidate you -- Bayesian math is actually quite easy to comprehend once you understand the concepts of sensitivity and specificity.

Let's say, for example, that you have a population of 100,000 people, 10% of whom (10,000) smoke marijuana, and you test them all for THC using a test with 95% sensitivity and 99% specificity.

First, let's test the marijuana users: 10,000 X 95% sensitivity = 9,500 true positives, with 500 false negatives remaining.

Now, let's test the non-users: 90,000 X 99% specificity = 89,100 true negatives, with 900 false positives remaining.

At this point, I want you to note the difference between the accuracy of the positive tests, the accuracy of the negative tests, and the overall accuracy of the test.

A total of 10,400 people tested positive, with 9,500 of those being accurate results. 9,500 / 10,400 = 91.34% accuracy for a positive test.

A total of 89,600 people tested negative, with 89,100 of those being accurate results. 89,100 / 89,600 = 99.44% accuracy for a negative test.

Now, a total of 100,000 people were tested, and 98,600 (98.6%) got an accurate result.

So, the accuracy of a positive test is 91.34%; the accuracy of a negative test is 99.44%; and the overall accuracy of the test is 98.6%.
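
For anyone who wants to check my arithmetic, here is the whole example as a small Python sketch (the variable names are my own; this isn't pulled from any testing manual):

    # 100,000 people, 10% prevalence, 95% sensitivity, 99% specificity.
    tp = 10_000 * 0.95           # 9,500 true positives
    fn = 10_000 - tp             # 500 false negatives
    tn = 90_000 * 0.99           # 89,100 true negatives
    fp = 90_000 - tn             # 900 false positives

    ppv = tp / (tp + fp)         # accuracy of a positive test: 9,500 / 10,400
    npv = tn / (tn + fn)         # accuracy of a negative test: 89,100 / 89,600
    overall = (tp + tn) / 100_000

    print(f"positive {ppv:.1%}, negative {npv:.1%}, overall {overall:.1%}")
    # positive 91.3%, negative 99.4%, overall 98.6%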

An extremely common fallacy in testing is to gauge the accuracy of a positive test by the overall accuracy, rather than by the actual accuracy of a positive test (known as "Positive Predictive Value", or PPV). In the above example, it doesn't matter much, because the difference between 98.6% and 91.34% just isn't large enough to be very consequential.

Which leads to the second thing you need to know about Bayesian mathematics: The accuracy of a given test is in part governed by the actual prevalence of the condition for which you're testing.

Let's say, for example, that you repeated the above study, except that this time you were working with a population of 100,000 in which only one half of one per cent (0.5%, or 500 people) are actually using marijuana.

As before, first we'll test the stoners: 500 X 95% sensitivity = 475 true positives, plus 25 false negatives.

Now, the straights: 99,500 X 99% specificity = 98,505 true negatives, plus 995 false positives.

Note that this time there are more than twice as many false positives as true positives. In fact, there are nearly twice as many false positives as there are people who actually use marijuana in the sample.

This means that a positive result has an accuracy of only 32.3%, meaning, in turn, that a positive result will be a false positive roughly twice as often as it will be correct.

However, check out what happens when we compute the overall accuracy: 98,980 accurate results (98,505 true negatives + 475 true positives) / 100,000 = 98.98% overall accuracy.

So, despite the fact that two out of three positive tests are wrong, the overall accuracy of the test is actually higher than it was when the positive results were more accurate.

How does this happen? It's simply a weighted average at work. The overall accuracy percentage is essentially an average of the accuracy of positive results and the accuracy of negative results, weighted toward whichever kind of result is more common (in this case, true negatives). Of the negative results obtained, 98,505 out of 98,530 were true negatives, meaning that the accuracy of a negative test was 99.975%. Thus, because the vast majority of test results are true negatives, the overall accuracy of the test appears high, despite the fact that a positive test is roughly twice as likely to be wrong as to be right.
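
Again, you don't have to take my word for it -- here is the 0.5% example worked out in the same kind of Python sketch, including the weighted average spelled out explicitly:

    # 0.5% prevalence: 500 users, 99,500 non-users, same 95%/99% test.
    tp = 500 * 0.95                  # 475 true positives
    fn = 500 - tp                    # 25 false negatives
    tn = 99_500 * 0.99               # 98,505 true negatives
    fp = 99_500 - tn                 # 995 false positives

    ppv = tp / (tp + fp)             # 475 / 1,470 -- about 32.3%
    npv = tn / (tn + fn)             # 98,505 / 98,530 -- about 99.975%
    overall = (tp + tn) / 100_000    # 98,980 / 100,000 = 98.98%

    # The overall figure really is an average of PPV and NPV, weighted by
    # how many positive and negative results there are:
    share_pos = (tp + fp) / 100_000  # 1.47% of all results are positive
    share_neg = (tn + fn) / 100_000  # 98.53% of all results are negative
    print(f"{overall:.2%} vs {ppv * share_pos + npv * share_neg:.2%}")   # 98.98% vs 98.98%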

Bayes' theorem dictates that when mass testing is done in a population with a low prevalence of the condition being tested for, there will be large numbers of false positives. In the above examples, we saw that when prevalence was somewhat low (10%), the accuracy of a positive test was high, though not perfect (91.34%). However, in the second example, when prevalence was much lower (0.5%), there were actually more false positives than true positives, even though we were using the exact same test.

In fact, it was no coincidence that there were roughly twice as many false positives as true positives, since the specificity gap (1%) was twice the actual prevalence of 0.5%. Whenever the specificity gap (100% minus the specificity) is equal to the actual prevalence, roughly half of all positives will be false positives; if the gap is twice the actual prevalence, there will be roughly twice as many false positives as true positives.
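
If you want to see where that rule of thumb comes from: the ratio of false positives to true positives works out to (1 - specificity) times (1 - prevalence), divided by (sensitivity times prevalence), which for a near-perfect sensitivity is roughly the specificity gap divided by the prevalence. A quick sketch of my own (just Bayes' theorem rearranged, not a standard formula from any testing guideline):

    def false_to_true_positive_ratio(prevalence, sensitivity, specificity):
        # Ratio of false positives to true positives, straight from Bayes' theorem.
        false_pos_rate = (1 - prevalence) * (1 - specificity)
        true_pos_rate = prevalence * sensitivity
        return false_pos_rate / true_pos_rate

    # Specificity gap (1%) equal to prevalence (1%): about one false per true positive.
    print(false_to_true_positive_ratio(0.01, 0.95, 0.99))    # ~1.04

    # Specificity gap (1%) twice the prevalence (0.5%): about two false per true positive.
    print(false_to_true_positive_ratio(0.005, 0.95, 0.99))   # ~2.09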

For this reason, the accuracy of a positive HIV test is much lower than is frequently claimed, and is in fact misrepresented by substituting overall accuracy in the place of the much lower accuracy of a positive test.

And this is the dirtiest mind trick of the HIV shell game.
