Test Sensitivity and Selectivity

This important concept needs to be thoroughly understood, as it is key to determining whether a test is reliable and thus how good its findings are.

True Positives & True Negatives

Key to determining the validity of our tests (and thus their findings) is whether the test is reliable or not. One way of looking at the internal validity of a test is through its Sensitivity and Selectivity (or specificity).

  • Sensitivity is the percent of positives that a test correctly identifies (called true positives).
  • Selectivity is the percent of negatives that are correctly identified (true negatives).

A good test is sensitive if it correctly diagnoses diseased patients (true positives), but it should also be able to specify, or select out, people who do not have the disease (true negatives). A test rarely scores perfectly on both at the same time; e.g., a good COVID-19 test should be sensitive enough to correctly identify COVID-infected patients at least 90% of the time (have a ≥ 90% true positive rate), which still means it could miss infected patients up to 10% of the time (a ≤ 10% false negative rate). Study this on Wikipedia by looking up the article on “Sensitivity and Specificity,” or review any statistics book.
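To make the two rates concrete, here is a minimal Python sketch. The counts are invented for illustration and assume a hypothetical COVID-19 test with roughly 90% sensitivity; they are not taken from any real study.

    # Sensitivity = share of actual positives the test correctly flags.
    # Selectivity (usually called specificity) = share of actual negatives it correctly clears.

    def sensitivity(true_pos: int, false_neg: int) -> float:
        """True positive rate: correct positives out of all actual positives."""
        return true_pos / (true_pos + false_neg)

    def selectivity(true_neg: int, false_pos: int) -> float:
        """True negative rate: correct negatives out of all actual negatives."""
        return true_neg / (true_neg + false_pos)

    # Hypothetical counts: 1,000 infected and 1,000 healthy people tested.
    tp, fn = 900, 100   # infected: 900 caught, 100 missed (false negatives)
    tn, fp = 950, 50    # healthy: 950 cleared, 50 falsely flagged (false positives)

    print(f"Sensitivity: {sensitivity(tp, fn):.0%}")   # 90%
    print(f"Selectivity: {selectivity(tn, fp):.0%}")   # 95%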

I developed a test to determine the best varsity basketball players in any high school – and it never fails: it has 100% sensitivity in detecting the best among all students. My test: I only choose students over five feet in height, both boys and girls, among all the 11th and 12th graders. Since it selects pretty much every student in the 11th and 12th grades, it never misses a good player, i.e., it produces no false negatives! In fact, it has a 100% true positive rate in selecting players. Of course, the problem is that it fails to select out any bad players (Holy Cow, it doesn’t select out anyone!), so it detects 0% of poor players (a 0% true negative rate), which is rotten. What good is a test that is sensitive (accurately picks out 100% of the best players) but does so by selecting anyone over five feet tall, which includes the entire Junior and Senior population at the school?
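Here is the same calculation applied to a toy version of my height-based test. The student list and the height_test rule are made up; the only point is that a rule which flags nearly everyone scores 100% on sensitivity while scoring 0% on selectivity.

    # Toy data: (height in inches, is the student actually a good varsity player?)
    students = [
        (74, True), (71, True),                               # the genuinely good players
        (68, False), (65, False), (70, False), (62, False),   # everyone else
    ]

    def height_test(height_in_inches: int) -> bool:
        """Predict 'good player' for anyone over five feet (60 inches)."""
        return height_in_inches > 60

    tp = sum(1 for h, good in students if good and height_test(h))
    fn = sum(1 for h, good in students if good and not height_test(h))
    tn = sum(1 for h, good in students if not good and not height_test(h))
    fp = sum(1 for h, good in students if not good and height_test(h))

    print(f"Sensitivity (true positive rate): {tp / (tp + fn):.0%}")   # 100%
    print(f"Selectivity (true negative rate): {tn / (tn + fp):.0%}")   # 0%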

A coach also needs to rule out poor players; i.e., the coach needs the ability to detect who is likely to be good (yield true positives) and who is NOT good (true negatives). The test the coach wants should be sensitive to the criterion of who is good (yield true positives while missing as few good players as possible) while generating as few false positives as possible. To do that, it must also be selective, so as to rule out as many bad players as possible (yield true negatives) without generating too many false negatives, i.e., without calling a truly good player bad. My test doesn’t help the coach much, who already knows that the few best players will be among all the school’s students, Duh! The problem is devising a test that finds the truly good players (true positives) as well as the truly bad players (true negatives), so as to keep the latter off the team. Clear as mud, right? Well, keep at it.
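One way to see the trade-off the coach faces is to vary the cutoff in the toy test above: raising it rules out more poor players (better selectivity) but eventually starts to miss genuinely good ones (worse sensitivity). Again, the data and the rates helper are invented for illustration.

    students = [
        (74, True), (71, True),
        (68, False), (65, False), (70, False), (62, False),
    ]

    def rates(cutoff_inches: int):
        """Return (sensitivity, selectivity) for the rule 'good if taller than cutoff'."""
        tp = sum(1 for h, good in students if good and h > cutoff_inches)
        fn = sum(1 for h, good in students if good and h <= cutoff_inches)
        tn = sum(1 for h, good in students if not good and h <= cutoff_inches)
        fp = sum(1 for h, good in students if not good and h > cutoff_inches)
        return tp / (tp + fn), tn / (tn + fp)

    for cutoff in (60, 66, 69, 72):
        sens, sel = rates(cutoff)
        print(f"cutoff {cutoff} in: sensitivity {sens:.0%}, selectivity {sel:.0%}")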
