What are the indicators of test reliability?
Four indicators are most commonly used to determine the reliability of a clinical laboratory test. Two of these, accuracy and precision, reflect how well the test method performs day to day in a laboratory. The other two, sensitivity and specificity, deal with how well the test is able to distinguish disease from absence of disease.
The accuracy and precision of each test method are established and frequently monitored by professional laboratory personnel. Sensitivity and specificity data are determined by research studies and are generally found in the medical literature. Although each test has its own performance measures and appropriate uses, laboratory tests are designed to be as precise, accurate, specific, and sensitive as possible. These basic concepts are the cornerstones of the reliability of your test results and underpin the confidence your health care provider places in the clinical laboratory.
Accuracy and Precision
Statistical measurements of accuracy and precision reveal a lab test’s basic reliability. These terms are not interchangeable. A test method can be precise (reliably reproducible) without being accurate (measuring what it is supposed to measure), and vice versa.
Precision (Repeatability) A test method is said to be precise when repeated analyses on the same sample give similar results. When a test method is precise, the amount of random variation is small. The test method can be trusted because results are reliably reproduced time after time. Picture a dartboard with darts all clustered together – but not at the bull’s eye – and you see what a precise but inaccurate method produces: the method can be counted on to reach the same target over and over again, but the target may not be the right one!
Accuracy (Trueness) A test method is said to be accurate when it measures what it is supposed to measure – in technical terms, when the test value approaches the absolute “true” value of the substance (analyte) being measured. Results from every test performed are compared with known “control specimens” that have undergone multiple evaluations and have been checked against the “gold standard” for that assay, so results are analysed to the best testing standards available. Picture a dartboard with a dart right in the centre of the bull’s eye and you see what an accurate method produces: the method is capable of hitting the intended target.
When the method is both precise and accurate – bull’s eye every time!
Although a test that is 100% accurate and 100% precise is the ideal, in reality, tests, instruments, and laboratory operations all contribute to small but measurable variations in results. The small amount of variability that typically occurs does not usually detract from the test’s value and is statistically insignificant. The level of precision and accuracy that can be obtained is specific to each test method but is constantly monitored for reliability through quality control and assessment procedures. When your blood is tested repeatedly, your results should change little unless your state of health has improved or deteriorated. A slightly larger difference in precision and accuracy can often be seen between two laboratories, so your results may vary somewhat more when a repeat test is performed by a different laboratory.
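To make the distinction concrete, here is a minimal Python sketch using made-up numbers: the “true” value of 5.0 and the replicate results for the two hypothetical methods are purely illustrative, not real laboratory data. It treats the average deviation from the true value as a rough indicator of accuracy (trueness) and the scatter between replicates as a rough indicator of precision (repeatability).

```python
# Minimal sketch (hypothetical values): accuracy vs. precision for
# repeated measurements of the same control specimen.
from statistics import mean, stdev

true_value = 5.0  # assumed "true" value of the control specimen

# Precise but inaccurate: results cluster tightly, but off target
method_a = [5.8, 5.9, 5.8, 5.9, 5.8]
# Accurate but imprecise: results centre on the true value, but scatter
method_b = [4.6, 5.5, 4.8, 5.4, 4.7]

for name, results in [("Method A", method_a), ("Method B", method_b)]:
    bias = mean(results) - true_value   # closeness to the true value (accuracy)
    spread = stdev(results)             # variation between replicates (precision)
    print(f"{name}: bias = {bias:+.2f}, spread (SD) = {spread:.2f}")
```

Running the sketch shows Method A with almost no spread but a large bias (precise, not accurate) and Method B with no bias but a wide spread (accurate, not precise) – the two dartboard pictures expressed as numbers.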
Sensitivity and Specificity
To be effective, a medical test is expected to detect abnormalities with a very high degree of confidence. How likely is it that an individual who has a positive test has the disease? What are the chances that an individual has a certain disorder even though the test for it was negative?
Sensitivity
Sensitivity is the ability of a test to correctly identify individuals who have a given disease or condition. For example, a certain test may have been shown to be 90% sensitive. If 100 people are known to have a certain disease, that test will correctly identify 90 of those 100 cases. The other 10 people who have the disease will not show the expected result on the test. For that 10%, the finding of a “normal” result is a misleading false-negative result.
A test’s sensitivity becomes particularly important when you are seeking to exclude a dangerous disease, such as when testing for the presence of HIV antibody. Screening for HIV antibody often uses an ELISA (enzyme-linked immunosorbent assay) method, which has greater than 99% sensitivity. However, a person may get a false-negative result if they are tested too soon after the initial infection (less than 6 weeks). A false-negative result gives a person the sense of being disease-free when in fact they are not. The more sensitive a test, the fewer “false-negative” results it produces.
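The arithmetic behind the 90% figure can be written out in a short Python sketch. The counts below simply mirror the hypothetical example above (100 people with the disease, 90 correctly detected); the function name is my own and not a standard library routine.

```python
# Minimal sketch of the sensitivity calculation described above.
def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Proportion of diseased individuals correctly identified by the test."""
    return true_positives / (true_positives + false_negatives)

true_positives = 90   # diseased people with a positive (abnormal) result
false_negatives = 10  # diseased people with a misleading "normal" result

print(f"Sensitivity: {sensitivity(true_positives, false_negatives):.0%}")  # 90%
```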
Specificity
Specificity is the ability of a test to correctly exclude individuals who do not have a given disease or condition. For example, a certain test may have proven to be 90% specific. If 100 healthy individuals are tested with that method, only 90 of those 100 healthy people will be found “normal” (disease-free) by the test. The other 10 people (who do not have the disease) will appear to be positive for that test. For that 10%, their “abnormal” findings are a misleading false-positive result. When it is necessary to confirm a diagnosis that requires dangerous therapy, a test’s specificity is one of the crucial indicators. A patient who is told that a test result is positive when it is not may be subjected to potentially painful or dangerous treatment, additional expense, and unwarranted anxiety. The more specific a test, the fewer “false-positive” results it produces.
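In the same spirit, a short sketch of the specificity calculation, again using only the hypothetical counts from the example above (100 healthy people, 90 correctly reported as “normal”); the function name is illustrative.

```python
# Minimal sketch of the specificity calculation described above.
def specificity(true_negatives: int, false_positives: int) -> float:
    """Proportion of healthy individuals correctly excluded by the test."""
    return true_negatives / (true_negatives + false_positives)

true_negatives = 90   # healthy people with a "normal" result
false_positives = 10  # healthy people with a misleading "abnormal" result

print(f"Specificity: {specificity(true_negatives, false_positives):.0%}")  # 90%
```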
Developers and manufacturers of a new test must provide target values for test results, evidence for the expected ranges, and information on test limitations and other factors that could generate false results.