Author(s): Ninet Sinaii, Sean D Cleary and Pamela Stratton
Background:
A simple sensitivity analysis technique was developed to assess the impact of misclassification and verify observed prevalence differences between distinct populations.
Methods:
The prevalence of self-reported comorbid diseases in 4,331 women with surgically diagnosed endometriosis was compared with published clinical and population-based prevalence estimates. Disease prevalence misclassification was assessed by assuming over-reporting in the study sample and under-reporting in the general (comparison) population. Over- and under-reporting by 10%, 25%, 50%, 75%, and 90% was used to create a 5×5 table for each disease. The adjusted prevalences represented by each table cell were compared using p-values, prevalence odds ratios, and 95% confidence intervals.
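To make the grid construction concrete, the sketch below (in Python) adjusts the two prevalences by each over-/under-reporting fraction, rebuilds the corresponding 2×2 counts, and computes a prevalence odds ratio with a Woolf-type 95% confidence interval for every cell of the 5×5 table. The proportional adjustment, the function name adjusted_por_grid, and the 12%/6% example figures with a comparison sample of 10,000 are illustrative assumptions, not the published calculation.

import math

def adjusted_por_grid(n_study, p_study, n_pop, p_pop,
                      fractions=(0.10, 0.25, 0.50, 0.75, 0.90)):
    # Build the 5x5 grid of prevalence odds ratios (POR) with 95% CIs under
    # assumed over-reporting in the study sample and under-reporting in the
    # comparison population. The proportional adjustment here is an
    # illustrative assumption, not necessarily the correction used in the paper.
    grid = {}
    for f_over in fractions:        # assumed over-reporting in study sample
        for f_under in fractions:   # assumed under-reporting in comparison population
            p1 = p_study * (1 - f_over)             # corrected study prevalence
            p2 = min(p_pop * (1 + f_under), 0.999)  # corrected comparison prevalence
            a, b = n_study * p1, n_study * (1 - p1)  # cases / non-cases, study
            c, d = n_pop * p2, n_pop * (1 - p2)      # cases / non-cases, comparison
            por = (a * d) / (b * c)                  # prevalence odds ratio
            se = math.sqrt(1/a + 1/b + 1/c + 1/d)    # SE of ln(POR), Woolf method
            lo = math.exp(math.log(por) - 1.96 * se)
            hi = math.exp(math.log(por) + 1.96 * se)
            grid[(f_over, f_under)] = (round(por, 2), round(lo, 2), round(hi, 2))
    return grid

# Hypothetical example: a condition reported by 12% of the 4,331 study
# participants versus a 6% published prevalence in a comparison sample of 10,000.
for cell, (por, lo, hi) in sorted(adjusted_por_grid(4331, 0.12, 10000, 0.06).items()):
    print(cell, "POR =", por, "95% CI", lo, "-", hi)

Cells whose confidence interval excludes 1.0 correspond to prevalence differences that would remain significant under that combination of assumed misclassification.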
Results:
Three misclassification patterns were observed: 1) differences remained significant except at high degrees (>50%) of misclassification; 2) minimal (10%) misclassification eliminated the observed difference; and 3) with moderate (25-50%) misclassification the difference disappeared, and the direction of significance reversed at higher levels (>50%).
Conclusions:
This sensitivity analysis enabled us to verify observed prevalence differences. It is a simple, useful approach for comparing prevalence estimates between distinct populations.