Objective: To understand the impact of varying measurement periods on the calculation of electronic Clinical Quality Measures (eCQMs).
Background: eCQMs have become increasingly important in value-based programs, but progress toward accurate and timely measurement has been slow. This has required flexibility in key measure characteristics, including the measurement period, i.e., the timeframe a measure covers. The effects of variable measurement periods on accuracy and variability are not clear.
Methods: 209 practices were asked to extract four eCQMs from their Electronic Health Records and submit them quarterly using a 12-month measurement period. Quarterly submissions were collected via REDCap. The measurement periods of the survey data were categorized as standard (12 months) or non-standard (3, 6, or 9 months, or other). For comparison, patient-level data from three clinics were collected and calculated in an eCQM registry to measure the impact of varying measurement periods. We assessed the central tendency, shape of the distributions, and variability across the four measures. Analysis of variance (ANOVA) was conducted to test for differences in means between the standard and non-standard measurement-period groups and for variation within these groups.
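The group comparison described above can be sketched as a one-way ANOVA on performance rates. The example below is a minimal illustration using SciPy's `f_oneway`; the performance values are invented for demonstration and are not the study's data.

```python
# Hypothetical sketch: one-way ANOVA comparing eCQM performance rates
# reported under a standard (12-month) versus non-standard measurement
# period. All values below are invented example data.
from scipy.stats import f_oneway

# Invented performance rates (percent) for one measure
standard_12mo = [62.1, 58.4, 64.0, 60.3, 59.7, 63.2]
nonstandard = [48.5, 52.0, 47.3, 55.1, 50.9, 49.8]

# f_oneway returns the F statistic and the p-value for the null
# hypothesis that both groups share the same mean
f_stat, p_value = f_oneway(standard_12mo, nonstandard)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

With clearly separated group means, as in this toy data, the test rejects equality of means at the conventional .05 level used in the study.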
Results: Of 209 practices, 191 (91 percent) submitted data over three quarters. Of the 546 total submissions, 173 had non-standard measurement periods. Between clinics, differences between measures with standard versus non-standard periods ranged from –3.3 percent to 14.2 percent (p < .05 for 3 of 4 measures); using the patient-level data, the differences between non-standard and standard periods ranged from –1.6 percent to 0.6 percent.
Conclusion: Variations in measurement periods were associated with variation in performance between clinics for 3 of the 4 eCQMs, but did not produce significant differences when calculated within clinics. Deviations from the standard measurement period may reflect poor data quality and accuracy.