How Should We Define Quality?

October 14, 2020

As we continue to focus on improving the quality of the care we deliver to those we serve, benchmarks and external comparisons are essential. Granted, we all aspire to zero harm; however, we are not there yet. Comparing our improvement only to our own past performance is laudable but does not give a clear indication of progress.

Unquestionably, for those of us in the quality domain, there are many external comparators for measuring how we are doing. Whether it is CMS STARS, US News and World Report, Leapfrog, or any of a myriad of other agencies, each defines its scoring differently. One might surmise that a hospital that does well according to a single entity also does well in all the others. However, the literature does not support this assumption; in fact, the correlations are surprisingly low. Public surveys show that consumers have minimal interest in these “grading” organizations and instead rely on word of mouth, which is troubling.

This situation is not new, but as we learn more about the drivers of quality, such as socioeconomic factors, we find ourselves measuring in ways that fail to capture dimensions of quality that may matter most, such as serving vulnerable populations. When US News and World Report released its Best Hospitals Honor Roll of 21 hospitals, only hospitals in areas with higher life expectancy made the grade, leading one to postulate two possibilities: first, that these hospitals are the cause of the high life expectancy, and second, that the “test” does not account for healthcare’s regionality. The result is that the industry ignores numerous hospitals that provide excellent care within the context of their own environments.

This recognition may seem beside the point, since these grading organizations provide their “services” for a reason. They are incentivized to have the public accept them as essential; otherwise, they have no reason to exist. Healthcare providers and administrators are incredibly competitive, and goals are set based on measurements. If these comparisons are inaccurate, we run the risk of “skating to a quality puck” that is in the wrong place, failing to improve care and diminishing the work of many dedicated people.

Since incentives are tied to quality improvement and external measurements, these flaws can lead to health inequity and lower quality overall. Possible adjustments include accounting for population differences in the scoring and moving to ratings rather than rankings. Everyone should be able to obtain the highest score, so we should quit grading on a curve: “The good news is you scored incredibly high and are delivering great care; the bad news is you are in last place, since everyone else scored higher, and we are only reporting that you lost.”

We must also focus on measures that matter to patients. Yes, there are physiologic quality measures, but there are also measures that are meaningful to patients themselves. Lastly, we need to allow healthcare providers to go “behind the curtain” of the measuring body. If we are to improve, we must understand the review process. Transparency, and allowing providers to investigate and confirm the information and scores, is essential to corroborating the results.

Let us work together, the grader and the graded, to better design and deliver quality benchmarking and comparisons so we can better serve our communities.