CMS Star Ratings – It’s Time for a Change

June 24, 2021

In 2016, the Centers for Medicare and Medicaid Services (CMS) introduced the Overall Hospital Quality Star Rating program to create transparency around hospital quality by summarizing dozens of metrics on the Hospital Compare website. From the start, there was considerable consternation over the validity of the data. All hospitals were compared to one another regardless of their unique dynamics, and all were graded on a curve, meaning a hospital performing exceedingly well could still receive a low score simply because of how it compared to its peers. These concerns only grew as research revealed methodological, statistical, and conceptual flaws in the program and its scoring.

As a result, significant changes were released in April 2021, designed to improve the validity of the data, increase predictability and replicability, and allow more meaningful comparisons among hospitals. One major issue hospitals have faced is difficulty figuring out how the scoring works, which handicaps their ability to improve. If one does not understand the problems, how does one learn and improve?

The critical changes released in 2021 included streamlining the methodology to a simple average of the component measures rather than a latent variable model, with measures now equally weighted. This change allows hospitals to calculate their own score and predict future performance more readily. Additionally, hospitals are included only if they report enough data for a meaningful rating. Finally, hospitals are no longer all ranked together; they are now classified into three peer groups, allowing more appropriate comparisons.
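To make that arithmetic concrete, the sketch below walks through the simple-average idea. The measure groups, scores, and the equal weighting across groups are purely illustrative assumptions, not CMS’s actual measure set or published group weights.

```python
# Illustrative sketch only: hypothetical measures and groups, not CMS's
# actual specification or published group weights.
from statistics import mean

# Hypothetical standardized measure scores, already organized by group.
measure_groups = {
    "Mortality":          [0.20, -0.10, 0.40],
    "Safety of Care":     [0.10, 0.30],
    "Readmission":        [-0.20, 0.00, 0.10],
    "Patient Experience": [0.50, 0.20, 0.30],
}

# Step 1: each group's score is a simple, equally weighted average of its
# measures -- the replacement for the earlier latent variable model.
group_scores = {group: mean(vals) for group, vals in measure_groups.items()}

# Step 2: roll the group scores up into one summary score. This sketch
# weights the groups equally for simplicity; in practice CMS assigns each
# group its own weight.
summary_score = mean(group_scores.values())

print(group_scores)
print(f"Summary score: {summary_score:.3f}")
```

Because the calculation is now just averaging, a hospital can reproduce its own score from its reported measures, which is exactly the predictability the change was meant to provide.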

However, despite these improvements, critical areas remain that require further evaluation and adjustment. The primary issue is the concept of grading on a curve. In 2021, only 14% of hospitals received five stars. This grading method is based on a statistical model that bins hospitals into pre-specified groups, so no matter how much a hospital improves its performance or score, it still may not achieve a high rating. As hospitals improve, the differences in cut-offs between these bins become so minuscule that they are clinically meaningless. It reminds me of a class graded on a curve: “The good news is you scored 94 out of 100; the bad news is you still failed, because everyone else scored 95 or above.”
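A toy example shows why the curve itself is the problem: when star cutoffs are derived from how a peer group’s scores cluster rather than from fixed thresholds, someone must always occupy the bottom bin, even when every score is high in absolute terms. The quintile-based binning and the numbers below are illustrative assumptions, not the actual CMS clustering algorithm.

```python
# Illustrative only: relative (curve-based) binning vs. absolute performance.
# Scores and binning method are hypothetical, not the actual CMS algorithm.
import numpy as np

# A peer group in which every hospital performs well in absolute terms.
scores = np.array([94.0, 94.5, 95.0, 95.5, 96.0, 96.5, 97.0, 97.5, 98.0, 98.5])

# Curve-based approach: the cut points come from the distribution itself
# (quintiles here), so some hospitals must always land in the bottom bin --
# even the one scoring 94 out of 100.
quintile_edges = np.quantile(scores, [0.2, 0.4, 0.6, 0.8])
stars_on_curve = 1 + np.searchsorted(quintile_edges, scores, side="right")

for score, stars in zip(scores, stars_on_curve):
    print(f"score {score:5.1f} -> {stars} star(s)")

# The 94.0 hospital receives 1 star despite near-perfect absolute performance,
# and the cut points sit fractions of a point apart -- the clinically
# meaningless differences described above.
```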

A superior methodology would be to set score cutoffs in a manner meaningful to care delivery and outcomes. This approach would tell the public how good a hospital truly is, rather than how it compares on a curve. And if quality were stellar at all hospitals, which is the goal, all hospitals would receive a high star rating.

Another item requiring consideration is auditing of the submitted data. Many of the measures are collected by abstraction from charts, and if the consistency of interpretation and the resources allocated to collecting such data differ among hospitals, the results will vary. Consistency and auditing are needed to ensure hospitals are scored on equal footing. Transparency of quality is vital for the end-user, the consumer, and we owe it to them to verify that what they view is correct, accurate, and meaningful to their care.