Until recently, Chicago’s Rush University Medical Center boasted the maximum of 5 stars in the Centers for Medicare and Medicaid Services (CMS) hospital rating system. Data used to compute the July 2018 rating indicated that the hospital had improved in many areas, so it came as a shock when administrators previewed the new ratings and found that Rush had dropped to 3 stars, according to Chicago Booth’s Dan Adelman.
Even a hospital that improves in every single metric can experience a rating drop, says Adelman. This indicates a problem with the current CMS system—and he suggests a way to address it.
The CMS rating system organizes hundreds of hospital metrics into seven categories: mortality, safety of care, readmission, patient experience, effectiveness of care, timeliness of care, and efficient use of medical imaging. Within each category, it uses what statisticians call a latent variable model, which gives more weight to metrics that are statistically correlated with one another, whether or not those metrics are the best indicators of a hospital’s performance.
The latent variable model assumes that in each category there is a single, unobserved factor driving the performance measures. If several metrics in a category are strongly correlated, the model assumes they are all driven by that latent factor and gives them correspondingly more weight when computing the hospital’s score.
For example, a hospital’s Patient Safety and Adverse Events Composite, known as PSI-90, rolls up a number of measures, including hospital mistakes, patient falls, and infection rates. Until recently, PSI-90 had been given the most weight among the performance measures, but thanks to stronger correlations in the data used to calculate the July 2018 ratings, a different metric drew more weight: complications from knee and hip surgeries.
The problem is that these surgeries affect far fewer patients and may not even be performed at every hospital, yet complications from them became a major driver of the ratings of all hospital systems.
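The mechanics behind that shift are easy to demonstrate. What follows is a minimal sketch, assuming a one-factor model whose weights are approximated by the leading eigenvector of the metrics’ correlation matrix; CMS’s actual estimation procedure is more involved, and the metric names and correlation values here are hypothetical.

```python
# Toy one-factor latent variable model: metric weights come from the leading
# eigenvector of the correlation matrix. A sketch of the general technique,
# not CMS's methodology; all names and numbers below are hypothetical.
import numpy as np

def factor_weights(corr):
    """Approximate one-factor loadings with the eigenvector belonging to the
    largest eigenvalue of the correlation matrix, normalized to sum to 1."""
    _, eigvecs = np.linalg.eigh(corr)   # eigenvalues in ascending order
    loadings = np.abs(eigvecs[:, -1])   # leading eigenvector
    return loadings / loadings.sum()

metrics = ["PSI-90", "infections", "knee/hip complications"]

# Before: knee/hip complications correlate only weakly with the other metrics.
corr_old = np.array([[1.00, 0.60, 0.30],
                     [0.60, 1.00, 0.30],
                     [0.30, 0.30, 1.00]])

# After: new data show stronger correlations for knee/hip complications.
corr_new = np.array([[1.00, 0.60, 0.65],
                     [0.60, 1.00, 0.65],
                     [0.65, 0.65, 1.00]])

for label, corr in [("old data", corr_old), ("new data", corr_new)]:
    print(label, dict(zip(metrics, factor_weights(corr).round(3))))
```

With the weaker correlations, the knee-and-hip metric carries noticeably less weight than PSI-90; once its correlations strengthen, its weight edges past PSI-90’s, even though nothing about the underlying surgeries or their clinical importance has changed.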
Hospitals see their individual ratings before CMS releases the information to the public, and the uproar over the new results led CMS to delay publication until February. The agency also modified the ratings so that PSI-90 again dominates; Rush was bumped from 3 stars to 4.
Adelman argues that rating shifts driven by small changes in correlations create “knife-edge” instability that renders the evaluation system meaningless for patients who might rely on it when choosing a facility for their care. Hospitals, which use the ratings to negotiate payments with insurance companies, cannot determine where to focus their improvement efforts. The ratings also affect a hospital’s reputation, which in turn affects patient volume and payor mix (an industry term for the distribution of more-profitable patients, who use private insurance, and less-profitable ones, who are on public insurance). And when patients are drawn to hospitals that rate higher but have worse outcomes, the overall health of people in an area suffers.