Every few months, another big headline is splashed across the mainstream media, touting the top 100 hospitals, the best cancer doctors, or your city’s number-one neurologist. What these stories miss is that a facility or physician often ranks in different places on different lists: fabulous on one, middling on another.
Why? Because the lists don’t use the same metrics or data sets. Even when they do, the results may still differ.
One famous study in Health Affairs in 2008 compared five websites that reported local hospital rankings of five diagnoses.1 "The sites assessed different measures of structure, process, and outcomes and did not use consistent patient definitions or reporting periods," the report noted. "Consequently, they failed to agree on hospital rankings within any diagnosis, even when using the same metric (such as mortality). In their current state, rating services appear likely to confuse, rather than inform, consumers."
Certainly what is out there isn’t being used, or at least isn’t being used to change minds. One study, released online in March, found that the comparative data on the Centers for Medicare & Medicaid Services’ (CMS) Hospital Compare site were not robust enough to show differences large enough to sway consumers’ opinions of hospitals.2
And a 2011 Cochrane review of multiple public quality reporting programs found no evidence that these programs changed consumer behavior, improved care, or changed provider behavior3: the three benefits that proponents of these programs say public access to quality metrics would deliver.
It is not that the wider healthcare community doesn’t think that transparency is a good idea, says Jennifer Faerberg, MHSA, the director of clinical transformation at the Association of American Medical Colleges (AAMC). The issue is whether what is out there is valid, and whether it meets the needs of the public by educating them, or whether it just makes things more muddled by presenting incomplete, irrelevant, or incorrect information.
"There has been a lot of variability and confusion across different reports," she says. Concerned members prompted the AAMC to convene an expert panel and develop a framework by which the public, providers, and even hospital board members can evaluate the various report cards, lists, and grading sites.
They came up with three domain areas — the purpose of the report, the transparency of the process by which the data were compiled, and the validity of the measures used. Under each of those domains is a list of qualities to be met.
Start with the purpose of the report. If it is supposed to be about patient safety, then every metric used should relate closely to patient safety or be an accepted proxy for it. If a measure is only tangentially linked to patient safety and not supported by evidence, then the measure selection does not support the stated purpose, Faerberg says. The purpose of the report should shine through in every metric chosen.
In the measure area, the AAMC principles state that all measures should be endorsed by the National Quality Forum (NQF) or an equally respected body. "This allows for rigorous review of the measure and methodology," she says. "Without that, you can’t be sure that it is an appropriate measure for quality. If a site is using NQF-endorsed measures, it says something. If it isn’t, that is a red flag that says this measure may not be a good indicator of quality of care."
Transparency is also important. An organization publishing a list of the best hospitals should be willing to share documentation of how it arrived at its scores, Faerberg notes. Being able to replicate scores is vital to scientific validity. If you want physicians, scientists all, to trust what you say about them, you have to let your data stand the test of scientific rigor.
Risk adjustment falls under the transparency and validity domain. "What is in place now is pretty minimal for what is available for outcomes-based measures," she says. "The NQF has a report out for comment now that includes changes to guidelines on risk adjustment that will include socio-demographic factors. Getting a robust and transparent risk adjustment is critical because each patient is unique, with their own set of characteristics."
"These principles represent the ideal," she says. "No one is there yet. But healthcare is very complex, and to distill it down to a letter grade or numeric rating is very difficult. It doesn’t present a full picture and by its very nature must limit the information it can convey. This tool can help users to evaluate what they are seeing on those sites."
In addition, Faerberg says she hopes it will lead to engagement with those who create the lists and reports. Some have already been in contact with the AAMC and have expressed a willingness to make changes along the lines suggested by the expert panel — something she finds very gratifying.
"We support the idea of transparency," Faerberg says. "The issue we have had is that there is no standardization. We want to be sure that sites are using the valid and tested metrics, so that you won’t have a situation where you look on one site and you’re wonderful and another and you’re terrible."
Every question answered seems to spawn two new ones. What Faerberg does know is that the destination is somewhere far down the road. And she thinks there will be other changes in the next few years. Some of them will be spurred by the AAMC Guiding Principles.
"I feel encouraged that some of the reporters of this information are willing to engage with stakeholders, including providers," she says. "I don’t know what a report will look like in five years. But there is a willingness by some to get us all to a better place."
For more information on this topic, contact Jennifer Faerberg, MHSA, Director of Clinical Transformation, Association of American Medical Colleges, Washington, DC. Email: firstname.lastname@example.org.