By Patrice Spath, RHIT
Forest Grove, OR
The Joint Commission on Accreditation of Healthcare Organizations’ ORYX project is impacting the way hospital caregivers evaluate performance. Ten years ago, very few data from external groups could be used for comparative purposes. Today, with all the different report card initiatives, such data are easier to find. Now quality managers are facing the challenge of sharing these data with administrative and medical staff leaders in a way that allows for accurate evaluation. Data presentation and analysis can be more effective when several preparatory steps are followed:
• Verify the accuracy of the data. Compare what you know happened to the data in the comparative report. Verify the numbers from your database or other input documents against the values in the reports. If you find data quality problems, be careful about presenting the data to administration and physicians. It’s best to wait until the data are corrected, if possible.
• If you are satisfied the data are accurate, look for significant variations (two standard deviations from the mean) at the global performance level. For example, is your facility’s cesarean rate higher or lower than the mean for peer facilities in the state? Is your overall pulmonary or cardiac mortality rate higher or lower?
• Look for significant variations at the DRG level. For example, is the vaginal birth after cesarean (VBAC) rate at your facility higher or lower than the mean for peer facilities in the state? Is your facility’s acute myocardial infarction death rate higher or lower?
• Focus on areas of interest to your organization. Although length of stay may not be a core measure for the Joint Commission, administrative and medical staff leaders are likely to be very interested in these statistics. How do your lengths of stay for various DRGs compare with those of other hospitals in your state?
• If no significant variations are found by comparing your performance to other facilities in your state, look at how you fare on the national level, at both the global and DRG levels. If no statistically significant variations are discovered in state and national comparisons, you’ve got some "Don’t we feel good about ourselves!" data. Share the information with administrative and physician leaders and other relevant groups. Remember, however, that every process can be improved. Even if the comparisons don’t show statistically significant variations, your organization may choose to work on improving performance in select DRGs or patient categories.
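The two-standard-deviation screen described in the steps above can be sketched in a few lines of Python. All rates below are hypothetical, not actual comparative data:

```python
# Sketch: flag a facility rate more than two standard deviations
# from the peer-group mean. Rates are hypothetical examples.

def flag_significant_variation(facility_rate, peer_rates, threshold=2.0):
    """Return the z-score and whether its magnitude exceeds the threshold."""
    n = len(peer_rates)
    mean = sum(peer_rates) / n
    variance = sum((r - mean) ** 2 for r in peer_rates) / (n - 1)
    std_dev = variance ** 0.5
    z = (facility_rate - mean) / std_dev
    return z, abs(z) > threshold

# Hypothetical cesarean rates (%) for peer facilities in the state
peers = [18.2, 20.1, 19.5, 21.0, 17.8, 19.9, 20.4, 18.7]
z, flagged = flag_significant_variation(26.5, peers)
print(f"z-score: {z:.2f}, significant variation: {flagged}")
```

The same screen can be run at the global level (overall mortality) or the DRG level (AMI deaths, VBAC rates) simply by changing which rates are passed in.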
If statistically significant variations are discovered, then you need to get people’s attention. Present the information in a way that clearly shows where variation exists and the extent of the variation.
People in your organization can react to significant variation on comparative reports in many different ways. Even if the variation is statistically significant, they may choose to ignore it. Even the most competent quality manager with the best data presentation can’t ensure that people will dig deeper into the cause of variations. It is important, however, to remind facility leaders that the Joint Commission expects an in-depth analysis be conducted when it is found that the facility’s performance varies significantly from peers.
It’s common for people to challenge the data validity when faced with unfavorable performance measurement results. If the data are proven to be accurate, then people may react by saying, "We look different because our patients are sicker!" Be prepared to respond with information about the severity-adjustment mechanisms used to risk-adjust the comparative data. If the comparison data are derived only from claims data, the risk-adjustment system will not account for every factor that impacts patient outcomes.
The outcomes of care depend on a complex combination of patient risk factors, such as:
• Age, sex, race, and ethnicity (demographics)
• Acute clinical stability
• Physical functional status
• Cognitive and psychosocial functioning
• Cultural and socioeconomic attributes
• Patient attitudes and preferences for outcomes
It is impossible to control for all risk factors. But knowing which risk factors have been accounted for in the data comparisons can help your organization interpret comparisons of outcomes across hospitals. It may be necessary to provide data that allow clinicians to look at dimensions of risk that are not adequately addressed in the comparative database.
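One common severity-adjustment mechanism, useful for illustrating this point, is the observed-to-expected (O/E) ratio, in which each patient's predicted risk comes from a severity model. A minimal sketch, with entirely hypothetical outcomes and predicted probabilities:

```python
# Sketch: observed-to-expected (O/E) mortality ratio. The predicted
# probabilities are hypothetical stand-ins for a severity model's output.

def observed_to_expected(outcomes, expected_probs):
    """O/E ratio: observed deaths divided by deaths predicted by the model."""
    observed = sum(outcomes)
    expected = sum(expected_probs)
    return observed / expected

# Each patient: actual outcome (1 = died) and model-predicted probability
outcomes       = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0]
expected_probs = [0.05, 0.40, 0.10, 0.02, 0.55, 0.08, 0.03, 0.12, 0.60, 0.05]

ratio = observed_to_expected(outcomes, expected_probs)
# A ratio near 1.0 suggests performance consistent with case-mix severity;
# well above 1.0 suggests more deaths than severity alone would predict.
print(f"O/E ratio: {ratio:.2f}")
```

A ratio like this only adjusts for the factors the severity model captures; dimensions the model omits, such as those listed above, remain unadjusted.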
People may react to unfavorable comparative data with the response, "So what if we look different? . . . The numbers are too small to be meaningful!" At the DRG or practitioner level, you are likely to be working with small numbers. When the N is less than 30, the variation may be real yet fail to reach statistical significance. In this instance, you have three choices:
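Tests designed for small samples, such as Fisher's exact test, can help here. As a minimal sketch, the function below computes a two-sided Fisher's exact p-value for a 2x2 table using only the Python standard library; the mortality counts in the usage example are hypothetical:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]]."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def hyper(x):
        # Hypergeometric probability of observing x events in row 1
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = hyper(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # Sum probabilities of all tables at least as extreme as the observed one
    return sum(hyper(x) for x in range(lo, hi + 1) if hyper(x) <= p_obs + 1e-12)

# Hypothetical counts: 3 deaths in 20 facility cases vs. 1 death in 20 peer cases
p = fisher_exact_2x2(3, 17, 1, 19)
print(f"two-sided p-value: {p:.3f}")
```

In practice, `scipy.stats.fisher_exact` and `scipy.stats.ttest_ind` provide vetted implementations of the two tests discussed here.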
Fisher’s exact test is useful in situations where the expected frequency of an occurrence is very small (fewer than five). The t-test can be used to compare the arithmetic means of two small sample populations to determine whether they differ significantly.
"So what if we look different? . . . It’s not affecting the overall quality of patient care," is another response often heard when people are faced with unfavorable performance results. Scatter diagrams are useful data display tools for responding to this reaction. A scatter diagram is a graphical technique used to analyze the relationship between two variables.
Two sets of data are plotted on a graph, with the Y-axis used for the variable to be predicted and the X-axis for the variable used to make the prediction. The graph is useful for illustrating the relationship between one variable and another and for showing the interrelationship of causes. For example, a scatter diagram could be used to show the relationship between decreased VBAC rates and patients’ dissatisfaction with their level of participation in treatment decisions. Additional information such as this can substantiate quality concerns that might otherwise not be evident in the comparative reports.
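The VBAC example can be sketched numerically. The paired values below are entirely hypothetical; the correlation coefficient summarizes the pattern a scatter diagram of these points would display:

```python
# Sketch: the data behind a scatter diagram relating VBAC rates (X, the
# predictor) to patient dissatisfaction (Y, the variable to be predicted).
# All values are hypothetical.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# X: monthly VBAC rate (%); Y: % of patients dissatisfied with their
# level of participation in treatment decisions (hypothetical pairs)
vbac_rates      = [30, 28, 25, 22, 20, 18, 15, 12]
dissatisfaction = [10, 12, 15, 14, 18, 20, 24, 27]

r = pearson_r(vbac_rates, dissatisfaction)
print(f"correlation: r = {r:.2f}")  # strongly negative in this example:
# dissatisfaction rises as the VBAC rate falls
```

Plotting the same pairs with any charting tool produces the scatter diagram itself; the correlation simply quantifies how tight the visual pattern is.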
The quality manager cannot force people to discover the cause of performance variations found in comparative reports. However, by anticipating how people will react to the information, the quality manager can come to the meetings prepared with responses.
The questions on the worksheet (see box, below) can help you predict issues that might be brought up during data presentations. Adequate premeeting preparation by the quality manager can go a long way toward increasing the chances that people will want to dig into the process to discover why performance is unfavorable.
Quality Manager’s Comparative Data Analysis Worksheet
• As I reviewed the comparative data, I identified the following issues that need further investigation or appear to be improvement opportunities:
• What questions might be asked when I present these data to physicians, nurses, and other groups?
• What information do I need to answer these questions?