Without appropriate data collection tools, hospitals will find it impossible to determine whether quality improvement (QI) programs are making a difference, says Peg Mason, RHIT, CPHQ, director of government quality improvement programs at the Iowa Foundation for Medical Care in West Des Moines, who oversees all QI projects for Medicare and Medicaid contracts in Iowa, Nebraska, and Illinois.
According to Mason, a range of mechanisms can be used to improve quality. She says the first step is to look at care processes, outcomes, and customer satisfaction as areas that offer opportunities for improvement. For example, hospitals can examine the use of aspirin or beta-blockers for patients with acute heart conditions, or periodic eye and foot exams for patients with diabetes.
Mason says improved outcomes objectives can include anything from reductions in average length of stay to decreased mortality rates. Customer satisfaction objectives can range from reduced waiting time to a determination of whether physicians offer adequate explanations of the treatment plan.
Once hospitals have identified their objectives, she says, they should determine what their quality indicators are going to be. While the best approach is the development of a multidisciplinary team, she says it also pays to involve hospital staff who are not directly part of the process and can offer divergent perspectives.
Mason says that once a quality indicator has been selected, hospitals must determine how they are going to evaluate the process. "You may have a team of people involved in the process who will then tear it apart in each step and find areas of improvement," she explains. For example, if the mechanism is the early administration of aspirin, responsibility may fall on the emergency department or the intensive care unit.
Mason highlights several possible interventions, including education for health care professionals or consumers as well as system or process changes such as care maps or care protocols. For example, standing orders may be revised so that patients receiving immunizations no longer require a physician's order.
"Maybe your hospital protocol says you need one, but that may be something that you can set up that nurses can administer," she says.
Hospitals also might consider some type of electronic or passive reminder system, such as medical record stickers on physician order sheets that remind physicians of best practices for particular conditions, says Mason. She notes that computerized systems can be programmed to prompt for certain conditions. Hospitals may also want to review employee competencies in a particular treatment area to recognize accomplishments, she adds.
Once hospitals reach the re-evaluation stage, Mason says, they must allow sufficient time to determine whether the intervention has been effective. That includes periodic monitoring and the establishment of checkpoints.
With those steps complete, Crystal Kallem, RHIT, manager of government quality improvement programs at the Iowa Foundation, says the first data-collection task is to determine which data elements must be collected. "To identify those data elements, you must identify the specific pieces of information that are going to be used to calculate your quality indicator rates," she says. "You should only collect the specific information necessary for the analysis of the quality indicator rates."
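As a minimal sketch of what Kallem describes, a quality indicator rate typically reduces to a numerator over a denominator drawn from the collected data elements. The indicator (aspirin for acute myocardial infarction patients), field names, and record values below are illustrative assumptions, not taken from the article.

```python
# Hypothetical sketch: computing a quality indicator rate from abstracted
# data elements. The indicator, field names, and values are illustrative.
records = [
    {"record_id": 1, "ami_diagnosis": True,  "aspirin_within_24h": True},
    {"record_id": 2, "ami_diagnosis": True,  "aspirin_within_24h": False},
    {"record_id": 3, "ami_diagnosis": False, "aspirin_within_24h": False},
    {"record_id": 4, "ami_diagnosis": True,  "aspirin_within_24h": True},
]

# Denominator: records meeting the inclusion criteria (AMI diagnosis).
denominator = [r for r in records if r["ami_diagnosis"]]
# Numerator: denominator records that also meet the indicator.
numerator = [r for r in denominator if r["aspirin_within_24h"]]

rate = len(numerator) / len(denominator) if denominator else 0.0
print(f"Indicator rate: {rate:.0%}")
```

Note that only the two fields needed for the rate are collected, in line with Kallem's advice to gather no more than the analysis requires.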
Kallem warns that if hospitals collect additional information, the process may become too cumbersome, leaving them with a lot of information that is difficult to present when analyzing or evaluating quality indicators.
It is also time-consuming to collect all of that information, she adds. "You are better off using your resources to implement interventions related to your quality indicators rather than exhausting all your resources in collecting additional information that you don’t need."
Kallem says it is important to develop detailed definitions to ensure the consistency of the data being collected. In reviewing the components of data definitions, she says hospitals must start by stating, in specific detail, what information should be obtained from the medical record so that hospital personnel with diverse technical knowledge can abstract it.
She adds that detailed options for the answers that will be selected during the abstraction process should also be developed. Also, Kallem says hospitals should include the location in the medical record where that information can be obtained. For example, if information will be obtained from patient history and physical or the nurses’ or physicians’ notes, that should be specified.
All inclusion and exclusion criteria that would be used for abstracting the project information should also be included, Kallem says. That includes any synonyms or abbreviations that would be allowed within the medical record. "It is important to spell out all of that information," she warns. If necessary, it also makes sense to include attachments for abstractors to use such as a list of antibiotics or ACE inhibitors or other criteria that might have severity classifications.
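The definition components Kallem lists, such as what to abstract, answer options, source locations in the record, inclusion and exclusion criteria, and allowed synonyms, can be captured in a structured form. The following is a hypothetical sketch; the class, field names, and the example element are illustrative assumptions, not part of her program's actual tool.

```python
# Hypothetical sketch of a data element definition for an abstraction tool.
# All names, criteria, and values are illustrative, not from the article.
from dataclasses import dataclass, field

@dataclass
class DataElementDefinition:
    name: str                    # what to abstract, in specific detail
    source_locations: list       # where in the medical record to look
    allowed_values: list         # detailed answer options for abstractors
    inclusion_criteria: str
    exclusion_criteria: str
    synonyms: list = field(default_factory=list)  # allowed abbreviations

aspirin_on_arrival = DataElementDefinition(
    name="Aspirin administered within 24 hours of arrival",
    source_locations=["ED notes", "physician orders", "medication record"],
    allowed_values=["yes", "no", "contraindicated", "unable to determine"],
    inclusion_criteria="Principal diagnosis of AMI",
    exclusion_criteria="Transferred in from another acute care facility",
    synonyms=["ASA", "acetylsalicylic acid"],
)
print(aspirin_on_arrival.name)
```

Writing definitions this way makes each of Kallem's required components an explicit, reviewable field rather than an implicit convention among abstractors.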
In the case of her program, Kallem says that once hospitals have developed a preliminary draft of their data-collection tools, her organization typically conducts an alpha test: two different abstractors each abstract the same 10 to 25 medical records, and the results are compared to determine whether any mismatches occurred during the abstraction process.
Once mismatches are identified, she says the two abstractors discuss whether additional detail must be added to the definitions of the data elements in the abstraction tool. She says another reason that it is helpful to conduct an alpha test is that unexpected information in the medical record may be uncovered.
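The mismatch check at the heart of the alpha test can be sketched as a simple element-by-element comparison of the two abstractors' answers. The record IDs, element names, and values below are hypothetical.

```python
# Hypothetical sketch of the alpha-test comparison: two abstractors review
# the same records, and disagreements are flagged for discussion. The
# record IDs, element names, and answers are illustrative.
abstractor_a = {1: {"aspirin": "yes", "beta_blocker": "yes"},
                2: {"aspirin": "no",  "beta_blocker": "yes"}}
abstractor_b = {1: {"aspirin": "yes", "beta_blocker": "no"},
                2: {"aspirin": "no",  "beta_blocker": "yes"}}

mismatches = []
for record_id, answers_a in abstractor_a.items():
    for element, value_a in answers_a.items():
        value_b = abstractor_b[record_id][element]
        if value_a != value_b:
            # A disagreement suggests the element's definition may need
            # additional detail in the abstraction tool.
            mismatches.append((record_id, element, value_a, value_b))

for record_id, element, a, b in mismatches:
    print(f"Record {record_id}, '{element}': A={a!r} vs B={b!r}")
```

Each flagged disagreement becomes an agenda item for the abstractors' discussion about whether the data element definitions need more detail.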
After any modifications to the data-collection tool have been made, Kallem says, a beta test of the tool is conducted. At that point, several abstractors abstract 25 to 50 medical records, depending on the complexity of the data-collection tool and the availability of the specific types of records.
The results are then compared, and any mismatches are identified. The abstractors determine if any additional detail is required. "At this point, there should not be many modifications to the tool itself," she says. Instead, this phase should be confined to adding certain edits to the different variables. However, if there are substantial modifications such as changing the definition of a data element, Kallem says a second beta test should be conducted at this point.
Prior to distributing data-collection tools on a statewide basis to all of its providers, Kallem says her program asks several providers in each state to conduct a pilot test of their data-collection tools to assure that the tools are user-friendly for facility abstractors and to ensure the definitions are easy to understand and meaningful.
As part of the distribution process, after the data-collection tools are distributed on a statewide basis, Kallem says her program provides abstraction training for providers using several different methods to ensure that data are collected consistently and accurately so they can be compiled for statewide analysis.