Needlestick benchmark can be safety 'snapshot'
Internal comparisons may be most helpful
Suppose needlesticks at one of your health care facilities rose this year compared to last year. That doesn't sound so good. Clearly things are not going in the right direction. But you need more information to understand what's happening. You need a benchmark for your needlesticks.
First, you need to calculate a needlestick rate. If the raw count rose but the number of procedures or occupied beds grew even faster, then your rate actually might have gone down.
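A quick sketch makes the point. The figures below are hypothetical, chosen only to show how a rising count can coexist with a falling rate:

```python
def rate_per_100_beds(needlesticks, occupied_beds):
    """Needlesticks per 100 average occupied beds."""
    return needlesticks / occupied_beds * 100

# Hypothetical facility: injuries rose from 40 to 44, but the average
# occupied-bed count rose from 500 to 600 over the same period.
last_year = rate_per_100_beds(40, 500)   # 8.0 per 100 beds
this_year = rate_per_100_beds(44, 600)   # about 7.3 per 100 beds
print(f"{last_year:.1f} -> {this_year:.1f}")
```

Even though the count went up 10%, the rate dropped, because the denominator grew faster than the numerator.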
You also need to dig a little deeper into the dynamic of your sharps safety program. "If you're going to assess how successful your program is, the number of injuries has to be just one component," says Angela K. Laramie, MPH, epidemiologist with the Sharps Injury Surveillance Project in the Massachusetts Department of Public Health in Boston. "Evaluate whether your injury reporting has changed in any way or whether frontline employees are getting more involved in device selection.
"If you use injuries as the sole measure of success of a program, then you risk driving reporting underground," Laramie says. "You risk that people aren't going to want to report their injury, and that's counterproductive because we miss the opportunity to provide appropriate care to injured employees and to learn more about those injuries so we can prevent them in the future."
Yet in the quest for injury reduction, rates can be useful if they're properly evaluated. A comparison with a national database, such as the EPINet network of the International Health Care Worker Safety Center at the University of Virginia in Charlottesville, can bolster a case for more resources to tackle sharps safety. Associate Director Jane Perry, MA, says, "Oftentimes, hospital administrators will want to know, 'How do we compare to other institutions around the country?'"
Numerator: Number of needlesticks
So how do you calculate your rates? What, exactly, should be in your numerator and denominator?
Clearly, your numerator will be the number of needlesticks, but you might want to look specifically at units such as med-surg, emergency department, or the intensive care unit. Your denominator will vary depending on what you are trying to measure.
Do you want to know who is sustaining the most needlesticks? You'll need to use FTEs to compare occupational groups, such as nurses, phlebotomists, and physicians. How frequently do they occur? In some units, you may be able to use the number of procedures. Which devices or device categories are associated with the most needlesticks? This is a tricky question, but you might be able to obtain purchasing data from materials management.
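The choice of denominator matters because it can reorder the groups entirely. In the sketch below, all of the counts, groups, and FTE figures are invented for illustration; the point is that the group with the most raw injuries need not have the highest rate:

```python
# Hypothetical injury counts and full-time equivalents (FTEs) by
# occupational group -- illustrative numbers, not real surveillance data.
injuries = {"nurses": 18, "phlebotomists": 6, "physicians": 9}
ftes = {"nurses": 420, "phlebotomists": 35, "physicians": 150}

# Injuries per 100 FTEs for each group.
rates = {group: injuries[group] / ftes[group] * 100 for group in injuries}

for group, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{group}: {rate:.1f} per 100 FTEs")
```

Here nurses sustain the most injuries in absolute terms, but phlebotomists have by far the highest rate per 100 FTEs, which is the kind of pattern raw counts alone would hide.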
Those internal markers will provide information for action. "You should benchmark against your own data and really look at yourself over time," says Laramie. "It's important to take a broad assessment of what's happening hospitalwide, but I think it's more important to take a few devices and really look at where those injuries are occurring."
Public hospitals in Texas and all acute care and chronic care hospitals in Massachusetts are required to report their bloodborne pathogen exposures. These state programs, like other surveillance databases, collect information per 100 occupied beds. For example, if you had 350 needlesticks in one year and an average of 800 occupied beds, your rate would be 350 divided by 800, multiplied by 100, or about 44 per 100 occupied beds.
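The arithmetic in that example looks like this, using the article's own figures:

```python
# Per-100-occupied-beds rate, using the numbers from the example above.
needlesticks = 350
average_occupied_beds = 800

rate = needlesticks / average_occupied_beds * 100
print(rate)  # 43.75, i.e. roughly 44 needlesticks per 100 occupied beds
```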
"You can use the national-or state-based data to get a sense of what is going on," says Laramie. But she cautions that you need to compare yourself to similar facilities. EPINet, for example, reports its data for teaching and nonteaching hospitals.
Perry says, "Teaching hospitals always have higher rates because they have more trainees and they're often doing more intensive procedures using more needles."
Different populations, procedures
Different patient populations might mean hospitals perform different types of procedures, and other factors ranging from geography to size may also influence needlesticks, Laramie says.
It also is important not to view a national or state benchmark as a goal or best practice. It is just a snapshot of current performance. If your rate is better than the average, that doesn't necessarily mean your rate is "good" or "acceptable." Your goal should be continual improvement.
"I don't want anybody to say in the state of Massachusetts 40% of injuries happened to nurses and at our hospital it was only 30%, so we're doing really well," Laramie says. "I don't like our data to be used as a benchmark. The benchmark assumes an acceptable level or a goal. My data are not a goal. They are a picture of what exists."
After all, the goal is not to do better than most hospitals on needlesticks. The goal, notes Laramie, is zero.
In fact, a low number of needlesticks actually can be a bad thing, if the numbers are low because health care workers are reluctant or uninformed about reporting, she notes. "Get employees involved as much as you can [in sharps safety]," she says. "If you see a low number of injuries, be aware that it could say something about the underreporting in the facility."