When 30% of patients respond to a mailed survey, group practice administrators and physicians often consider that a success. But relying on answers from fewer than half of the patients surveyed may lead to faulty conclusions, according to response rate research.
Patients who fill out surveys right away and send them in respond differently from those who complete the questionnaire after a reminder, says William Barkley, PhD, president of Effective Interventions, a Nashville, TN-based consulting firm that specializes in customer satisfaction measurement and improvement.
Those early responders are not necessarily the complainers. In fact, they may be more satisfied with care than those who respond later. But they are not representative of the whole sample, says Barkley, former chief operating officer of NCG Research in Nashville.
In a study of response rates at three hospitals, Barkley compared early responders (the first 30%) with all respondents (average response rate of 58%). Early responders were more positive at one hospital, more negative at another, and showed differences that followed no consistent pattern at the third.1
A group practice or hospital that sets quality improvement priorities on the basis of response rates below 50% could see dramatic swings from one quarterly survey to the next that reflect sampling variation, not actual improvement or decline, says Barkley.
His conclusion: You’d be better off conducting no surveys at all than relying on low response rates.
"Bad data are worse than no data," Barkley says. "If they point you in the wrong direction, it’s worse than having [no information]."
Impossible, you may say. How can you get a response rate of 50% or more?
In a mailed survey, Barkley suggests a minimum of a first mailing, a reminder postcard, and a second mailing to nonrespondents. "With that, you can typically get 50% or 60% or more," he says.
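The multi-wave protocol Barkley describes can be sketched in a few lines: track who has responded, and send each follow-up wave only to the nonrespondents. The patient IDs and wave counts below are illustrative, not from the article.

```python
# Sketch of a three-wave mail survey: first mailing to everyone,
# then a reminder postcard and a second questionnaire that go only
# to nonrespondents. All names and numbers here are hypothetical.

def next_wave(sample, responded):
    """Return the patients who should receive the next mailing."""
    return [p for p in sample if p not in responded]

def response_rate(sample, responded):
    """Cumulative response rate for the sample so far."""
    return len(responded) / len(sample)

sample = [f"patient_{i}" for i in range(100)]

# Wave 1: everyone gets the first questionnaire; suppose 30 reply.
responded = set(sample[:30])

# Wave 2: reminder postcard goes to the 70 nonrespondents.
wave2 = next_wave(sample, responded)
responded |= set(wave2[:15])  # suppose 15 more reply

# Wave 3: second questionnaire to the remaining nonrespondents.
wave3 = next_wave(sample, responded)

print(len(wave2), len(wave3), response_rate(sample, responded))  # 70 55 0.45
```

The point of the structure is that each wave shrinks: in this hypothetical run, cumulative response climbs from 30% to 45%, and further waves or callbacks would push it toward the 50%-plus range Barkley recommends.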
In telephone surveys, Barkley makes six attempts to reach each respondent.
That may sound prohibitively expensive. But at the Lahey Hitchcock Clinic at the Dartmouth Medical School in Hanover, NH, Eugene Nelson, DSc, director of quality education, measurement, and research, uses small, representative samples and tracks responses over time.
For example, at one clinic site with 250 physicians, four randomly sampled patients receive surveys each week. New physician-specific patient satisfaction measures will involve sampling at least 50 patients twice a year per physician.
With repeated mailings and trained phone interviewers who make six to 10 callbacks, the Lahey Hitchcock Clinic has a response rate of 60% to 80%.
Nelson discovered the importance of response rates when he validated a short inpatient questionnaire for the Hospital Corp. of America. Surveys with response rates of 10% to 40% yielded more ratings of excellence than did surveys with higher response rates.
"Once you got to about 50%, it looked like you were getting trustworthy findings," says Nelson. "A good standard would be a 60% response rate."
Nelson offers the following advice about attaining and evaluating higher response rates on patient satisfaction surveys:
• Make the letter appealing.
The cover letter should be attractively printed and carry a real signature and a real stamp, so it doesn’t look like junk mail.
"The letter gives people a reason why they should take part," says Nelson, such as how the questionnaire will help improve care and service.
"You follow the first questionnaire with a reminder postcard seven to 10 days after the mailing," he says. "If you haven’t heard from a person by day 20, there’s a second letter, with another copy of the questionnaire, that is briefer, encouraging response and telling them why it’s important to hear from everybody."
• Use a system of sampling.
For example, you may choose to survey four out of every 20 patients seen. If you are targeting an issue that affects only certain patients, such as older women, then sample within that population.
Nelson then averages the results and plots satisfaction measures over time, with each dot representing one week or one month. "What you get is a very predictable pattern," he says. "If things start going sour, you can see it very quickly."
• Consider your goals.
"If your intent is to give everybody you see an opportunity to comment on their care, then you don’t worry about response rates," says Nelson. You survey everyone, but you should realize that those who respond may not be representative of all your patients.
If you want to use patient responses as an outcomes measure to guide quality improvement, then you should seek high response rates, he says.
"If you get a low response rate, you’re throwing good money after bad" with misguided improvement projects, he says. "What you’re getting is misinformation."
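Nelson’s sampling-and-tracking approach described above (survey a few patients out of every block seen, then plot one averaged satisfaction score per week) can be sketched as follows. The block sizes, scores, and helper names are illustrative assumptions, not details from the article.

```python
import random

# Sketch of systematic sampling plus weekly trend tracking:
# randomly choose a few patients from each block of 20 seen,
# then average satisfaction scores per week so each plotted dot
# represents one week. All numbers here are hypothetical.

def sample_per_block(patients, block_size=20, per_block=4, seed=0):
    """Randomly choose `per_block` patients from each block of `block_size`."""
    rng = random.Random(seed)
    chosen = []
    for start in range(0, len(patients), block_size):
        block = patients[start:start + block_size]
        chosen.extend(rng.sample(block, min(per_block, len(block))))
    return chosen

def weekly_averages(scores_by_week):
    """One dot per week: the mean satisfaction score of that week's respondents."""
    return {week: sum(s) / len(s) for week, s in scores_by_week.items()}

patients = list(range(100))
surveyed = sample_per_block(patients)
print(len(surveyed))  # 20 of 100 patients surveyed

trend = weekly_averages({1: [4, 5, 5], 2: [3, 4], 3: [5, 5]})
print(trend)
```

Plotting the weekly averages in order gives the "very predictable pattern" Nelson describes; a run of points drifting downward is visible quickly, even with small samples, because the sampling scheme keeps each week comparable to the last.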