Expert advice to prevent regulatory audit findings and improve compliance
Use checklists and audit tools to assist with QA efforts
Clinical trial research teams can run into regulatory trouble when too few checks and balances are in place to catch mistakes and omissions.
One way to ensure a research site complies with regulations and policies is through research tools, checklists, and templates, experts say.
Another strategy is to examine the Food and Drug Administration's (FDA's) research audit findings, which often will mirror an institution's own experiences.
"We took the FDA findings in 2004 and compared them to what we found in our institution, and we found they run mostly parallel," says Susan Torok-Rood, MSJ, BSN, CCRP, a senior compliance analyst with the University of Medicine and Dentistry of New Jersey (UMDNJ) in Newark, NJ.
For example, studies sometimes are not conducted according to the IRB-approved protocol, Torok-Rood notes.
"That is a problem, and it ends up a compliance issue," she says. "This is research without IRB approval."
Other common FDA findings are trials that have incomplete or inconsistent documentation, problems with informed consent, and under-reporting of protocol deviations or adverse events, Torok-Rood says.
"Another common mistake is inappropriate delegation of study-related activities, including tasks or procedures," she says.
There might be a nurse who is performing a physical assessment when it's something the protocol states that the principal investigator should be doing, says Tracie Witte-Saunders, RN, MS, CCRC, director of nursing at the New Jersey Medical School (NJMS) University Hospital Cancer Center in Newark. Witte-Saunders and Torok-Rood were scheduled speakers on the topic of common audit findings at the Association of Clinical Research Professionals' Global Conference and Exhibition, held April 20-24, 2007, in Seattle, WA.
"Or a secretary might take trial subjects' blood pressure, which would be inappropriate delegation of tasks on a study," Torok-Rood says.
Research professionals sometimes forget that the IRB application asks investigators to list all of the people on the protocol who will be doing procedures, Witte-Saunders says.
Then the IRB reviews that list of names for credentials that would qualify the person to perform that procedure, Torok-Rood adds.
"At this site, the IRB requires each person working on the protocol to have some kind of IRB training," Witte-Saunders says. "We have a Web site that provides training for investigators and staff on how to do research, and they have to complete the Web site program before they can be listed on an IRB application."
When sites have problems with research without IRB approval, it's usually something subtle, Torok-Rood notes.
Perhaps the principal investigator decided to change a procedure in the study by adding an EKG or MRI procedure to the study, she says.
"They think of this as clinically necessary, so it's not a big deal," Torok-Rood explains. "But it changes the design of the study, and it may impact patient safety in a way that the IRB or any regulatory authority might not accept."
So any change, no matter how seemingly innocuous, requires IRB approval.
This is difficult for physician investigators to grasp because, as clinicians, they make decisions and changes many times a day based on their own judgment; in research, however, there is a higher standard, and these changes need oversight, Torok-Rood adds.
"The problem with even adding a study procedure is that the design of the study may impact participant safety, and that's what the IRB is all about," she says. "So if you add procedures or subtract procedures they need to know ahead of time, so they can evaluate the safety protocol, risk-benefit ratio."
When investigators forget to inform the IRB of these changes, it's up to the study coordinator to recognize the change and see that it is submitted to the IRB.
Also, deviations from the protocol, however slight, must be documented and reported.
For instance, an investigator might decide that a patient with hemoglobin of 9.9 can be enrolled in a protocol when the eligibility criteria say the minimum is hemoglobin of 10, Witte-Saunders notes. "This could impact the patient and study," she says.
When these deviations from the protocol are discovered, they should be logged and explained. For example, a site can design a deviations/violations log that gives each tracked data element its own column.
The deviations/violations log assists clinical research professionals in assessing whether a specific incident needs to be reported, Torok-Rood says.
"You can sort it out based on this tool, which is basically an Excel spreadsheet," she adds.
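A log like the one described can be sketched in a few lines of Python using the standard library's `csv` module. The column names and sample entry below are illustrative assumptions, not the UMDNJ template, and the "sorting out" step is shown as a simple filter for unreported deviations:

```python
import csv

# Illustrative columns for a deviations/violations log; the article does not
# reproduce the actual template's column list, so these are assumptions.
COLUMNS = ["date", "subject_id", "protocol_section", "description",
           "category", "reported_to_irb"]

rows = [
    {"date": "2007-03-01", "subject_id": "001",
     "protocol_section": "Eligibility",
     "description": "Enrolled with hemoglobin 9.9 when the minimum is 10",
     "category": "deviation", "reported_to_irb": "no"},
]

# Write the log so it can be opened as a spreadsheet.
with open("deviations_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)

# "Sorting it out" with the tool: e.g., surface deviations not yet reported.
unreported = [r for r in rows if r["reported_to_irb"] == "no"]
print(len(unreported))  # -> 1
```

In practice the same filtering and sorting is done directly in Excel, as Torok-Rood describes; the point of the sketch is only that one row per incident, with a column per data element, makes that triage mechanical.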
Likewise, an adverse event tracking tool can be created as an Excel spreadsheet with columns for causality and other pertinent lab and physical exam data.
"Instead of having the adverse events in the progress notes, have the front of the chart with [the tool listing] each of these events, including when the adverse event happened and the worst grade," Torok-Rood says.
"This helps to capture the causality of the event," Witte-Saunders says. "Patients might be taking five different drugs on a study, and it's the doctor's responsibility to say it could possibly be, or probably not at all related, to the study drug."
It's the study nurse or research coordinator's role to make certain all the documentation is consistent, and the adverse event tracking form makes this easier, she adds.
Other tools also will assist clinical trial sites in improving compliance, Witte-Saunders says.
"We've developed different tools over time, so that when a protocol is submitted to the IRB, and we get approval, there are a lot of things we do behind the scenes," Witte-Saunders explains. "So, as soon as we get the IRB approval, we should be able to enroll patients the next day."
Among the tools is a clinical research organization pre-entry checklist, which provides a "yes" or "no" response for each of the complete initial work-up items.
Another important tool is the eligibility checklist, which clearly shows the inclusion and exclusion criteria, with each item in its own row.
For instance, the inclusion criteria can be broken into detailed items, each followed by "yes" and "no" responses, as in an example eligibility checklist Witte-Saunders uses for educational purposes.
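The structure of such a checklist, one row per criterion with a yes/no answer, can be sketched as follows. Only the hemoglobin minimum of 10 comes from the article's earlier example; the age criterion and the function name are illustrative assumptions:

```python
# Minimal sketch of an eligibility checklist: each inclusion criterion is a
# named test, and the subject is eligible only if every test answers "yes".
def check_eligibility(subject, criteria):
    """Return (eligible, per-criterion yes/no answers)."""
    answers = {name: test(subject) for name, test in criteria.items()}
    return all(answers.values()), answers

criteria = {
    "hemoglobin >= 10": lambda s: s["hemoglobin"] >= 10.0,  # from the article
    "age >= 18":        lambda s: s["age"] >= 18,           # assumed criterion
}

# The borderline case discussed earlier: hemoglobin 9.9 fails the minimum.
eligible, answers = check_eligibility({"hemoglobin": 9.9, "age": 54}, criteria)
print(eligible)  # -> False
```

Keeping each criterion as its own row (or, here, its own named test) is what makes the failure visible: the checklist shows exactly which item blocked enrollment, rather than a single overall judgment.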
The beauty of using tools is that they make the process consistent and accountable.
"We can have multiple hands on one patient's chart, but when everyone is using the same tool, the information is being captured consistently," Witte-Saunders says. "When you are taking care of patients, you are telling a story, and the story is only as good as the details you put in there."
Stories with gaps may result in questions or an audit by the FDA, she adds.
One of the best ways to ensure a site's documentation has no gaps and will meet regulatory standards is through the use of a clinical research audit checklist. This could be a tool of several pages, with detailed categories and items for which the site coordinator checks "yes," "no," or "not applicable," plus a column for comments. (See sample page from NJMS-UH Cancer Center's Clinical Research Audit Checklist.)
For example, the audit checklist used by the NJMS University Hospital Cancer Center includes six audit items under the "Informed Consent" category.
With the audit checklist, clinical research coordinators can conduct their own quality assurance process on the study and make certain that all of the time points have been completed, Witte-Saunders suggests.
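A self-audit pass over such a checklist amounts to scanning for "no" answers, since each "no" is a finding the coordinator must resolve before an external auditor sees it. The items below are illustrative, not the NJMS-UH form:

```python
# Sketch of a yes/no/N-A audit checklist with a comments column, as described
# in the article; the items themselves are assumed examples.
checklist = [
    {"item": "Consent signed and dated before any study procedure",
     "answer": "yes", "comment": ""},
    {"item": "Current IRB-approved consent version used",
     "answer": "no",  "comment": "v3 used; v4 approved prior to visit"},
    {"item": "Assent obtained (pediatric subjects)",
     "answer": "n/a", "comment": "adult study"},
]

# The coordinator's QA pass: every "no" is an open finding; "n/a" is not.
findings = [row for row in checklist if row["answer"] == "no"]
for row in findings:
    print(f'FINDING: {row["item"]} ({row["comment"]})')
print(len(findings))  # -> 1
```

Distinguishing "no" from "not applicable" matters here: treating every non-"yes" as a finding would flood the review with noise, while the three-way answer keeps the findings list limited to genuine gaps.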
The checklists should be seen as templates that are changed as needed.
"My staff modify the templates to each protocol," Witte-Saunders says.
With the proper tools, clinical research coordinators can handle every study more easily and more efficiently, Torok-Rood says.