Data monitoring can affect risk management
The information age, and the ease with which modern computers can collect and sort through data, offers important advantages for managing risk during the clinical trial process.
New technology also can help highlight and separate good data from poor data, even as a clinical trial is under way, says Christopher Gallen, MD, PhD, vice president and chief of operations for clinical research and development at Wyeth in Collegeville, PA.
By taking advantage of existing technology, a clinical trial sponsor can check incoming data during a trial and monitor for outliers and high-risk findings, he explains. "Then we can contact the clinical trial staff to make certain that problems that look like a medical concern are addressed," Gallen says.
Monitoring a trial’s data also is a good way to make certain that poor trial conduct does not generate bad data that would make the trial look like a failure when it really is a success, he notes.
Gallen highlights some other strategies developed at Wyeth that are designed to help improve the clinical trials process and manage risk:
• Design better inclusion-exclusion criteria. The idea is to develop inclusion-exclusion criteria based on what physicians say they see in their patient populations, Gallen says.
"We’re trying to work with clinicians who see a lot of patients with a given disease and have them look at the exclusion/inclusion of trials and compare these to people who come into their office," he explains. "We have the clinicians give us a report that tells us of the 100 people they saw in a two-week period for a given disorder."
The reports will outline how many of these patients would be excluded by a set of potential criteria and whether the patients indicated any interest in participating in such a trial, Gallen adds.
This way, if it appears that a large percentage of potential subjects would be eliminated by one criterion, then that criterion could be modified or eliminated before the trial begins to enroll subjects, he notes.
"This is the way we can give our drug the maximum chance of having a successful trial," Gallen says.
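The screening report described above can be sketched in a few lines of code. This is a hypothetical illustration, not Wyeth's actual reporting tool: the patient records and the draft criteria (an age cap and insulin use) are invented for the example.

```python
# Hypothetical sketch of the clinician screening report Gallen describes:
# for each draft criterion, tally how many recently seen patients it
# would exclude. Patient data and criterion names are illustrative.

def exclusion_report(patients, criteria):
    """For each draft criterion, count the patients it would exclude.

    patients: list of dicts describing patients seen in the report period.
    criteria: dict mapping a criterion name to a predicate that returns
              True when the patient would be EXCLUDED by that criterion.
    Returns {criterion: (n_excluded, percent_excluded)}.
    """
    total = len(patients)
    report = {}
    for name, excludes in criteria.items():
        n_excluded = sum(1 for p in patients if excludes(p))
        report[name] = (n_excluded, 100.0 * n_excluded / total)
    return report

# Example: 100 patients seen over a two-week period for a given disorder.
patients = [{"age": 40 + i % 45, "on_insulin": i % 5 == 0} for i in range(100)]
criteria = {
    "age > 75": lambda p: p["age"] > 75,
    "insulin use": lambda p: p["on_insulin"],
}
for name, (n, pct) in exclusion_report(patients, criteria).items():
    print(f"{name}: excludes {n}/100 ({pct:.0f}%)")
```

A criterion that excludes a large share of otherwise eligible patients would stand out immediately in such a report and could be reconsidered before enrollment opens.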
• Create a strategy to reduce risk of trial failure. "The worst trial outcome is when you can’t tell from the data whether a drug succeeded or failed," Gallen says.
Now with new data management technology, it’s possible to reduce the risk of this outcome by conducting a statistical quality review of the data, he says.
This is how it works: A study is designed to look at patient outcomes from a couple of different measures, including a physician’s subjective rating of whether the patient has improved or gotten worse and an objective tool that uses various criteria to measure the same thing, Gallen explains.
If both the physician and the rating tool show the same results, then it would appear that the trial is working. But if one measure found one result and the other found a different result, then this would be a problem, he says.
"If you plot out physicians’ ratings and the tool’s ratings, then you should see some constant relationship between the two, with some variation," Gallen says. "But if you do such a scatter plot, you also will see that in some studies at some centers there are patients that the clinician thinks are getting much better, but the rating scale is not showing it."
This shows that someone is making a mistake and it’s either the physician or the person administering the rating tool, he notes.
"Most of the time, you wouldn’t find out about this problem until the trial is over, but we’re looking at a way to automatically detect those kinds of discrepancies early in the trial," Gallen says.
Once the problem is discovered, the investigator and clinical staff can be trained to improve the rating process and new centers beginning the trial can be given improved instructions on using the rating systems, he explains.
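The statistical quality review Gallen outlines amounts to checking that the two outcome measures track each other and flagging records that break the relationship. A minimal sketch, under invented data and an assumed two-standard-deviation cutoff (the article does not specify the actual statistical method):

```python
# Minimal sketch of the quality review described above: the physician's
# subjective rating and the objective tool's score should track each
# other, so flag records where the disagreement is an outlier.
# Data, scale, and the z-score cutoff are illustrative assumptions.
import statistics

def flag_discrepancies(records, z_cutoff=2.0):
    """records: list of (center, physician_rating, tool_score) tuples.
    Returns the records whose physician/tool disagreement is an outlier
    relative to the spread of disagreements across all records."""
    diffs = [phys - tool for _, phys, tool in records]
    mean, sd = statistics.mean(diffs), statistics.stdev(diffs)
    flagged = []
    for (center, phys, tool), d in zip(records, diffs):
        if sd > 0 and abs(d - mean) / sd > z_cutoff:
            flagged.append((center, phys, tool))
    return flagged

records = [
    ("site_A", 3, 3), ("site_A", 4, 4), ("site_A", 5, 4),
    ("site_B", 2, 2), ("site_B", 4, 3), ("site_B", 3, 3),
    ("site_C", 5, 4), ("site_C", 2, 2),
    ("site_C", 8, 2),  # clinician sees big improvement; the tool does not
]
print(flag_discrepancies(records))  # → [('site_C', 8, 2)]
```

Run against incoming data while the trial is still under way, a check like this points monitors at the specific center where rater retraining is needed, rather than leaving the discrepancy to surface after the trial is over.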
• Produce better dosing ranges for a new trial. Another strategy to improve clinical trials could be to use a process called adaptive randomization, Gallen says.
If researchers plan a dosing-range trial, adaptive randomization could spare enrolled subjects from participating in a study arm that produces no useful data.
For example, suppose that early in a study a statistical analysis shows that one of the three or four arms in a dosing-range investigation is making subjects worse than placebo, Gallen says. If the analysis also shows that this dose is of no use to subjects and cannot improve enough to be considered helpful no matter how many additional people receive it, then it would be a good idea to drop the arm well before the study concludes.
"Adaptive randomization is a computerized, blinded way to look at incoming results on a trial," he says. "If a study arm has no chance of success, then you should close the arm and enroll the subjects in other arms."
This is a more efficient use of subjects’ and investigators’ time.
"So the idea of adaptive randomization is that it’s a way of testing each arm to the point where you know that it doesn’t work or you know that it does work, and it gives the arm the maximum rigorousness of a test," Gallen says.
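The futility logic behind this can be sketched simply. This is a hedged illustration, not Wyeth's actual algorithm: it uses a normal-approximation 95% upper confidence bound on each arm's response rate, and the response counts are invented.

```python
# Hedged sketch of the interim futility check behind adaptive
# randomization as described above: drop any dose arm whose response
# rate, even at its upper confidence bound, cannot plausibly beat
# placebo. The bound, cutoff, and data are illustrative assumptions.
import math

def upper_bound(successes, n, z=1.96):
    """Normal-approximation upper confidence bound on a response rate."""
    p = successes / n
    return p + z * math.sqrt(p * (1 - p) / n)

def arms_to_drop(arms, placebo):
    """arms: {name: (successes, n)}; placebo: (successes, n).
    Returns the arms whose upper bound on the response rate is still
    below the placebo point estimate -- i.e., arms with no realistic
    chance of showing a benefit."""
    p_placebo = placebo[0] / placebo[1]
    return [name for name, (s, n) in arms.items()
            if upper_bound(s, n) < p_placebo]

# Interim look: three dose arms vs. placebo, 40 subjects each.
arms = {"low": (2, 40), "mid": (14, 40), "high": (18, 40)}
placebo = (12, 40)
print(arms_to_drop(arms, placebo))  # → ['low']
```

Subjects who would have been randomized to the dropped arm are then reallocated to the remaining arms, which is the efficiency gain Gallen describes.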