Filling Gaps in Evidence: A Tale of Two (Retrospective) Studies

In his latest column in the Journal of Comparative Effectiveness Research, NPC Chief Science Officer Robert W. Dubois, MD, PhD, poses critical questions on the types of studies used to inform comparative effectiveness research (CER). As evidence-based medicine (EBM) matures and becomes increasingly ingrained in health care delivery, can randomized controlled trials (RCTs) satisfy the need for evidence and, if not, what are the viable alternatives?

An examination of guidelines and assessments in the EBM landscape reveals a persistent gap in the availability of data from RCTs. In some cases, “further study needed” or “level C recommendation” serve as a caveat to almost half of the available guidelines, leaving an EBM infrastructure that is built solely on RCT-level data vulnerable to failure, Dr. Dubois says. He suggests stakeholders might look to retrospective observational studies to help address gaps in evidence and generate the insights needed for clinical decision-making.

Retrospective observational studies, which are increasingly being put to work within Rapid Learning Networks, are real-world studies that draw on electronic health records or other large, existing sources of information to assess treatment effectiveness and patient outcomes. RCTs, while precise and authoritative when well designed and conducted, take years to complete and are costly. Retrospective observational studies, on the other hand, are significantly less expensive and can be designed, executed and completed quickly. Because they rely on real-world data, these studies can provide insights on factors that RCTs cannot, such as treatment variations within special populations or the impact of treatment options that cannot be ethically randomized.

But how useful is this retrospective data for making clinical decisions? To explore whether these studies can provide credible evidence for decision-making or whether they simply yield another hypothesis for “further study,” Dr. Dubois examined two retrospective observational studies.

  • Lung disease: Previous randomized trials compared two treatments for chronic obstructive pulmonary disease (COPD): a single medication and a combination therapy. The RCTs, which were conducted in younger patients without major comorbidities and looked at short-term outcomes, found no difference in mortality rates between the two treatments. A retrospective study published in the Journal of the American Medical Association (JAMA) analyzed multiple data sources, included older, more complex patients, and examined outcomes over a longer timeframe. The study showed a significantly lower mortality rate in patients on the combination therapy and provided useful clarity on the best treatment choice for older patients with comorbidities.
  • Breast cancer: A retrospective study also published in JAMA compared breast cancer survival rates among women who had bilateral mastectomy, unilateral mastectomy and breast-conserving surgery with radiation therapy. Strong treatment preferences make randomizing patients infeasible, and no prior RCT had compared these three treatment options. Using data from the California Cancer Registry, researchers found similar survival rates between breast-conserving surgery with radiation therapy and bilateral mastectomy—and survival rates in both groups were significantly higher than those of women treated with unilateral mastectomy.

In each study, researchers analyzed factors that would have been difficult or impossible to assess with an RCT—and at much lower cost. Studies like these may be useful for clinical decision-making in cases where the time or resources required for an RCT are prohibitive. EBM, says Dr. Dubois, can be informed by more types of data than RCTs alone, but pragmatic questions will need to be navigated in assigning value to retrospective studies within the clinical decision-making environment. Can we wait eight years for an RCT to show results, or do clinicians and patients benefit more from near-term findings? Can we afford a $50 million RCT to answer each pressing clinical question within the healthcare delivery landscape, or could a series of $500,000 retrospective observational studies that show similar results provide high-confidence evidence? Does the pristine design of an RCT outweigh the ability to provide insights on specific populations like older adults, different racial and ethnic groups, or populations with medical complexity?

Stakeholders believe CER's impact will be felt on clinical decision-making in the next five years, according to a recent NPC survey, but could we see the promise of CER fulfilled more quickly by expanding perspectives on what constitutes good evidence? Dr. Dubois suggests that now is the time to create a framework for determining when non-randomized data is of sufficient quality to influence decision-making and under what circumstances randomized trials are truly needed before patients and providers can benefit from the findings.