Next week, the Institute of Medicine (IOM), in conjunction with the Patient-Centered Outcomes Research Institute (PCORI), will hold a two-day workshop on the use of observational studies in a learning health system. The workshop will focus on the complementary roles of randomized controlled trials and observational studies in informing clinical and health policy decisions.
But what are observational studies, and why do they matter? Observational studies are a component of real-world evidence, which in very basic terms asks, "How does this treatment work in the everyday life of a patient like me?" More specifically, observational studies follow participants over a period of time to examine potential associations between patients’ exposure to treatments and their health outcomes. These studies can be performed prospectively, in which data are collected about a group of patients going forward in time, or retrospectively, in which the researcher looks into the past, mining existing databases for data that have already been collected. Retrospective studies frequently draw on electronic databases containing, for example, administrative, billing, or claims data. Observational research can also harness the power of electronic health records, which contain richer clinical information that more closely resembles the data collected in a randomized controlled trial. Because observational studies examine how treatments work in "real world" environments, they allow researchers to collect data on a wide array of outcomes, settings, and types of patients.*
So what’s the fuss about observational studies? Because patients in observational studies are not randomized to treatment groups, some patient characteristics may not be evenly distributed between the groups. This can introduce bias and confounding (when a characteristic that is more common in one group of study participants than another is also related to the outcome of interest, it can distort the results).** For example, doctors may prescribe a new medication only to the sickest patients; comparing those patients' outcomes (without appropriate adjustment) with the outcomes of less ill patients receiving an alternative treatment can produce misleading results. Observational studies can also identify important associations but cannot prove cause and effect. Another potential problem is data dredging: overworking a particular data set in the hope of finding additional results, even when those results are statistical aberrations. Unfortunately, the growing demand for comparative effectiveness research (CER) and the wide availability of administrative databases may lead to poor-quality research with inaccurate findings.***
Even with those drawbacks, observational studies offer many benefits. The study design can provide a unique glimpse of how a health care intervention is used in the “real world,” an essential step in gauging the gap between efficacy (can a treatment work in a controlled setting?) and effectiveness (does the treatment work in a real-life situation?). Observational studies can also be used when it is not possible or ethical to study treatment effects in a large population. Furthermore, they can be conducted quickly and at lower cost, particularly when they involve secondary analysis of existing data sources. Observational studies have been used, for example, to inform treatment in patients not typically studied in clinical trials because of strict inclusion and exclusion criteria (i.e., the rules governing who can and cannot be included in a trial); to measure longer-term outcomes; to address meaningful outcomes such as adherence and patient preferences; and to examine situations in which usual care may differ from care in clinical trials.****
Over the next week, we’ll take a closer look at related research issues and highlight some of the ongoing challenges, along with potential solutions, for ensuring that both randomized controlled trials and observational studies can provide better evidence to patients, providers, health systems, and policy makers. These and other topics will be discussed at the IOM meeting on April 25-26 in Washington, DC.
Content adapted from Demystifying Comparative Effectiveness Research: A Case Study Learning Guide (*, *** and ****) and Making Informed Decisions: Assessing the Strengths and Weaknesses of Study Designs and Analytic Methods for Comparative Effectiveness Research (**).