Observational Studies May "Trouble Scientists," But They Are a Necessary Tool

ICYMI, the Wall Street Journal today ran an interesting and lengthy article, "Analytical Trend Troubles Scientists," which outlines the challenges with the design, conduct and analysis of observational studies. Observational studies "follow participants over a period of time to examine the potential associations between patients’ exposure to health treatment and health outcomes. These studies can be performed prospectively, observing patients in real time, or they can be retrospective analyses of existing databases."*

Yet there is certainly a need for observational studies. Although randomized controlled trials (RCTs) are viewed as the research “gold standard,” they are expensive, take a long time to design and complete, and study a carefully selected group of patients managed in a controlled environment. RCTs are meant to answer “can it work under the best of circumstances?” while observational studies help us understand “will it work for typical patients in routine care environments?” Both study types have relevance because they address different questions and situations.

And with the millions of dollars in private and public funds committed to comparative effectiveness research, it’s highly likely that we’ll see a plethora of observational studies in the coming years. So how can we ensure that observational studies minimize bias and are conducted as rigorously as RCTs?

To address these challenges, observational studies must be designed, conducted, and analyzed in a manner similar to RCTs. The study methods must be specified ahead of time, not chosen after first looking at the data. Because such standards can and should be expected, a number of research and policy organizations are collaborating on a set of rigorous principles (known as the GRACE principles) to help guide researchers in the design, conduct, and analysis of observational studies. GRACE, which is short for Good ReseArch for Comparative Effectiveness, is an "initiative to enhance the quality of observational CER and to facilitate its use for decision-making about therapeutic alternatives. The GRACE principles are endorsed by the International Society for Pharmacoepidemiology and supported by a number of professionals and organizations."**

GRACE collaborators are also developing a GRACE checklist to "provide a validated tool for the assessment of observational CER quality and usefulness for decision-making." The checklist is "based on existing literature and guidance from experts with extensive experience in the conduct and utilization of observational CER."** The GRACE principles, along with the checklist, should help to alleviate some of the concerns associated with observational studies.

Similar collaboration is underway on a toolkit that payers can use to determine the validity and applicability of an observational study for answering the research question at hand. Payers are particularly challenged by observational studies because most do not have standards in place for evaluating this research. According to a recent study sponsored by NPC, payers assess observational studies very differently, which results in widely varying coverage options for patients. A toolkit could help bring consistent standards to formulary decision-making, which is why NPC, the Academy of Managed Care Pharmacy, and the International Society for Pharmacoeconomics and Outcomes Research have undertaken this effort.***

Over time, we anticipate that the quality of the underlying data in observational studies will improve as data become clinically richer via electronic medical records and other databases. In the meantime, we cannot ignore the importance of using observational research to help us understand treatment effects in the "real world."

Sources:

* Dubois RW, Kindermann SL. Demystifying Comparative Effectiveness Research: A Case Study Learning Guide. November 2009. Page 3.

** The GRACE Principles. www.graceprinciples.org. Accessed May 3, 2012.

*** Comparative Effectiveness Research Collaborative Initiative. http://www.ispor.org/TaskForces/InterpretingORSforHCDecisionMakersTFx.asp. Accessed May 3, 2012.