Two years ago this month, the National Pharmaceutical Council sponsored a Health Affairs issue on comparative effectiveness research, or CER, and its role in the health care system.
At that time, CER was still a fairly new concept within the US health policy arena. In October of 2010, the Affordable Care Act had only been in effect for a few months, and the Board of Governors of the Patient-Centered Outcomes Research Institute, or PCORI, had just been appointed. There were clearly more questions than answers when it came to CER.
Fast forward two years to today. Some of the initial foundational questions around how PCORI would operate have been answered. Research is being funded and progress has been made around establishing methods for the conduct of research. But many questions still remain.
At the National Pharmaceutical Council, we are focused on these outstanding questions and key issues. NPC has a significant portfolio of research looking into some of the intended and unintended consequences of CER. These include issues like CER study design and conduct, how results will be communicated and disseminated, and CER's impact on innovation and on individual patients.
For example, how will individual treatment effects and individual patient response be considered in CER studies? That's a key unanswered question that we are closely tracking. It's also the topic of a conference we are hosting with the National Health Council and WellPoint on November 30 to examine this challenge. We’ve titled it “The Myth of Average: Why Individual Patient Differences Matter.”
Another challenge in CER concerns the use of real-world evidence. And by real-world evidence, I’m talking about evidence that we see after a drug is on the market; evidence that is pulled from databases; evidence that is based on observational studies, not the randomized controlled trials, or RCTs, that the FDA requires for drug approval. Most of the billions of dollars’ worth of CER that will be conducted in the near future will be based on observational, or “real-world,” studies, not RCTs.
That’s why NPC is working with other organizations on the development of high-quality standards and on tools to help payers and other decision makers evaluate and utilize observational research.
Finally, in the current health environment, there are significant differences in how various stakeholders can communicate research data. We are looking at upwards of $500 million being spent annually on new CER. That research will be conducted by a broad spectrum of stakeholders: academia, government institutions like the NIH, AHRQ, and PCORI, as well as payers and industry.
That leads us to today’s conversation. With all of that new research being generated, what are the rules of the road when it comes to communicating about that research? The pharmaceutical, medical device and diagnostics industries are highly regulated when it comes to communicating evidence to providers and patients, while other groups do not have the same limitations on what they can say and to whom. It’s this last piece that we hope to shed some light on today.
This asymmetry in the ability of evidence generators to communicate raises a significant number of questions:
- What happens when research findings are released?
- Who can respond, and how?
- Are there regulatory or policy solutions to ensure a flow of information?
- And is it possible to change the current situation in a way that encourages innovation and discovery, so that we can achieve our goal of better outcomes for patients and improved value?
After all, the purpose of conducting research and communicating this information is to help doctors and patients make appropriate decisions and arrive at better outcomes. That’s why it’s so critical to get it right.
View additional videos and resources from the Health Affairs briefing.