Comparative Effectiveness Research

Large randomised controlled trials (RCTs) and meta-analyses of RCTs are currently considered the gold standard form of evidence underpinning evidence-based practice. The reasons for this are well rehearsed: primarily, that they can determine causal relationships while reducing confounding and bias (Sibbald and Roland 1998).

 

The limitations of RCTs are also well understood. They are expensive and time-consuming, and it can be unethical to expose patients to treatments believed to be ineffective or even harmful (Sibbald and Roland 1998). In addition, RCTs are not well adapted to the complexity of health conditions and the heterogeneity of patient characteristics. They generally have strict inclusion and exclusion criteria, meaning that their results may not apply to the diverse patients, often suffering from multiple co-morbidities, who are most commonly encountered in the real world (Wallace 2015, Black 1996, Rothwell 2005, Johnston, Rootenberg et al. 2006, Sanson-Fisher, Bonevski et al. 2007).

 

It would be infeasible to conduct RCTs to address all of the potential treatments for all variations of all conditions in all of the different patient groups, with their genetic and environmental predispositions and their multiple comorbidities. The result is that there remain huge gaps in the evidence base. Often, clinicians do not have the evidence to recommend one treatment or another. Ultimately, decisions are made on the basis of limited personal experience. Effectively, these decisions represent millions of “n of 1” experiments, taking place globally, every day (Wallace 2015).

 

Currently, learning from these experiments is limited to the clinicians involved. The advent of the EHR offers the potential for this practice-based evidence to be recorded and used in a more systematic way.


A Green Button

It has been proposed that a function could be developed, within the EHR, to allow clinicians to leverage aggregate patient data for decision making at the point of care (Longhurst, Harrington et al. 2014).

 

In this system, if the clinician did not have evidence on the relative effectiveness of different treatments for a particular patient and had no guideline to follow, they would click a “Green ‘patients like mine’ Button”. The EHR would identify all patients with similar characteristics (genetic, comorbidities, age, etc.), who have previously had that particular condition. It would identify the treatments that they have received and the outcomes achieved. It would then suggest the optimal treatment for that particular patient, taking account of their preferences.  This approach has already been implemented on a small scale, by manually extracting and aggregating EHR data (Frankovich, Longhurst et al. 2011).
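
At its core, such a function is a similarity query followed by an outcome aggregation. The sketch below is purely illustrative: it assumes a hypothetical EHR extract loaded into a pandas DataFrame with columns such as condition, age, comorbidities, treatment and outcome_score, none of which come from a real EHR schema or from the Green Button proposal itself.

```python
# Minimal "patients like mine" sketch against a hypothetical EHR extract.
# Column names (condition, age, comorbidities, treatment, outcome_score)
# are assumptions for illustration; comorbidities is assumed to hold a set
# of codes per patient.
import pandas as pd

def patients_like_mine(cohort: pd.DataFrame, index_patient: dict,
                       age_tolerance: int = 5) -> pd.DataFrame:
    """Return previously treated patients who resemble the index patient."""
    return cohort[
        (cohort["condition"] == index_patient["condition"])
        & (cohort["age"].between(index_patient["age"] - age_tolerance,
                                 index_patient["age"] + age_tolerance))
        & (cohort["comorbidities"].apply(
            lambda c: set(index_patient["comorbidities"]).issubset(c)))
    ]

def rank_treatments(similar: pd.DataFrame) -> pd.DataFrame:
    """Aggregate outcomes by treatment so the options can be compared."""
    return (similar.groupby("treatment")["outcome_score"]
                   .agg(["count", "mean"])
                   .sort_values("mean", ascending=False))
```

In practice the similarity measure, the outcome measure and the handling of patient preferences would all be far more sophisticated, but the query-then-aggregate pattern is the essence of the proposal.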

 

The IBM data-driven analytics team has supported the EuResist project (www.euresist.org), which examines data relating to patients with HIV who are treated with antiretrovirals in Europe. Data is gathered into a database containing information on phenotype, genotype of the viral strain, treatment given and outcomes achieved. This information can be used, in a semi-automated way, to assess which treatment might work best for a new patient. In the case of EuResist, clinicians were around 66% effective in choosing the best treatment on the first attempt. By comparison, the system was reported to be around 78% effective (Foley and Fairmichael 2015).
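
The published EuResist models are more sophisticated than this, but the general pattern of learning from historical treatment records and then scoring candidate regimens for a new patient can be sketched as follows. The file name, feature columns and choice of a random forest classifier are assumptions made for illustration, not the actual EuResist method.

```python
# Illustrative sketch of a treatment-response prediction engine trained on
# historical (patient, regimen, response) records. All names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Assumed columns: mutations (viral genotype), baseline_viral_load,
# regimen (candidate drug combination), suppressed (outcome achieved).
history = pd.read_csv("hiv_treatment_history.csv")  # assumed extract
features = pd.get_dummies(history[["mutations", "baseline_viral_load", "regimen"]])
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(features, history["suppressed"])

def score_regimens(candidates: pd.DataFrame) -> pd.Series:
    """Estimate probability of viral suppression for each candidate regimen."""
    X = pd.get_dummies(candidates).reindex(columns=features.columns, fill_value=0)
    return pd.Series(model.predict_proba(X)[:, 1], index=candidates["regimen"])
```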

 

If these approaches could be automated and expanded to a broad range of conditions, they would be a key use case of the Learning Healthcare System (Friedman 2015). This capability is cited at the outer range of the 10-Year Vision to Achieve an Interoperable Health IT Infrastructure, published by the US Office of the National Coordinator for Health Information Technology (ONC 2014). Many observers foresee major obstacles to such a system:
• No other patient is truly like me (Friedman 2015)
• More robust outcomes measurement would be required (Bates 2015)
• The answers would not be clear-cut. Clinicians would have to interpret the results carefully to avoid getting the wrong answer (Brown 2015)
• There are ethical issues related to this use of EHR data (Longhurst, Harrington et al. 2014)


Observational Research

Observational research, including cohort, cross-sectional and case-control studies, has long been used in situations where RCTs are too expensive or unethical, or when sufficient participants cannot be recruited (Mann 2003). These studies are observational because participants are not randomised or pre-assigned to an exposure; the choice of treatment is left to patients and their physicians (Berger, Dreyer et al. 2012). Increasingly, they are being viewed as complementary to, rather than inferior to, RCTs. Under the correct conditions, they can even provide evidence of causal relationships (Greenfield and Platt 2012).

 

It is now possible to conduct observational research, using routinely collected patient data, that would not previously have been possible (Platt 2015). Currently, in the US, it is possible to examine certain outcomes with high confidence because they are captured uniformly across multiple systems; acute myocardial infarction and hip fracture requiring surgical repair are examples. There is good evidence that these events are captured, and that the data are sensitive and highly specific. It is then possible to associate these outcomes with various types of exposures or treatments (Platt 2015).
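
As a concrete illustration of the exposure-outcome association described above, the following sketch computes a crude risk ratio for a reliably captured outcome from a routine data extract. The file and column names (exposed, had_ami) are invented for the example, and a real study would go on to adjust for confounding.

```python
# Crude comparison of a reliably captured outcome (e.g. acute myocardial
# infarction) between exposed and unexposed patients. Illustrative only.
import pandas as pd

cohort = pd.read_csv("routine_data_extract.csv")  # assumed extract

def crude_risk_ratio(df: pd.DataFrame, exposure: str, outcome: str) -> float:
    """Risk of the outcome in exposed patients divided by risk in unexposed."""
    risk_exposed = df.loc[df[exposure] == 1, outcome].mean()
    risk_unexposed = df.loc[df[exposure] == 0, outcome].mean()
    return risk_exposed / risk_unexposed

print(crude_risk_ratio(cohort, exposure="exposed", outcome="had_ami"))
```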

 

In the UK, this type of research has been undertaken using secondary care HES and SUS data, sometimes augmented by additional coding at the provider (Morrow 2015). In primary care, large databases such as QRESEARCH provide similar functionality.

 

According to Dr Wallace at Optum Labs, who are already conducting research on a database containing 150 million patient records, research has reached an inflection point. He notes that in 20th century medicine, a great deal of the cost of clinical trials was associated with data collection. Observational studies are so much cheaper that hundreds can be conducted for the price of one RCT (Wallace 2015). Dr Wallace places this in historical terms:

 

“Research is changing from a hunter/gatherer mode, where huge amounts of effort is invested to associate data with rare events, to a harvest mode in which huge amounts of data are used more efficiently to give insight.” (Wallace 2015)

 

The content and quality of the underlying data is currently a limiting factor in the usefulness of comparative effectiveness research using routine data. Rigorous recording of outcomes could allow a step change in this kind of research (Dunbar-Rees 2015). For example, the ICHOM Low Back Pain Standard Set (See Outcome Measurement) would provide an effective set of outcome and case-mix indicators to study the comparative effectiveness of instrumented versus non-instrumented fusion for spondylolisthesis. This would also require the recording of important contextual information. In this example, it would be necessary to clarify what is meant by instrumented and non-instrumented (Stowell 2015).

 

There are also significant methodological concerns. With observational CER, it is possible to control for some confounding factors and not for others. Often a hybrid approach is required, where sophisticated automated analysis of thousands or millions of electronic records can be paired with a manual review of several hundred, to confirm accuracy. This technique was used successfully in a study looking at the link between rotavirus vaccine and intussusception.  This is a powerful technique that could also be extended to patient reported outcomes (Platt 2015).
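
The two ideas in the paragraph above, adjusting for measured confounders and validating automatically coded outcomes against a manual chart review sample, can be sketched as follows. The extract name, column names and sample size are assumptions for illustration; they are not drawn from the rotavirus study itself.

```python
# Sketch of (1) regression adjustment for measured confounders and
# (2) drawing a random sample of outcome-positive records for manual
# chart review. All file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

records = pd.read_csv("ehr_extract.csv")  # assumed extract

# (1) Outcome model adjusting for measured confounders.
model = smf.logit("outcome ~ exposure + age + sex + comorbidity_score",
                  data=records).fit()
print(model.params["exposure"])  # adjusted log-odds associated with exposure

# (2) Manual validation sample: a few hundred outcome-positive records
# exported for chart review to confirm coding accuracy.
review_sample = records[records["outcome"] == 1].sample(n=300, random_state=0)
review_sample.to_csv("records_for_chart_review.csv", index=False)
```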


Pragmatic Randomised Controlled Trials

In Pragmatic Randomised Controlled Trials, the design mimics routine clinical practice (Torgerson). This means relaxing exclusion criteria, not using placebos, accepting non-concordance with treatment and delivering care as it is delivered in the real world. It offers a measure of effectiveness that is generalisable (Helms 2002).

 

Participants pointed out that such studies lend themselves to being conducted within the Learning Healthcare System. For example, the EHR could be configured to randomise patients.

 

“Suppose that you are in clinic, about to start an SSRI, but you don’t know which one to go for. Why not allow the system to randomise the patient… the patient wouldn’t need to be contacted again [by the researchers] – all of the outcomes would be collected in routine data so it massively decreases the cost of doing an RCT.” 
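
A minimal sketch of that point-of-care randomisation step is shown below: when the clinician has genuine uncertainty between licensed options, the system allocates one at random and records the allocation so that outcomes can later be picked up from routine data. The function and field names are hypothetical, and a real pragmatic trial would also require consent, eligibility checks and concealed allocation through a proper trial management system.

```python
# Illustrative point-of-care randomisation within an EHR workflow.
import random
from datetime import datetime, timezone

def randomise_at_point_of_care(patient_id: str, options: list[str],
                               trial_id: str, audit_log: list) -> str:
    """Allocate one of the eligible treatment options and log the allocation."""
    allocation = random.choice(options)
    audit_log.append({
        "trial_id": trial_id,
        "patient_id": patient_id,
        "allocated": allocation,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    return allocation

# Example: two SSRIs the clinician considers equally reasonable.
log: list = []
choice = randomise_at_point_of_care("patient-123", ["sertraline", "citalopram"],
                                    trial_id="ssri-pragmatic-01", audit_log=log)
```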

 

This sort of study, which brings together research and clinical practice, would raise the sorts of ethical questions around consent that have been discussed in previous sections.


Clinical Trial Recruitment

There will still be a need for traditional RCTs in certain circumstances (Wallace 2015). Recruitment of sufficient numbers of participants is a challenge for researchers, and patients often miss out on clinical trials from which they could benefit. EHR data can be used to identify patients who are suitable for certain RCTs. The IBM Watson team have demonstrated this ability in collaboration with the Mayo Clinic (IBM 2015). In the UK, the Clinical Record Interactive Search (CRIS) system, developed by South London and Maudsley NHS Foundation Trust, has been used to deliver similar functionality (Callard, Broadbent et al. 2014).
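
The basic mechanism is screening structured EHR data against trial eligibility criteria. The sketch below invents the criteria, file and column names for illustration; real systems such as CRIS work over far richer, partly free-text records.

```python
# Illustrative eligibility screen against a structured EHR extract.
# Criteria and column names are invented for the example.
import pandas as pd

patients = pd.read_csv("ehr_extract.csv")  # assumed extract

criteria = {
    "min_age": 18,
    "max_age": 75,
    "required_diagnosis": "type 2 diabetes",
    "excluded_medication": "insulin",
}

eligible = patients[
    patients["age"].between(criteria["min_age"], criteria["max_age"])
    & (patients["diagnosis"] == criteria["required_diagnosis"])
    & ~patients["medications"].str.contains(criteria["excluded_medication"], na=False)
]
print(f"{len(eligible)} patients eligible for screening contact")
```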


Conclusion

No participants claimed that the RCT is dead, but rather that other methodologies will be required if we are to bridge the evidence gap experienced by modern medicine. Observational studies can deliver useful results quickly and at relatively low cost, and they do not put patients at risk through experimental exposure. The development of EHRs and rigorous outcomes measurement offers the potential to accelerate the use of observational research. This may require the development of a new ethics framework.


Even when RCTs are still required, Learning Health Systems can help with recruitment, randomisation and data collection.


Many of these potential developments pose major training and workforce implications that will be discussed in the Implications section of this report.


Evidence

Professor Charles Friedman Interview
Author: Dr Tom Foley, Dr Fergus Fairmichael
Background: Professor Charles Friedman is Chair of the Department of Learning Health Sciences at the University of Michigan Medical School. He is the former Deputy National Coordinator and Chief Scienti…

Dr Caleb Stowell Interview
Author: Dr Tom Foley, Dr Fergus Fairmichael
Background: Caleb Stowell is Vice President, Research and Development, at the International Consortium for Health Outcomes Measurement (ICHOM); and Senior Researcher at Harvard Business School. His role…

Dr Gerry Morrow Interview
Author: Dr Tom Foley
Background: Dr Gerry Morrow is Medical Director of Clarity Informatics, who aim to improve patient care and outcomes using data and analytics. Clarity offer a Quality Improvement Service (QIS) in…

Dr Jeff Brown Interview
Author: Dr Tom Foley, Dr Fergus Fairmichael
Background: Dr Brown is an Associate Professor in the Department of Population Medicine (DPM) at Harvard Medical School and the Harvard Pilgrim Health Care Institute. He is Associate Director and…

Dr David W Bates Interview
Author: Dr Tom Foley, Dr Fergus Fairmichael
Background: David W. Bates, MD, MSc, is Senior Vice President and Chief Innovation Officer for Brigham and Women’s Hospital. He is a practicing general internist and maintains his positions as Chief of…

Dr Rupert Dunbar-Rees Interview
Author: Dr Tom Foley, Dr Fergus Fairmichael
Background: Dr Rupert Dunbar-Rees is a GP by background, and Founder of Outcomes Based Healthcare. He trained in Medicine at Imperial College, gaining a degree in Orthopaedics from University College Lo…

Professor Richard Platt Interview
Author: Dr Tom Foley, Dr Fergus Fairmichael
Background: Professor Platt is Chair of the Harvard Medical School Department of Population Medicine at the Harvard Pilgrim Health Care Institute. He has extensive experience in developing systems…

Dr Paul Wallace Interview
Author: Dr Tom Foley, Dr Fergus Fairmichael
Background: Paul Wallace, MD, is Chief Medical Officer and Senior Vice President for Clinical Translation at Optum Labs. Before joining Optum Labs, Dr. Wallace was senior vice president and Director of…

IBM Watson Site Visit
Author: Dr Tom Foley, Dr Fergus Fairmichael
Background: Dr Eric Brown is Director of Watson Technologies at the IBM T.J. Watson Research Center, NY. Eric is currently working on the DeepQA project, advancing the state-of-the-art in a…

School of Health and Related Research (ScHARR) (University of Sheffield)
Author: Tom Foley
Dr Clare Relton, Senior Research Fellow, University of Sheffield. How do you define a Learning Health System? One that learns from the healthcare it provides…

Children and Young People’s Health Partnership (CYPHP)
Author: Tom Foley
Dr Ingrid Wolfe, Consultant in children’s public health medicine and Programme Director of Children and Young People’s Health Partnership (CYPHP)…

Cambridge University Hospitals NHS Foundation Trust (CUH)
Author: Tom Foley
Dr Afzal Chaudhry, Consultant Nephrologist, Chief Clinical Information Officer and Associate Lecturer, Cambridge University Hospitals. In 2014, CUH became the first UK healthcare provider to implement…

What role for learning health systems in quality improvement within healthcare providers?
Author: Foley, Vale
Abstract: Recent decades have seen a focus on quality in healthcare. Quality has been viewed across 6 dimensions: safe, effective, patient-centred, timely, efficient and equitable. As IT has…

Abstract: Randomized controlled trials have traditionally been the gold standard against which all other sources of clinical evidence are measured. However, the cost of conducting these trials can be pr…

Abstract: Pediatricians facing critical clinical decisions often lack data on which to draw. The authors recently put their institution’s electronic medical record to unusual use to inform a dec…

Abstract: Widespread sharing of data from electronic health records and patient-reported outcomes can strengthen the national capacity for conducting cost-effective clinical trials and allow researc…

Abstract: Interest in learning health care systems and in comparative-effectiveness research (CER) is exploding. One major question is whether informed consent should always be required for randomiz…

Abstract: Background: Electronic clinical data (ECD) will increasingly serve as an important source of information for comparative effectiveness research (CER). Although many retrospective studies have…

Abstract: Objectives: To determine whether observational studies that use an electronic medical record database can provide valid results of therapeutic effectiveness and to develop new methods to e…

Abstract: Purpose: The purpose of this study was to evaluate a statistical method, prior event rate ratio (PERR) adjustment, and an alternative, PERR-ALT, both of which have the potential to overc…

Abstract: Randomised controlled trials are the most rigorous way of determining whether a cause-effect relation exists between treatment and outcome and for assessing the cost effectiveness of a treatme…

Abstract: The view is widely held that experimental methods (randomised controlled trials) are the "gold standard" for evaluation and that observational methods (cohort and case control studies) have li…

Abstract: In making treatment decisions, doctors and patients must take into account relevant randomised controlled trials (RCTs) and systematic reviews. Relevance depends on external validity (or gener…

Abstract: Background: Few attempts have been made to estimate the public return on investment in medical research. The total costs and benefits to society of a clinical trial, the final step in test…

Abstract: Population- and systems-based interventions need evaluation, but the randomized controlled trial (RCT) research design has significant limitations when applied to their complexity. After some…

Abstract: Cohort, cross sectional, and case-control studies are collectively referred to as observational studies. Often these studies are the only practicable method of studying various problems, for e…

Abstract: Objective: In both the United States and Europe there has been an increased interest in using comparative effectiveness research of interventions to inform health policy decisions. Prospec…

Abstract: “It is the position of this Task Force that rigorous well designed and well executed Observational Studies (OS) can provide evidence of causal relationships” [1]. All flows from this carefully…

Abstract: Although the explanatory clinical therapeutic trial remains the foundation for assessing drug efficacy and is required for licensing purposes, the overall effectiveness of a treatment can be b…
