The Role of Experience in an Evidence-Based Practice


There is no moderator for this particular article.


  • Summary:
    The discussion focused primarily on:
    1) A critical evaluation of the value of evidence-based healthcare and the role it plays in patient-centered medicine;
    2) the three main principles of evidence-based healthcare;
    3) the role that clinical research should play in clinical decisions;
    4) the importance of patient values and preferences;
    5) the value of personal “hands on” experiences and pathophysiologic reasoning;
    6) balancing clinical experience and pathophysiologic rationale with the results from clinical trials;
    7) customizing clinical decisions based on the individual patient;
    8) the new GRADE framework for evaluating the quality of evidence, which moves beyond the traditional, outmoded EBM hierarchy.
    (Med Roundtable Gen Med Ed. 2012;1(1):75–84.) ©2012 FoxP2 Media, LLC
  • Compounds:
    No compounds discussed.
    No trials discussed.
  • Faculty Disclosure(s):
    The discussants have no disclosures to report.
  • Publisher Disclosure:
    This content was developed entirely by the faculty who retained editorial control and volunteered their time, expertise and energy in the spirit of education, without compensation.
  • Clinical Implications:
    • Evidence-based healthcare is a valuable tool if it is applied in the context of patient values and patient preferences
    • Physicians need to critically evaluate clinical trials based on clinical experience and physiologic rationale
    • Clinical trials, while critical to the advancement of patient care, tell us about the "average" patient, while the clinician needs to apply the data to the "individual" patient by adding clinical experience
    • The original EBM Hierarchy is outmoded and new systems are being developed, such as GRADE guidelines
  • References:

    1. Echt DS, Liebson PR, Mitchell LB, et al. Mortality and morbidity in patients receiving encainide, flecainide, or placebo. The Cardiac Arrhythmia Suppression Trial. N Engl J Med. 1991;324(12):781–788.

    2. Tonelli MR, Benditt JO, Albert RK. Clinical experimentation. Lessons from lung volume reduction surgery. Chest. 1996;110(1):230–238.

    3. Guyatt GH, Briel M, Glasziou P, Bassler D, Montori VM. Problems of stopping trials early. Br Med J. 2012;344:e3863.

    4. Guyatt GH, Oxman AD, Schunemann HJ, Tugwell P, Knottnerus A. GRADE guidelines: a new series of articles in the Journal of Clinical Epidemiology. J Clin Epidemiol. 2011;64(4):380–382.

    5. Tonelli MR, Curtis JR, Guntupalli KK, et al. An official multi-society statement: the role of clinical research results in the practice of critical care medicine. Am J Respir Crit Care Med. 2012;185(10):1117–1124.

    6. Montori VM, Jaeschke R, Schunemann HJ, et al. Users' guide to detecting misleading claims in clinical research reports. Br Med J. 2004;329(7474):1093–1096.

    7. Montori V, Ioannidis J, Jaeschke R, et al. Dealing with misleading presentations of clinical trial results. In: Guyatt G, Rennie D, Meade M, Cook D, eds. Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice. 2nd ed. New York, NY: McGraw-Hill; 2008.

    Additional Reading

    GRADE Working Group. GRADE guidelines—best practices using the GRADE framework. Available at: Accessed July 31, 2012.

    Rubenfeld GD. Why we agree on the evidence but disagree on the medicine. Respir Care. 2001;46(12):1442–1449.

    Sackett DL. Evidence-based medicine: what it is and what it isn’t. Br Med J. 1996;312:71–72.

    Tonelli MR. In defense of expert opinion. Acad Med. 1999;74:1187–1192.

    Tonelli MR. The limits of evidence-based medicine. Respir Care. 2001;46(12):1435–1440.

    Tonelli MR. Integrating clinical research into clinical decision making. Ann Ist Super Sanita. 2011;47(1):26–30.


TMR: Drs. Gordon Guyatt and Mark Tonelli were invited to participate in this Expert Roundtable Discussion to address how they see the role of expert opinion changing in the evolving framework of the now outmoded “evidence-based hierarchy” popularized over the past 20 years (See Figure). Their discussion highlights certain aspects of current Evidence-Based Medicine (EBM) conceptualizations and thinking. Dr. Gordon Guyatt is a distinguished professor in the Department of Clinical Epidemiology & Biostatistics and a member of the Department of Medicine at McMaster University in Hamilton, Ontario. He has been writing and publishing on EBM for decades and is a leader in the cutting-edge discussion on the new Grades of Recommendation, Assessment, Development, and Evaluation (GRADE) guidelines (see Additional Reading) for rating the quality of evidence.

Dr. Mark Tonelli is from the University of Washington Medical Center, Seattle, Washington, where he is a professor in the Division of Pulmonary and Critical Care Medicine, an adjunct professor of Bioethics and Humanities, and the program director of the Pulmonary and Critical Care Medicine Fellowship. He has also been writing and publishing on EBM for decades, primarily regarding the limits of EBM for clinical practice and the value of experience in clinical decision-making (See Table Below). They discuss their ideas, thoughts, and concerns in this Expert Roundtable.

FIGURE. While many "evidence hierarchy" charts such as the example at right, showing the "best evidence" at the top and the "weakest" at the bottom, were once well accepted, they are now anachronistic, and newer systems such as GRADE are gaining ground as the field evolves.

Dr. Guyatt: I think there are three principles of evidence-based health care: one is that some evidence is more credible, more believable. We have more confidence in some types of evidence than others. Second, we need systematic summaries of the highest-quality evidence available. And, thirdly, evidence by itself never tells you what to do. It’s always evidence in the context of values and preferences.

If I understand it, Dr. Tonelli, your criticisms have tended to focus on the first of those three principles.

Dr. Tonelli: I think that is correct. I would agree with you on the third principle absolutely. But let me start by saying that we need to be clear what we mean by evidence, as that term is used in a variety of ways. I think, in particular, if we’re talking about the results of clinical research as evidence, that clinical research itself is never sufficient for clinical decision making. Patient values and preferences are important, but I think other topics are also important, some of which you’ve acknowledged before. The individual circumstances of a case determine whether clinical research is applicable.

So, while I agree with your third statement, I disagree with the first, which I think supports the notion that there is a hierarchy of evidence, particularly one that would apply to clinical practice. But in fact, sometimes the randomized controlled trial is not more compelling than personal experience or pathophysiologic reasoning in clinical decision making.

Dr. Guyatt: I think it would be good to define the boundaries of our agreement and disagreement. Let me tell you three situations quickly and you can tell me whether you agree with the way the medical community has responded, because, to me, it does suggest something of a hierarchy. So, 20 years or so ago, the cerebrovascular surgeons were doing extracranial to intracranial bypass surgery for middle cerebral artery narrowing. Their personal experience was that patients did extremely well with this, much better than they used to, and they had a compelling physiologic rationale for it.

Randomized trials were subsequently performed and suggested that there was no benefit, and, in fact, some harm, associated with the usual complications of the surgery. More recently, encainide and flecainide were two drugs that virtually obliterated asymptomatic arrhythmias. The cardiologists’ experience with it was excellent. They had a very powerful physiologic rationale that even persuaded the Food and Drug Administration to license the drugs before randomized trials. The randomized trials were still performed, and encainide and flecainide were found to increase rather than decrease arrhythmic deaths.

Finally, when I was training in medicine, when you had a patient with heart failure, beta-blockers were contraindicated with again a compelling physiologic rationale and clinical experience. Thirty years later, randomized trials have suggested that they are the most powerful agent we have in terms of reducing mortality in patients with heart failure. So, in those three instances, we had clinical experience and physiologic rationale that suggested one course of action, and randomized trials that suggested another. The clinical community seems to believe that the randomized trials have trumped the physiologic rationale and clinical experience and I wonder whether you would agree.

Dr. Tonelli: I would agree with that. I think those are three examples that show up a lot in this debate, the Cardiac Arrhythmia Suppression Trial (CAST) in particular,1 that people like to use to say that mechanistic reasoning or pathophysiologic rationale is untrustworthy. I think those randomized controlled trials were well designed to answer the question of whether or not the interventions should be routine care, do they actually produce the benefits we think they do. Those are all appropriate and informative studies. In fact, I argued vehemently years ago that lung volume reduction surgery, which had both the pathophysiologic rationale and some local clinical experience in St. Louis, should be the subject of a large controlled study, because you do not want to routinely provide a service that doesn’t benefit patients.2

So, in answering those questions, I agree. The randomized controlled trials are very helpful.

I want to switch the perspective, though, because I think it’s very important to understand my concerns about an individual clinician who’s facing a decision about the care of an individual patient disregarding pathophysiologic rationale and clinical experience in deference to clinical research. I do intensive care unit (ICU) medicine, so I’m sorry that a lot of my examples are going to come from there. For instance, low tidal-volume ventilation for acute respiratory distress syndrome has been demonstrated to be beneficial in large, randomized trials, and yet the patient in front of us may not be responding in the way we would expect. There may be profound hypoxemia that I can correct with a small increase in tidal volume, and I would say I’m going to disregard, or at least put aside for the moment, the results of excellent studies that suggest I use a 6 mL/kg tidal volume in this patient, and I’m going to go up to 8, because otherwise I cannot oxygenate this patient and this patient is going to die, or because this patient is having arrhythmias that go away when I do that. So, the perspective that I’m arguing from is that of the clinician, who should still be able to use pathophysiologic rationale and personal experience in making decisions about individual patients. I am not talking about public health policy decisions, where I agree with you that such decisions are often well informed by randomized controlled trials.

Dr. Guyatt: Well, as I suspected, I think the disagreements between us are perhaps relatively minor and a matter of emphasis, but we’ll continue to see. So, first of all, to the extent that you don’t agree with the hierarchy of evidence, in certain instances at least, it seems that you do believe in a hierarchy. In the situations we introduced earlier, you believed that when you had physiologic rationale and clinical experience that was contradicted by the results of clinical trials, the clinical trials do at least in some of those circumstances trump the prior clinical experience and physiologic reasoning.

Dr. Tonelli: I agreed that in some situations, both in clinical practice and more broadly, clinical research will be more compelling than a pathophysiologic argument or personal experience, but that in no way suggests a hierarchy, because I don’t think a randomized controlled trial trumps pathophysiologic reasoning or clinical experience in all cases. In fact, there are multiple examples, as you’re well aware, of initial randomized controlled trials that seemed to suggest an intervention was beneficial despite pathophysiologic or experiential concerns, and where, lo and behold, the intervention turned out not to be beneficial in the long term. I think activated protein C in patients with sepsis is a classic example of that. So, while there are examples where pathophysiologic reasoning has not won out over randomized controlled trials, there are also times when randomized controlled trials are subsequently demonstrated, through other empiric research, to have been misleading, and the people who voiced concerns based on pathophysiology and experience were raising appropriate concerns.

Dr. Guyatt: Well, two things with respect to the example. We’re just about to have a paper published in the British Medical Journal3 suggesting that the reason for concern over activated protein C should have been that the trials were stopped early for benefit. And there may have been, and to my understanding there were, many people within the intensive care unit community, and tell me if you disagree with this, who found the physiologic rationale behind the use of activated protein C very compelling; a number of prior agents had been used, and there was a lot of disappointment with the ultimate trial that showed it was of no benefit.

Dr. Tonelli: I wouldn’t say that a lot of people found it compelling. I think that the physiologic rationale behind it was, for many of us, minimally supportive, and in fact, as you point out, I think a lot of us looked at that first trial, looking at all of the previous trials that have been done, with a prior probability that single interventions for sepsis were highly unlikely to be beneficial. I do think that clinicians' background knowledge plays a big role in how they interpret findings, particularly individual pieces of clinical research.

Dr. Guyatt: There was at least disagreement about the physiologic rationale. Certainly the company that spent a lot of money developing the drug believed there was an underlying physiologic rationale, but at any rate, it seems to me that maybe we have a semantic disagreement. First of all, I’d agree that there are many reasons not to trust randomized trial findings, especially early trials of an intervention. A lot of my current writing is about reasons not to trust randomized trials, and the GRADE framework4 that I’ve helped develop identifies categories of problems including imprecision, inconsistency, indirectness in terms of applicability to the population, which I think is something that you would emphasize, along with publication bias as well as risk of bias. We’ve written, as I alluded to earlier, about the big problem of stopping trials early and about early results that are too good to be true.

So, for sure, there are lots of reasons to be skeptical about the results of randomized trials, but it seems to me that we perhaps have a semantic disagreement about what we mean by a hierarchy. There’s at least a collection of circumstances in which when they come head-to-head, it seems that you agree that the results of randomized trials would trump prior physiologic reasoning and clinical experience and you seem to make the case that since it doesn’t always do that, that you’re unready to call it a hierarchy. It seems to me that since it very often does that when put head-to-head, I’d be ready to call it a hierarchy. So, perhaps it is a subtle semantic distinction we have here.

Dr. Tonelli: I think maybe a little beyond that. There are a couple of other reasons why I think the hierarchy doesn’t make sense. One is that when we talk about clinical research, pathophysiologic rationale, and personal experience, those are three different types of medical knowledge, not variations of the same thing. So, they can’t really be placed in a hierarchy. I do think there are some times when the randomized controlled trial clearly doesn’t trump my pathophysiologic reasoning. For instance, a study of homeopathy that suggests it’s beneficial, or a study of retroactive intercessory prayer that suggests it’s beneficial, is just not going to be compelling. Those studies are not going to overcome my pathophysiologic understanding of how illness works.

Since each of the three types of medical knowledge is different in kind, they don’t belong on a hierarchy. What clinicians are left with in caring for individual patients is trying to consider each of those types of knowledge and weighing them. Sometimes the clinical research is very compelling and other times it is not. Frankly, quite often, as I think both of us would agree, these things line up nicely. Our personal experience, our pathophysiologic understanding, and the clinical research, all line up and it’s very compelling, making decision making easy.

It’s those times when they don’t line up that I have a concern with a hierarchy. The idea that a poorly done observational trial should trump my personal experience every time is not, I think, a reasonable argument.