From the heights of analytic philosophy, the epistemic battles in the house of medicine must seem quaint. They are not. They are real and highly consequential, pitting different factions within medicine against each other, asserting the primacy of their method or approach. Medicine is riddled with different ways of knowing and evidentiary hierarchies about data and discovery, a proverbial Babel of approaches, which has implications for clinical care, research, health policy and medical education.
In her book, Miriam Solomon provides an account of this epistemic cacophony, attempting to explain how theories of knowledge inform medical practice, research and health policy. Utilizing methods from the philosophy of science and history of medicine, Solomon traces how medical practice has been transformed over the past several decades as professional expertise and experience have been replaced by more systematic sources of knowledge. The arc of her analysis traces the rise of the consensus panel, the emergence of evidence-based medicine, the move to translational medicine and the corrections of a medical humanism under the aegis of what has been called narrative medicine.
This epistemic journey is a fascinating story, and Solomon tells it admirably as a history of ideas, all the while demonstrating the paradox that even as medicine sought to become more objective, and in large part did, the subjectivity of the proponents of one school or another informed the political context in which these debates took place. Who ever said that science was neutral?
Solomon's story starts with the emergence of the consensus panel, an initiative begun by the National Institutes of Health (NIH) Consensus Development Conference Program in 1977. This program, which lasted until 2003, sought to bring coherence to medical practice by standardizing practice through august expert panels, whose deliberations would help disseminate new knowledge, just emerging in the wake of the tremendous growth in biomedical research after the Second World War with the establishment of the NIH. Despite this spectacular progress, Congress expressed concern that biomedical advances, supported by federal funds, were not reaching the clinic and patients in need.
Instead of a more scientific approach, it was business as usual: medicine continued to be practiced by physicians appealing to anecdotal experience and their knowledge of anatomy and physiology. While this method was not without its virtues, it could lead to rather idiosyncratic and outdated practice. More standardized approaches were needed, especially when new discoveries were having difficulty gaining traction in the clinic.
To remedy this, enter the consensus panel at the NIH in 1977. It made good sense: have the experts decide on the best approaches and have their judgment direct and inform the care provided by less skilled or informed doctors. The standing of these experts would have authoritative appeal and help catalyze dissemination of new knowledge, improving care. These bodies would resemble Arthur Kantrowitz's "science court", adapted here for government use. The distinction: the NIH panels would seek consensus, where Kantrowitz's science courts sought unbiased judgments.
The consensus model had an odd mix of influences. Beyond the science court, it drew inspiration from: courts of law with testimony, judges and majority and minority opinions; the peer review process with rigorous evidentiary standards; neutral scientific meetings; democratic town meetings; and expert peer review panels. In their formation these panels were organized to achieve a balance of powers akin to the three branches of government: meeting planners, speakers and panelists each had a role to play in the deliberative process. The model also borrowed from collective bargaining and technology assessment. Given its heterogeneity, and over the course of the program's evolution, it is hard to say whether there was a paradigmatic consensus process or many variations on a broad theme.
The purview of consensus panels was limited to making objective judgments without the confounding of any messy ethical or political considerations. "Technical" consensus panels would address objective questions, as distinguished from "interface" panels, which would address broader societal issues. And ironically, despite their name, the NIH Consensus Development Conference panels were not independent of the research community. They neither discovered new truths, which cannot be the subject of negotiation or consensus (as can questions of public policy), nor persuaded the scientific community over areas of disagreement. How could that be achieved over a three-day meeting, especially when uncertain results needed additional data to address unknowns? Even august panels cannot determine scientific truth if the evidence is not there.
Instead, as Solomon observes, the consensus panel was best positioned to help catalyze the dissemination of what was known, or was thought to be known, and to give its endorsement to agreed upon practices that had yet to make their way into practice. It was less about reaching a consensus than getting the word out as a trustworthy source of information for a broader clinical and public audience. Instead of creating new knowledge, consensus panels through "epistemic rituals" (61) "conferred a stamp of epistemic approval on results of research and translated the results" (59) already agreed upon.
But even when playing this more limited role, something was amiss. The consensus process was limited by biases such as groupthink, testimonial injustice and the risk of prematurely settling open scientific questions. And then there was the premise of the authority of the panelists: even the experts were not so expert, or at least not equally so. They were not all equally informed or drawing upon the same data sources. In the aggregate, these deficits made their rational deliberations impeachable.
As a remedy, by the late 1990s consensus panel staff began to prepare systematic bibliographic reports for the panelists so they would have a common evidentiary base. It was a strategy reminiscent of the late Daniel Patrick Moynihan's oft-cited quip that each of us is entitled to our own opinions, but not to our own facts. And it raised the question: once the experts had the facts, what was left to decide? Wouldn't the facts themselves be dispositive?
The important question then hinged not on what experts thought, but rather on how one assessed the quality of information and determined, at a more fundamental level, what counted as a fact. These were questions that the nascent field of evidence-based medicine (EBM) was positioned to answer. Birthed at McMaster University by Gordon Guyatt and colleagues in the early 1990s, and having roots in clinical epidemiology and the work of pioneers like Archibald Cochrane and Iain Chalmers, EBM became a powerful international epistemic movement.
EBM relies upon data from randomized clinical trials (RCTs), hailed as the methodology least likely to produce biased results. With proper blinding, the RCT -- it was argued -- could distinguish a medication's effect from a placebo effect and overcome selection, salience, availability and confirmation biases.
EBM had a transformative effect on how medicine was practiced. Consider a simple example from my own experience. As a medical house officer, I was taught that we could convert atrial fibrillation, an aberrant heart rhythm, to a normal one with the use of a drug called digitalis. We used it regularly and sometimes it worked. When it did, it was rather heroic: we were young doctors prompting a cure (!) with our ministrations. We ascribed causality to our actions and the efficacy of digitalis.
What we did not appreciate, but what EBM revealed, was that some episodes of atrial fibrillation reverted to normal on their own, with or without digitalis. What we had observed was coincidental, not causal: a random event. EBM revealed that digitalis was no better than placebo. So much for anecdotal experience or physiologic reasoning about the effects of digitalis on the heart, which hypothesized that it would slow conduction across the upper chambers. No speculation mattered when the data did not sustain our physiological ruminations.
With legions of examples and growing international databases like the famed Cochrane Collaboration, EBM created an evidentiary hierarchy ranking the quality and reliability of medical information. At the top of the list were blinded randomized clinical trials, followed by prospective and retrospective observational trials, cohort studies, case control studies, case series, case reports and anecdotal evidence. Near the bottom of the list were expert opinion (the death knell of the consensus panel) and pathophysiological reasoning of the sort that made medical trainees of my generation believe that digitalis could convert atrial fibrillation.
EBM, as empiric medicine, has helped to standardize and protocolize medical care, promoting patient safety and better outcomes. It has transformed how doctors think, or at least should think, and deeply informed the culture of medical education. Medical residents now talk about the latest clinical trials when considering diagnostic strategies or emerging treatments. It has been a boon to modern medicine, but it has not been, as Solomon notes, as methodologically sound as its proponents asserted.
In a stunning chapter entitled "The Fallibility of Evidence-Based Medicine", Solomon argues that EBM is more fallible than generally acknowledged. She argues that RCTs are not without bias and do not always provide reliable data because of differences in trial design, study populations, publication bias, time to publication, funding mechanisms and other variables. In addition, she challenges the implied argument that a hierarchy of absence of biases, with the strongest EBM methodology on top, can be equated with a hierarchy of reliable evidence. Bias carries a potential for error, not necessarily the actuality of error, with differing types of bias carrying differing magnitudes of error.
Solomon's cogent appreciation of EBM's limitations and fallibility is especially necessary for the realm of health care policy. She trenchantly observes, "evidence-based medicine plays a hegemonic role that may not always be justified." (133) This line was worth the price of the volume!
Despite its absolute dominance of health policy, EBM has had its critics among those in clinical practice who pejoratively describe it as "cookbook medicine," depriving them of the opportunity to use their expertise, experience and physiological reasoning. Many practitioners worry that applying population-based data to actual patients does not always accommodate the needs of individual patients. For example, if there were an evidence-based rationale to treat condition x with intervention y, but the patient had a confounding condition z, which might introduce a complication or even a contraindication into the equation, how is a practitioner to decide? Besides, the clinical reasoning went, no two patients are alike.
From my perspective as a sometime clinical investigator, EBM's greatest liability is its discounting of physiological and scientific reasoning. This has always been important at the frontiers of new knowledge. There is a need for hypothesis generation when the evidence ends. Along those lines, EBM's dichotomization between discovery and evaluation is a false one. As Solomon notes, "we do not have the option to use evidence-based medicine everywhere and thereby get more reliable and less fallible science. Evidence-based medicine cannot get going unless a therapy is proposed." (124) Basic science, animal research and Phase I and Phase II safety and efficacy trials all precede Phase III trials when EBM evidence can first be generated. Moreover, a mechanistic approach becomes even more critical when a trial fails and the clue to discovering the source of failure often resides in understanding basic scientific mechanisms. This can be the predicate for new hypothesis generation. These limitations make EBM an incomplete epistemology and one dependent upon the basic sciences, which it so derisively discounts.
And here is a central point that Solomon makes throughout the volume: there must be a plurality of methods to use and make medical knowledge. Here the distinction is between implementation and discovery. EBM is better positioned to confirm or discount treatments that have made it to the clinical arena. It is ill-suited to the development of new ideas, which by their nature are oppositional to prevailing practices. To break out of the proverbial Kuhnian paradigm is to oppose current norms, to see their flaws, or possibilities. EBM is not designed to prompt this critical leap from basic science to clinical possibility.
This gap in medicine's epistemology is addressed by the emergence of translational medicine, an approach that seeks to move ideas from bench to bedside and translate basic science hypotheses into potential clinical practices. It works bidirectionally in the space between basic science and discerning clinical observation to generate discovery and next generation science.
Translational medicine gained political traction when it was adopted as part of the 2004 NIH Roadmap for Medical Research establishing Clinical Translational Science Centers at the nation's leading academic medical centers. The goal was to help overcome what has been aptly described as "the valley of death" (160) between proof of principle Phase I studies and formal clinical trials.
If translational medicine was science's reaction to the over-reach of EBM, the reaction of the humanities came, in Solomon's view, through the emergence of what has been described as narrative medicine. This approach prizes the stories of patients and those who care for them, using literary methods of analysis. Through thick descriptions, individual patients emerge out of the fog of guidelines as diseases are understood as illnesses that have a bearing on people who are sick, suffering or in recovery. It is the humanist's response to the quantitative or scientific impersonality of both EBM and translational medicine.
And beyond the poetry, narratives have a clinical utility, helping practitioners take a proper medical history, rich in detail and sound in chronology or, to view it narratively, plot. This exercise is the predicate to diagnostic reasoning, which, if incomplete, could mean the misapplication of a treatment found sound by EBM. (Too bad it was the right treatment for the wrong patient.)
Sadly, this relationship between the patient's individual story and statistical reasoning still eludes many EBM proponents, despite David Sackett's wise admonition linking population data to the care of individual patients. As early as 1996, he maintained that
Evidence-based medicine is the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence-based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research. (144)
It is curious that Solomon would choose narrative medicine, a rather fringe academic element, as the representative of humanist thinking in medicine. I think this choice weakens the volume and is paradoxical because the very analytical arguments Solomon uses to deconstruct medical knowledge would have been my choice as the fourth element in her epistemology. While her critique may seem orthogonal to making medical knowledge, it is, as this volume forcefully shows, central to understanding and improving this complex, yet worthy, endeavor. As such, her critique, more properly grounded in the history or philosophy of science, or in the broader realm of bioethics, would have been a far better humanistic counterweight to other epistemic approaches. But this is a quibble. Making Medical Knowledge is a valuable contribution that carefully untangles important epistemic questions in medicine.