This is a well-written, fascinating book that will enlighten anyone interested in the Philosophy of Medicine or the Philosophy of Science. Few philosophers -- or indeed academics -- write as clearly as Stegenga, in a way that is both rigorous and understandable to the non-specialist. Stegenga's book falls within a tradition that includes Illich's Medical Nemesis, John Ioannidis' 'Why Most Published Research Findings Are False', and the BMJ's 'Too Much Medicine' campaign. His overall thesis rests on his Bayesian 'Master Argument', namely that the prior probability of a treatment being effective is very low. To support this, he offers a sustained critique of medical evidence.
The book begins by clarifying some concepts, including the disease concept, effectiveness, and 'magic bullets' (Part I). Then, in Part II, he offers a critique of current methodologies used to evaluate medical treatments. Here he is critical of evidence hierarchies and meta-analyses, favouring instead more complex quality assessment tools and more pluralistic systems for evaluating the strength of associations that include, for example, evidence from mechanisms. All of Stegenga's arguments come together in Part III, where he reviews numerous types of bias that make medical evidence 'malleable'. Here he shows that manipulating evidence (often by introducing 'hidden biases' that can be difficult to detect, Part V) and a 'hollow' hunt for adverse events (Chapter 9) lead to overestimating treatment benefits and underestimating their harms. All this points to his nihilistic conclusion that we should have low confidence in treatment effects. Instead of opting for treatment, he suggests we opt for 'la médecine douce' (gentle medicine), which he defines as a 'form of therapeutic conservatism'.
I think Stegenga's Master Argument is cogent and that adopting 'la médecine douce' would improve our health and save us a great deal of money. With that in mind, the following comments on the book are to be interpreted as relatively minor -- yet I hope interesting -- points. They should not be adduced as evidence against my view that the book is an excellent 'must read' for philosophers of medicine and philosophers of science.
1. Mistaken claim that 'meta-analyses' are at the top of the evidence hierarchy
Stegenga insists in numerous places that meta-analyses lie at the top of evidence hierarchies. For example, he writes: 'it is in fact meta-analysis (or systematic reviews that typically include meta-analyses) that is at the top of the most prominent evidence hierarchies. . . In what follows I criticize this assumed status of meta-analysis' (p. 85). Now it is true that some people confuse the terms 'meta-analysis' and 'systematic review'. However, it is the job of the philosopher to disambiguate the two distinct yet confused terms. This disambiguation is done very well in a recent James Lind Bulletin paper. 'Meta-analysis' is a term introduced by Gene Glass in 1976 and means statistical pooling of results. It does not require that the studies included within the meta-analysis were systematically identified. All one needs for a meta-analysis is to statistically combine the results of more than one study -- even if the studies were cherry-picked or indeed chosen at random. Nobody would claim that such a study -- a meta-analysis -- provides 'best' evidence.
Also, meta-analyses are not considered to offer the best evidence by any system for ranking evidence I'm aware of, or by the Cochrane Collaboration. For example, Stegenga cites the Oxford Centre for Evidence-Based Medicine (OCEBM) 'Levels of Evidence' as a typical 'hierarchy'. Yet the Oxford 'Levels' do not even mention the term 'meta-analysis'. Rather, systematic reviews are considered to offer 'best' evidence. Sometimes pooling the results of a systematic review in a meta-analysis can raise the quality of evidence, and at other times it can lower the quality. Indeed, section 9.5.3 of the Cochrane Handbook states: 'A systematic review need not contain any meta-analyses . . . If there is considerable variation in results, and particularly if there is inconsistency in the direction of effect, it may be misleading to quote an average value for the intervention effect.' Hence it is unsurprising that many of the systematic reviews within the Cochrane Library do not include a meta-analysis.
To remedy this problem, the reader can simply replace the term 'meta-analysis' with 'systematic review' and most of Stegenga's arguments about the malleability of methods (other than those specifically about statistical pooling) still hold.
2. Focus on outdated 'hierarchies' of evidence leads to a straw man
In Chapter 5, Stegenga argues that hierarchies of evidence are problematic. To support his view, he takes an outdated version of the OCEBM Levels of Evidence as a starting point. This seems unfair, since the term 'hierarchy' was explicitly dropped in the most recent 2011 OCEBM Levels of Evidence document. Moreover, the most widely used current system for grading evidence is the Grading of Recommendations, Assessment, Development and Evaluations (GRADE) system. GRADE is used by the World Health Organization and the Cochrane Collaboration, so it would have been a more reasonable starting point, one that would have averted my worry that his argument is a straw man. Stegenga might either have acknowledged that his critique is historical (about previous 'hierarchies'), or (this would be more relevant to current medical practice) have begun with the GRADE system. Paradoxically, starting with GRADE would have supported Stegenga's proposal for 'la médecine douce'. This is because evidence is rarely rated as high quality according to GRADE. Because of this, using GRADE invites skepticism about the evidence from most systematic reviews, lending support (albeit based on very different assumptions) to Stegenga's Master Argument.
3. Do (systematic reviews of) randomized trials deserve epistemic privilege on Stegenga's view?
Stegenga's position on meta-analysis, and by implication evidence-based medicine, is ambiguous. Let me explain this by appeal to an example. A large body of observational data and mechanistic evidence suggested that neuraminidase inhibitors were very effective for treating influenza. Based on this evidence, the UK government (and others) stockpiled billions of dollars' worth of the stuff. Meanwhile, evidence-based medicine researchers in the UK, Italy, and Australia produced a systematic review and meta-analysis showing that neuraminidase inhibitors barely outperformed placebo and had serious adverse events. I was part of the review team, and the work involved fighting to obtain data from the pharmaceutical companies who did the trials, then trawling through 160,000 pages of data that industry dumped on us. Most people without financial conflicts of interest accept the result from our less biased systematic review and meta-analysis.
On page 63 Stegenga himself cites the similar example of rosiglitazone, whose effectiveness was supported by observational and mechanistic evidence (and even randomized trials), yet was exposed by a systematic review as having serious harms. This seems to imply that Stegenga himself agrees that systematic reviews offer a good way to resolve controversy, and hence that the systematic review methodology, when properly used, does in fact provide good evidence. Yet Stegenga's view, which is anti-meta-analysis and in favor of more types of evidence (like evidence of mechanisms), makes his position here difficult to maintain. On the one hand, he seems to agree that in difficult cases, independent systematic reviews provide the right answer, yet on the other hand he claims that they are hopelessly flawed (so badly flawed that he states in Chapter 12 that they can't be fixed), and promotes a more pluralistic account of evidence.
He might, of course, avoid the difficulty by pointing out that conflicts of interest were the deciding factor. This is because, in the examples above, independent researchers conducted the systematic reviews, while the other evidence was produced largely by researchers with conflicts of interest. However, if the properties of the underlying method didn't matter, then we should be able to find an example of the opposite arising: that is, an example where a systematic review of randomized trials suggested that a treatment was effective, but a combination of mechanistic and observational data suggested it was not. I am not aware of any such examples, and they would be useful for clarifying this epistemic point about the value of different methodologies.
Beyond ambiguity, this points to a problem with Stegenga's positive proposal for evidence -- or rather the lack thereof.
4. Surely it is possible to have good evidence?
What is missing from Stegenga's account is a positive proposal explaining what it would take to establish that a treatment actually worked (well). He does not offer an explicit alternative to current methodology, but only hints at one: he recommends the more pluralistic 'Hill Guidelines' (Chapter 6), and he implies that we need a better 'theoretical basis' for treatments. Yet, as pointed out above, even he appeals to well-conducted systematic reviews in areas of controversy. Hence one might say that current methodologies are not inherently flawed, but (like most things) can be abused and corrupted. The solution, on this view, would be to employ current methods more sensibly. In the conclusion, however, Stegenga rejects the strategy of using current methods sensibly. Yet his own appeal to systematic reviews in areas of controversy calls into question his inference from 'there are problems with meta-analyses' to 'we can't fix them'. This is related to a point Bennett Holman makes elsewhere.
5. A carpenter blaming his tools?
There is a deeper reason to be skeptical of Stegenga's rejection of current methodology, albeit one that ironically also supports his conclusion. It could be that the methods are good enough, but that recent medical treatments simply don't work very well, making it next to impossible for any method to detect their effects. Stegenga notes in the introductory and concluding chapters that newer treatments like statins are not supported by good evidence. However, he does not consider the implication of this for his arguments, adopting an epistemological rather than an ontological perspective. For example, the methods used to detect statin effects might be quite good, but the statins themselves just might not be effective. In the context of complex human health, where variance and confounding cannot be avoided, if the intervention under test simply doesn't work very well, then it will be next to impossible to consistently detect its effects.
Stegenga's methodological approach remains useful if the problem is with the newer treatments rather than the methods. This is because, as he shows, the methods are tweaked to exaggerate the small benefits and hide the harms of newer treatments. Still, even the corrupted methods Stegenga critiques only show that new drugs like statins have tiny absolute benefits. And if these corrupted methods still fail to show any clinically meaningful benefit of these enormously profitable drugs, this seems to suggest that the methods are quite good (otherwise they would have shown that the drugs have larger, more meaningful effects).
And the epistemological and ontological perspectives are not mutually exclusive: it could be that both (a) newer treatments aren't that effective, and (b) the methods don't suffice. However, if the main problem is the ontological failure of new treatments to be effective, then the conclusions about failed methodologies are, like the current hunt for harms of medical interventions, hollow. The fact that Stegenga himself appeals to a systematic review to confirm a controversial case (see above) lends credence to the view that current methodology may not be as ill-fated as he would have us believe; the methods may just need something more effective to detect.
* * * * * * * * * *
The general conclusion -- that we should opt for less invasive medicine -- is a pill I believe we should all swallow, and one that Stegenga supports very well. I anticipate a philosophical consequence of the book that I hope Stegenga spells out in his future work. It is possible that, as Feyerabend did with Kuhn, one might infer from reading the book that nothing works. Stegenga explicitly resists such a position, but his philosophical arguments make it difficult for him to do so. On page 5 he states: 'There is no place I would rather be after a serious accident than an intensive care unit. For a headache, aspirin; for many infections, antibiotics; for some diabetics, insulin -- there are a handful of truly amazing medical interventions'. Yet given Stegenga's position on the current state of evidence-based methodology, and his lack of a positive proposal, it is unclear how he can justify even this modest claim that some things work. To wit, a Cochrane Review failed to find high-quality evidence that aspirin was a good cure for tension headaches. If there is a lack of good evidence for one of Stegenga's staple treatments, it is legitimate to ask how we can know anything about medical interventions.
Medical Nihilism is an entertaining must-read for anyone interested in a critique of current medical methodologies, and anyone interested in the Philosophy of Medicine.
5. Clarke M: 'History of evidence synthesis to assess treatment effects: personal reflections on something that is very much alive'. JLL Bulletin: Commentaries on the history of treatment evaluation 2015.
8. Jefferson T, Jones MA, Doshi P, Del Mar CB, Hama R, Thompson MJ, Spencer EA, Onakpoya I, Mahtani KR, Nunan D et al: 'Neuraminidase inhibitors for preventing and treating influenza in healthy adults and children'. Cochrane Database Syst Rev 2014(4):CD008965.