This is an aptly titled volume of new essays in philosophy of science on the theme of scientific collaboration. Many of the papers provide formal models, some concern the details of science as it is actually practiced, and some do both.
Despite the formal nature of some of the papers, I found them all to be readable. A few have appendices which elaborate on the mathematics or prove lemmas, and which I confess to skipping over. I am not an expert in the specific methods or models, but I was able to follow the papers.
A general worry about formal methods is that they can fail to connect with science as it is realized in actual communities. As the editors note in their introduction, formal "models exhibit possible mechanisms" (p. xvi). This means, as Kevin Zollman comments about his models, "the results are suggestive rather than definitive" (p. 66). Other authors are similarly cognizant of the worry. Formal models alone show how scientific collaboration could operate, given the assumptions made in constructing the models, but not necessarily how it does operate. Some authors accept this and settle for offering proofs of possibility, while others situate their models with respect to empirical research.
The converse worry about empirical methods is that they can descend so far into the details that it becomes impossible to glean any philosophical lessons. However, the more empirically informed articles in this volume make use of facts about how actual science is organized to raise philosophical issues and motivate general concerns. The examples are carefully chosen to address broader issues.
Altogether, the volume exemplifies the kind of careful work which manages to simultaneously be rigorous and philosophically interesting. I will briefly discuss each of the contributions, taking them somewhat out of order and engaging with some specifics along the way.
Michael Strevens offers what he bills as a Hobbesian defense of the Mertonian norm of communism. Sociologist Robert K. Merton famously claimed that science is characterized by "communism" in the sense that knowledge is owned collectively. Scientists get credit for providing knowledge, but they have to share. Strevens' defense of sharing is Hobbesian in the sense that it treats scientists as self-interested. From an epistemic state of nature in which everyone keeps everything secret, each individual is well-served to enter into a pact in which knowledge is shared.
Staffan Angere and Erik J. Olsson model a community of agents who can do their own individual inquiry but who can also share their beliefs with one another. If communication is too frequent, then incorrect views can come to dominate by being repeated loudly and often. As they write, "inquirers can spam the network" (p. 51). This dismal result can be averted if scientists are not allowed to signal unless they have some new reason for the claim they are sharing -- in the simulation, this means agents signal only when they have done some new inquiry of their own.
Angere and Olsson suggest that, as a consequence of their model, "scholars should, in the interest of science as an institution, be increasingly self-critical when deciding whether to publish or not" (p. 56). This is reflected in their provocative title, "Publish Late, Publish Rarely!" I worry that this is overreaching. Actual scholars can offer reasons and context, unlike signaling agents in the model who can only announce claims. For scholars who publish often but are clear about when they are applying results presented in earlier papers, the audience won't take each publication as a separate reason to believe the results. So the strong conclusion of the title does not seem to follow. Nevertheless, the model reveals the danger in a mendacious scholar who republishes results with the pretense that repetition is novelty. Such cases of self-plagiarism, the model suggests, can undermine the epistemic community.
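The mechanism at issue can be made vivid with a toy calculation. This is my own sketch, not Angere and Olsson's model: a Bayesian agent who mistakenly treats each repetition of a claim as an independent piece of evidence will be driven toward near-certainty by sheer repetition, even though only one act of inquiry stands behind the signals. The likelihood ratio of 2.0 is an arbitrary illustrative value.

```python
# Toy sketch (not Angere & Olsson's actual model): repeated identical
# signals, each wrongly counted as independent evidence, inflate credence.

def bayes_update(prior, likelihood_ratio):
    """Update a probability by a likelihood ratio via the odds form of Bayes' rule."""
    odds = prior / (1 - prior)
    odds *= likelihood_ratio
    return odds / (1 + odds)

p = 0.5                              # initial credence in the claim
for _ in range(5):                   # the same claim republished five times
    p = bayes_update(p, 2.0)         # each repeat counted as fresh evidence
print(p)                             # credence rises sharply on no new inquiry
```

One run of inquiry, signaled five times, moves the agent from indifference to strong belief; blocking signals that are not backed by new inquiry blocks exactly this inflation.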
Kevin Zollman models a community of agents who can pursue inquiry on their own or who can collaborate by sharing ideas. In a community with many connections, one agent can profit from the ideas which a collaborator has gotten from a third party. Since each collaboration comes at a cost, it is best for each agent to have a small number of well-networked collaborators. Zollman explores what happens when the parameters take on different values: when the cost of collaboration and the size of the community vary. In the model, reducing the cost of collaboration makes it less likely that the community will arrive at the optimal arrangement but tends to improve the outcome for most scientists. Even though the outcome is less likely to be optimal, the suboptimal 'good enough' outcome is better. Similarly, a larger community is less likely to achieve an optimal outcome but will still probably do better than a smaller one.
Justin Bruner and Cailin O'Connor model how collaboration might develop in the presence of inequality. Their models are in some ways like Zollman's, except that they allow for systematic differences between agents. Bruner and O'Connor both review earlier work and discuss new models. The earlier work treats the community as comprised of a majority and a minority. Even without any disadvantage for minorities being built into the model, social dynamics can lead to minorities receiving less from collaborations. The new work addresses situations in which different agents stand to pay different costs or reap different rewards from collaboration. Bruner and O'Connor have in mind the difference between a tenured professor and a graduate student or postdoc, and they make an attempt at the end to connect their models with science as it is actually practiced. They suggest, on the basis of their models, that "norms which disadvantage the vulnerable are likely to naturally emerge in the absence of formal or explicit rules regarding how credit should be allocated" (p. 154).
Ryan Muldoon reflects on his earlier work (with Michael Weisberg) on the division of cognitive labor. Although he does not offer a new formal model, he considers how models might be modified to include collaboration. Whereas the earlier models presumed "that scientists have more-or-less fixed questions and can easily acquire new skills", Muldoon argues that collaboration often arises precisely because "scientists have a fixed set of skills and search for problems that their skills enable them to solve" (p. 79). With this in mind, he distinguishes cases in which scientists already working in a field cooperate (because each has skills that the other lacks) and cases in which scientists leave one field and enter another. As an example of the latter type, consider physicists who have applied their skills in modeling to problems in neuroscience (p. 90). Within-discipline collaboration and cross-discipline "colonization" are responses to the same pressures, but they are different phenomena. Models will ultimately need to reckon with both.
Two of the contributions are about authorship. Bryce Huebner, Rebecca Kukla, and Eric Winsberg argue that group authorship is impossible in radically collaborative science. Their argument turns on the details of some specific examples: climate science and large-scale, multi-site biomedical trials. They argue that collaborators in these cases necessarily rely on local expert knowledge and context-bound judgment. Moreover, for reasons familiar from the literature on science and values, the judgments of individual collaborators partly reflect assessments of the costs or benefits of drawing particular inferences or reporting particular results. Inevitably, some of these choices and judgments are tacit. Although Huebner et al. suggest that this threatens "our ability to . . . decide when to trust the results that are reported in radically collaborative publications" (p. 96), I take it they are not recommending scepticism. Instead, the worry is that the loss of authorship undercuts our ability to respond to problem cases. They write, "If the represented results are challenged, there may be no single justificatory story to be told . . . [and] . . . there is no reason to believe that the group collectively can be held accountable for the finished product" (p. 107).
In his contribution, K. Brad Wray provides a different argument for a similar conclusion. He considers the authorship practices in various collaborations and disciplines, as well as the review and publication of findings. Just as no individual is responsible for the product of large-scale collaboration, individual referees may be in no position to fairly assess it.
Although Huebner et al. and Wray are focused on collaboration in science, concerns about the eclipse of authorship also arise for on-line sources like Wikipedia. As someone who reads philosophy reviews on-line, you probably consult Wikipedia on a regular basis. Search for a term, and the Wikipedia article is likely to be among the first results. Yet the article might have been edited by anyone, and there is no single justificatory story to be told. The standard defense of Wikipedia is that, given time, someone in the community of users will correct errors. Similar things are said about the scientific community. Yet the claim of these papers is that the conditions of large-scale collaboration in science stymie such self-correction. Huebner et al. argue that the far-flung nature of multi-site clinical trials makes them "inherently unreproducible" (p. 103). Wray argues that practices like refereeing are strained by such collaboration. Yet it is precisely mechanisms like reproducibility and peer review which are supposed to make the scientific community error-correcting. So the problems with recent large-scale collaboration really are something new.
The final two essays are concerned with methods for aggregating group opinion. The group here will typically be larger than even the coauthors in a radical collaboration. For example, we might ask what a whole scientific discipline's opinion is about some conjecture. Carlo Martini and Jan Sprenger review recent work in formal judgment aggregation. These methods take the beliefs of all the community members as inputs and calculate the group belief or aggregate opinion as an output. Martini and Sprenger discuss and contrast egalitarian models (in which the opinion of each community member is given an equal weight) and differential models (in which group members might count for more or less). Differential models are to be preferred precisely when we can distinguish which group members to count as experts.
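The contrast between egalitarian and differential models can be sketched as linear opinion pooling, a weighted average of individual credences. This is a minimal illustration of the general idea, not code from the paper; the credences and expert weights below are hypothetical.

```python
# Minimal sketch of linear opinion pooling: the group credence is a
# weighted average of individual credences. Weights are illustrative.

def pool(credences, weights=None):
    """Aggregate individual credences into a single group credence."""
    if weights is None:  # egalitarian model: every member counts equally
        weights = [1.0 / len(credences)] * len(credences)
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(c * w for c, w in zip(credences, weights))

credences = [0.9, 0.8, 0.3]           # three hypothetical community members
equal = pool(credences)               # egalitarian aggregate
expert = pool(credences, [0.6, 0.3, 0.1])  # differential: first member weighted as expert
```

On the egalitarian model the dissenting third member pulls the group credence down; on the differential model, where the first member is treated as the expert, the aggregate sits closer to her view.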
Denis Bonnay's contribution builds on the kinds of approaches discussed by Martini and Sprenger. For some groups, it will not make sense to attribute a single group belief or answer. Heterogeneous groups should instead be understood in terms of clusters of different opinions. For such groups, Bonnay suggests, there should be clustering methods for determining the subgroups to which we can attribute group beliefs or answers. Whereas a judgment aggregation procedure will deliver an answer at the level of a whole group, a clustering procedure would identify factions with different aggregate judgments. Bonnay sketches some constraints on what a clustering method should look like.
To return to considering the anthology as a whole, the editors are to be commended for collecting a highly focused, original, and engaging volume. All of the essays address the topic in distinctive ways, and I would be hard pressed to pick any as especially stronger than the others.
Philosophers of science and social epistemologists will find this collection highly rewarding. The corollary of this focus is that readers with little interest in formal social epistemology or empirically informed philosophy of science will, for just that reason, find little of interest in this excellent collection. Anyone interested in contemporary science ought to be interested in those things, but I have no space left to argue the point.