Schedule correct as of 12th October 2018.
Joseph Chappa. Reasons Not to Kill in War. (30min)
Abstract: Recent just war literature appropriately focuses on liability. But there has been insufficient focus on the objective badness of death. I argue that the objective moral badness of death obtains even when the person killed is liable to be killed. If this is right, it means that even just soldiers who justifiably target unjust soldiers who are liable to be killed face competing moral reasons. The objective moral badness of death is a reason not to kill even if the good to be obtained in defeating the enemy is a countervailing reason to kill. This tension is at the heart of what it means justifiably to kill in war and helps to define the role of the combatant. Moreover, it might help to explain a puzzle that has arisen in recent moral psychology literature. Psychologists have defined moral injury as a psychological response to the commission or observation of actions that transgress one’s deeply held sense of morality. Yet some just combatants experience moral injury after justifiable acts of killing. How can one suffer moral injury if one has done nothing immoral? The objective badness of death can account for this psychological phenomenon better than accounts focused solely on liability.
Johannes Fankhauser. Is the world just wavefunction? Determinism and the metaphysics of hidden variables in quantum mechanics. (60min)
Click here to download the paper.* Only available until 17th October 2018.
*This is a draft. Please focus on Chapter 2.
Abstract: We shall discuss a few ideas concerning determinism in quantum mechanics and the metaphysics of “hidden variables”. Given the assumption that quantum mechanics is true, i.e. the theory predicts the correct statistics for the outcomes of experiments, it turns out that any hidden variable model that determines outcomes exactly cannot be verified by experiment or observation. That is, no realistic theory of quantum mechanics provides a verifiable ontology. As a result, the hidden variables can be arbitrarily chosen and their dynamics is not unique. Since realist variables are inaccessible, any indeterministic model would serve the ontological commitment equally well. Hence, determinism cannot be known to be true and this challenges the idea that realist interpretations advance our current understanding of quantum mechanics.
Chiara Martini. Think Small: An Interpretation of The Epicurean Theory of Minima. (60min)
Abstract: This paper provides an original interpretation of the Epicurean doctrine of minimae partes, or ‘minima’, as it is presented in the Letter to Herodotus, 56-59. I try to take seriously the idea that the parts in question are supposed to be minima, hence the smallest possible, while avoiding counterintuitive conclusions, such as the existence of spatially extended chunks of matter without shape.
The paper is divided into two parts. First, I analyse the relevant passages of the Letter to Herodotus. Then, I suggest an interpretation of the notions of part and division which leads to a plausible interpretation of the notion of minima.
The key step of my analysis is the refinement of Furley’s distinction between physical and theoretical division, which is not fine-grained enough. By focusing both on the outcome of the division and on the way in which it is carried out, I am able to characterise a very specific kind of part, which I call ‘functional parts’. I believe that this notion can provide a coherent and intuitive account of Epicurean minima.
James Matharu. Describing Objects: Four Distinctions in Everyday Language. (60min)
Abstract: I briefly summarize a reading of G.E.M. Anscombe’s account of the objects of thought, existent and non-existent. I then focus on arguing that there are four broad kinds of object description. Elucidating the distinctions moves us toward solving a puzzle that arises from my reading. The arguments and project are a piece of ordinary language philosophy, or what might be called ‘descriptive metaphysics’ after Peter Strawson. So I am working directly from how we actually speak about objects and thought, and not from inside a particular theory.
Nathan Cofnas. Power in Cultural Evolution and the Spread of Prosocial Norms. (60min)
Abstract: According to “debunking arguments,” our moral beliefs are explained by evolutionary and cultural processes that are not truth tracking. Therefore (the debunkers say) our moral beliefs have no justification and we ought to be skeptics about moral realism. Huemer counters that “moral progress”—the alleged cross-cultural convergence on liberalism—cannot be explained by debunking arguments. According to him, the best explanation for the worldwide trend toward liberalism is that people have come to recognize the objective correctness of liberalism. The present paper argues, contra Huemer, that the trend toward liberalism is susceptible to a debunking explanation. The trend is driven by two related, non-truth-tracking processes. First, large numbers of people gravitate to liberal values for reasons of self-interest. Second, as societies become more prosperous and advanced, they become more effective at suppressing violence, and they create conditions where people are more likely to empathize with outgroup members. The latter process is not truth tracking (or so the present paper argues) since our aversion to violence and our tendency to empathize with others—at least under certain conditions—are themselves susceptible to debunking explanations. Liberalism is what Sperber calls a “cultural attractor”—a set of cultural norms that are attractive and stable. Because of historical accidents, Western Europe, and many cultures under Western European influence, have moved toward this attractor. Other cultures have settled on illiberal attractor positions, and are not converging on liberalism at all.
Robin Solberg. How to interpret mathematical modality? (30min)
Abstract: Potentialist views in the philosophy of mathematics have it that at least certain mathematical objects have some kind of potential existence. So, for example, Linnebo (2010, 2013) has argued that there is no definite height to the cumulative hierarchy, also known as the universe of sets, as he thinks it can always be potentially extended in height. He believes that necessarily, for any plurality of sets xx, there possibly is a set y such that y is not one of the xx (in particular, we could take y to be the set containing the plurality xx). A question that immediately arises is how to interpret the modal terms in such claims. I wish to explore and puzzle over some of the different options for interpreting this modality. Is it just metaphysical possibility? But doesn’t every pure set necessarily exist in the metaphysical sense? Is there some other sui generis notion of mathematical modality that we can use instead?
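Linnebo’s extendability claim can be sketched in modal plural logic. Here □ and ◇ express the mathematical modality whose interpretation is at issue, xx is a plural variable ranging over sets, and (following a common convention in plural logic) y ≺ xx abbreviates “y is one of the xx”:

```latex
% Extendability: necessarily, for any sets xx,
% there could be a set y that is not one of them
\Box\, \forall xx\; \Diamond\, \exists y\; \neg(y \prec xx)
% e.g. y could be the set whose members are exactly the xx
```

The question the abstract raises is precisely how the □ and ◇ in a claim of this shape are to be read.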
Matt Hewson. Epistemic utility theory and normative uncertainty. (30min)
Abstract: Some moral theorists worry about what we should do when we don’t know which moral theory is true. They think that what we should do depends, at least in part, on which moral theories we think might be true. On a pretty natural way of setting things up, this motivation can be extended to the epistemic domain. As a case study, I look at what happens when we admit normative uncertainty into the framework of epistemic utility theory. Epistemic utility theory gives us a formal way of adjudicating the epistemic goodness of various credence functions, using so-called ‘scoring rules’.
Thinking about things this way has a couple of interesting implications. First, it becomes less clear that a well-known objection to epistemic utility theory — the Bronfman objection — goes through. That’s not to say it doesn’t go through, just that whether it does depends on the correct account of normative uncertainty. The second interesting implication is for moral uncertainty. Applying the most popular account of reasoning under moral uncertainty to the epistemic case threatens to break the picture of reasoning that moral uncertaintists were hoping to secure in the first place.
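The scoring rules mentioned in the abstract can be illustrated with the standard Brier (quadratic) score, which measures the inaccuracy of a credence function c over a set of propositions p at a world w, where v_w(p) is p’s truth value (1 or 0) at w:

```latex
% Brier inaccuracy of credence function c at world w
\mathrm{B}(c, w) \;=\; \sum_{p} \bigl( v_w(p) - c(p) \bigr)^2
```

Lower scores mean greater accuracy; epistemic utility theory then evaluates credence functions by their expected inaccuracy under such a rule.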
Sam Clarke. Don’t Fail the Module! Or: why perception is (still) modular. (60min)
Abstract: Is perception modular? Fodor famously thought so – he deemed informational encapsulation the essence of a system’s modularity and argued that perceptual systems are modular in precisely this respect. Nowadays, few endorse his proposal. In large part, this is due to a considerable body of work that has been seen to evince the cognitive penetration of perceptual processing. Since cognitive penetration is typically deemed inconsistent with encapsulation, this threatens to refute the Fodorian modularity of perception. Here, I question the inconsistency. I argue that cognitive penetration does not imply unencapsulation in any straightforward way. Moreover, I propose that certain phenomena – likely to evince cognitive penetration – simultaneously provide evidence in favour of perceptual systems’ encapsulation. This defuses a prominent objection to the idea that perception is modular, and provides novel reason to think it is.