Evaluating Strong Longtermism
Karri is currently a PhD student in Philosophy at University College London, as well as a Global Priorities Fellow of the Forethought Foundation. His work revolves around interpersonal aggregation, population ethics, and theories of intergenerational justice. He is also interested in some applied issues, including animal ethics and, unsurprisingly, issues related to effective altruism. Outside of philosophy, he co-founded and served as a Trustee of Open Cages UK, an EA-aligned animal advocacy group.
What was your thesis topic?
My thesis asks whether we should accept an ethical view, well-known to most people in the EA community, called strong longtermism (Greaves and MacAskill 2021). I discuss what I take to be the three most pressing objections to this view: anti-aggregative moral views, the procreation asymmetry, and decision-theoretic fanaticism. I argue that the first two in particular pose a serious problem for the standard case for strong longtermism. However, in the final chapter of the thesis, I also very tentatively suggest that these objections could be overcome by framing strong longtermism as a public philosophy. This means that the view should be understood as guiding our institutions and public actors, rather than as a blueprint for individual morality.
What do you think the stronger and weaker parts of your research are?
On the stronger side, a version of the second chapter of my thesis, namely the one on strong longtermism and anti-aggregative moral views, won the prize for the best philosophy paper written by a graduate student in GPI’s 2022 Prizes in Global Priorities Research, and it has since been published as a working paper on their website.
On the weaker side, I think that my fifth chapter, namely the one on strong longtermism as a public philosophy, would still need a lot of development – while I think the suggestion I put forward is interesting, there are some big objections that would need to be ironed out to make the argument compelling overall. I hope to think about this more in the future.
What recommendations would you make to others interested in taking a similar direction with their research?
I find it difficult to give much advice on writing theses in general, as what you should do can vary wildly depending on your field of study, the program you are on, your future goals, and specific factors relating to progression in your (academic) career. The one piece of advice that I have found almost universally useful for writing philosophy, however, is to start writing directly from what you think is the most interesting point you have. In other words, avoid spending days perfecting your introduction or trying to map every possible objection – instead, just write out your core thought and let the piece grow from there. Philosophy papers typically go through many rounds of revisions, so you will have more than enough time to iron out that introduction. Starting from the middle, so to speak, gets the ball rolling and brings further ideas with it. In the case of my thesis, this first thought was the tension between anti-aggregative moral views and strong longtermism, which grew into what I think is the best chapter of my thesis.
In what ways have you changed your mind since you finished writing it?
When writing my thesis, I argued that fanaticism is not as big a problem for strong longtermism as it is often taken to be. This is because those in favour of strong longtermism can still generate a meaningful version of the view by accepting a decision rule called tail discounting, which avoids fanaticism. While tail discounting has some counterintuitive implications, Thomas and Beckstead (2021) show that any decision rule that avoids fanaticism will have to bite some bullets, and I was inclined to think that the problems with tail discounting are not particularly bad in this regard. However, since writing the thesis, I have learned that I overlooked some problems with this view, the most important being that tail discounting leaves one vulnerable to money pumps. This makes tail discounting less plausible than I initially thought.