Moral Uncertainty and Value Incomparability: Generalizing the Expected Choice-Worthiness Approach
Rafael Ruiz de Lira
Rafael Ruiz de Lira holds Master's degrees in philosophy and political philosophy. He plans to begin a PhD on moral progress across generations, and has founded Futurosophia, an organisation focused on building the Effective Altruism community in the Spanish-speaking world.
What did you study and what are your future career plans?
I have a Master’s Degree in Philosophy from King’s College London and a Master’s Degree in Political Philosophy from Pompeu Fabra University (Spain). This was the thesis I wrote for King’s College London.
I hope to start a Ph.D. in Philosophy soon. The Ph.D. thesis is likely to be on the topic of moral progress across generations (the inclusivist expanding circle of moral consideration that philosophers such as Peter Singer have argued for), which I hope to turn into a book that should be of interest to many Effective Altruists.
Since then, I have founded an Effective Altruist organization called Futurosophia (www.futurosophia.com), which aims at Effective Altruism community building in the Spanish-speaking world. In the long term, I would like to grow our organization, become a researcher in philosophy (in the fields of ethics and political philosophy), and work for an EA-aligned think tank.
What do you think the stronger and weaker parts of your thesis are?
The stronger parts include a clear and concise presentation of the topic of Moral and Normative Uncertainty. The thesis also defends a particular view on the topic of value incomparability across theories, which can be called the universal scale account.
Its main weakness is its reliance on external literature to criticize one of its main theoretical adversaries: the use of Voting Rules (particularly, the Borda Voting Rule) that William MacAskill defends in his Ph.D. dissertation and 2020 book. In particular, I rely heavily on work that Christian Tarsney published at around the same time as I was developing my criticisms. I also rely on some formalizations from Stefan Riedener's Ph.D. thesis.
In what ways have you changed your mind since you finished writing your thesis?
I still hold roughly the same beliefs on the topic of Moral Uncertainty, though I have come to believe that a convincing account of value incomparability, one that solves the major problems and allows straightforward comparisons across theories, is extremely difficult to develop, if not impossible. This might explain why some people are moving away from the topic. The field of Moral Uncertainty is very complex and still needs much more work, but not many people have the demanding background that is needed to make progress on it.
In what ways do you think your topic improves the world?
Normative fields, such as ethics and epistemology, are extremely difficult to figure out. They also dictate what we should do, both in daily situations and in difficult moral dilemmas. Philosophers have been disagreeing about morality for thousands of years, and many key questions are likely to remain open for thousands of years more. For this reason, it is good to make moral choices that many theories uphold, instead of wholly relying on one moral theory that might turn out to be false. It is good practice to try to avoid taking too many moral risks. One example of what this metanormative theory allows you to argue for: upholding animal rights and the rights of future generations even if a person doesn't fully believe in consequentialism, because the harm, if consequentialism turns out to be true, is so great that the risk should be avoided. So every rational agent who thinks that consequentialism might be true should be vegan and should protect the interests of future generations (e.g. by advocating to combat climate change, protecting us from catastrophic and existential risks, etc.).
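The reasoning above is the logic of maximizing expected choice-worthiness: weight each theory's verdict by your credence in that theory. A minimal numerical sketch, with purely hypothetical credences and choice-worthiness numbers, and assuming (contentiously, as the thesis discusses) that the theories' values can be placed on a common cardinal scale:

```python
# Toy illustration of maximizing expected choice-worthiness (MEC).
# All numbers are hypothetical and assume intertheoretic comparability.

credences = {"consequentialism": 0.3, "deontology": 0.7}

# Choice-worthiness of each option under each theory (hypothetical values):
# consequentialism rates eating meat as a large harm; deontology is
# roughly indifferent, or mildly permissive, between the two options.
choice_worthiness = {
    "eat_meat": {"consequentialism": -100.0, "deontology": 1.0},
    "go_vegan": {"consequentialism": 10.0, "deontology": 0.0},
}

def expected_choice_worthiness(option):
    """Credence-weighted sum of the option's value across theories."""
    return sum(credences[t] * cw for t, cw in choice_worthiness[option].items())

for option in choice_worthiness:
    print(option, expected_choice_worthiness(option))
```

Even with only 30% credence in consequentialism, the large downside it assigns to eating meat dominates the calculation, so going vegan maximizes expected choice-worthiness. This is the sense in which the approach counsels avoiding large moral risks.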
However, spelling out philosophically and mathematically what this metanormative idea means for the whole of normative ethics is extremely difficult. My thesis contributes to the question of what to do with very different theories (for example, forms of deontology and of consequentialism), which is the hardest version of this problem.
What recommendations would you make to others interested in taking a similar direction with their research?
For this particular topic, I recommend an advanced background in both philosophy and mathematics (or something that uses decision theory, like economics), such as a bachelor’s degree in both fields. Particularly, advanced knowledge of both normative ethics and decision theory is required. Some knowledge of metaethics can also be useful.