Silvana completed a bachelor’s degree in Philosophy & Economics at the University of Bayreuth. She is now finishing a second bachelor’s degree in Business Administration at the same university before pursuing philosophy further in her master’s studies.
Summary of thesis
This thesis addresses morally conscientious agents who want to avoid committing moral wrongs under normative uncertainty. While there has been extensive theoretical discussion about which metanormative theory is correct, the practical problems have not been addressed sufficiently. I assume Maximizing Expected Choice-Worthiness to be the correct decision-making approach under normative uncertainty for interval-scale measurable and intertheoretically comparable first-order moral theories, and ask how we can improve decision-making given the possible influence of cognitive biases on metanormative judgments. Several pathways by which biases can influence metanormative decisions are identified; however, this thesis addresses only one of them: the distribution of subjective probabilities (credences) across first-order normative theories.
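To make the decision procedure concrete, here is a minimal sketch of Maximizing Expected Choice-Worthiness. The theories, credences, and choice-worthiness numbers are purely illustrative (they are not from the thesis), and the sketch assumes the two conditions the thesis adopts: interval-scale measurability and intertheoretic comparability.

```python
# Illustrative credences across first-order normative theories.
credences = {"utilitarianism": 0.6, "kantianism": 0.4}

# Hypothetical choice-worthiness of each option under each theory
# (interval-scale and intertheoretically comparable, by assumption).
choice_worthiness = {
    "eat_meat": {"utilitarianism": -10, "kantianism": 0},
    "go_vegan": {"utilitarianism": 5,   "kantianism": 0},
}

def expected_choice_worthiness(option):
    # Credence-weighted sum of the option's choice-worthiness over theories.
    return sum(credences[t] * choice_worthiness[option][t] for t in credences)

# MEC recommends the option with the highest expected choice-worthiness.
best = max(choice_worthiness, key=expected_choice_worthiness)
print(best)  # go_vegan (EC-W 3.0 vs. -6.0 for eat_meat)
```

The point of the sketch is only that the credence distribution enters the calculation directly, which is why biases in how we assign credences matter for the verdict.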
After arguing for the relevance of cognitive biases in metanormative thinking, the thesis follows a three-step approach to improving decision-making as a necessary means of avoiding moral wrongs. First, a preliminary normative model of what good thinking under normative uncertainty requires is developed, consisting of a small set of rules. The descriptive part then shows that availability and status quo thinking produce biases that we have reason to believe also exist in metanormative judgments. Finally, the prescriptive part sketches the debiasing process and suggests possible starting points, concluding that further research needs to provide descriptive insights and highlighting the relevance of cognitive biases for practical philosophy.
Why is this important
All decisions about how best to distribute resources are affected by the normative views one holds. Moral uncertainty is therefore a worthwhile topic underlying all EA thought. My thesis addresses a practical problem for improving decision-making: one may be biased when choosing the normative criteria by which one evaluates options. This consideration is relevant and, I believe, worth exploring if moral uncertainty is to be taken seriously and if we want to apply procedures like Maximizing Expected Choice-Worthiness successfully in practice. So, even if there is a lot more theoretical research to do, my thesis aims to offer an early practical perspective on what might go wrong even with the best intentions.
Strengths of this thesis include:
The presentation of an original argument. In the very introduction of their new book “Moral Uncertainty” (2020), William MacAskill, Krister Bykvist and Toby Ord point to biases in a single sentence. So some thought may have gone in that direction already; however, I have not seen an analysis like mine, which clearly points out the relevance of cognitive biases for successful metanormative thinking.
Part 2 on cognitive biases in metanormative judgments contains some generally relevant thoughts, and part 4 offers an analysis of where availability and status quo thinking may influence our judgments about the plausibility of first-order normative theories. I think both are interesting starting points for people who want to go deeper into this topic.
Weaknesses of this thesis include:
The attempt to cover too much ground. I wanted to make the case for biases in metanormative thinking in general AND to present the debiasing process, which includes a normative model of how we should think about such issues, a descriptive part about how we actually think, and a prescriptive part on how to close that gap. There is a lot in there, but not always in great depth. For example, the normative model is very basic, as it was not my focus.
It is not an easy read for someone unfamiliar with the literature on moral uncertainty. The topic is complex, and so is the language. I did not make it easy for uninformed readers to follow: I used few examples and did not give a long introduction to why we should be morally uncertain or why we should maximize expected choice-worthiness. (This allowed me to cover more of my own thoughts without reproducing much of the existing literature.)
It rests on the assumption that biases do not work in our favour. But it may be that we are biased in ways that lead to better outcomes. I address this briefly, arguing that debiasing is the better general strategy, but I might be wrong here.
How I have changed my mind since finishing my thesis
I focused on one particular way in which biases can influence metanormative thinking: by influencing how we distribute credence across first-order normative theories. I believed this was the most important pathway because it directly affects the outcome. If I were to write this thesis again, I would perhaps spend more time discussing whether biases in other metanormative processes, for example biases at higher levels or biases that come into play when we apply normative principles, are even more important.
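A toy illustration of the pathway I focused on (my own numbers, not from the thesis): a bias that merely shifts credence across theories can flip the MEC verdict even though the choice-worthiness values themselves are unchanged. The options, theories, and the size of the credence shift below are all hypothetical.

```python
def mec_verdict(credences, cw):
    # Pick the option maximizing credence-weighted choice-worthiness.
    return max(cw, key=lambda o: sum(credences[t] * cw[o][t] for t in credences))

# Hypothetical choice-worthiness table (interval-scale, comparable by assumption).
cw = {
    "option_a": {"util": 5,  "kant": -2},
    "option_b": {"util": -1, "kant": 3},
}

unbiased = {"util": 0.5, "kant": 0.5}
# Suppose, say, availability bias after a vivid Kant seminar inflates
# one's credence in Kantianism without any new argument.
biased = {"util": 0.3, "kant": 0.7}

print(mec_verdict(unbiased, cw))  # option_a (1.5 vs. 1.0)
print(mec_verdict(biased, cw))    # option_b (0.1 vs. 1.8)
```

Nothing about the options changed between the two runs; only the credence distribution did, which is why this pathway seemed to me to affect the outcome most directly.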
Recommendation based on my experience
If you want to work on this topic at a theoretical level: start by reading “Moral Uncertainty” (2020) by MacAskill, Bykvist and Ord. It captures most of the relevant literature in the field. They present the current solutions to problems such as how to compare values across different theories (is some action worse according to utilitarianism or according to Kantianism?). The practical side is also covered: there is, for example, a chapter on the value of new information and on the diverse implications that taking moral uncertainty seriously has for practical ethics, which seem to be more complex than previously thought.
What would be really interesting is empirical research on how we actually form judgments about the plausibility of first-order normative theories. Does the fact that I just took a very cool seminar on Kantianism really bias me in that direction? If all my friends are fans of utilitarianism, am I more likely to judge it as more plausible, simply because other people do and not because of some superior property of the theory? If someone does research like this, please let me know; I am very interested!
If you read my thesis and have thoughts to share, I’d be very happy to hear them! I am thinking about pursuing this and related topics further in my master’s, so I’d appreciate any input. Also, feel free to reach out to me if you choose a similar topic and feel I could help you in any way.