Foundational ethical questions: developing and cooperating between different value systems

Interested in working on this research direction? Apply for our coaching




This profile is tailored towards students studying philosophy; however, we expect there are also valuable open research questions that could be pursued by students in other disciplines. 

 

Why is this a pressing problem? 

We want to make the world a better place, but what does this mean? When acting to improve the world, we have to decide which kinds of outcomes are most desirable. For example: when should we prioritise alleviating suffering over increasing wellbeing? To what extent can good and bad experiences balance each other out? And how should we respond to moral uncertainty – that is, how should we cooperate between different ethical views?

See the video below for a discussion of moral uncertainty – the question of how to act when uncertain about what moral theory is right. 

https://youtu.be/_wLfNFDQSyw

 

How to tackle this

Below are some broad questions from the Global Priorities Institute research agenda and the research agenda of the Center for Reducing Suffering. Further work would be necessary to identify a sufficiently narrow research question if you want to pursue research in this area.

Philosophy

Some of the questions in the Global Priorities Institute’s research agenda are below:

  • Assess the expected value of the continued existence of humanity. Might this expected value be negative, or just unclear? How do our answers to these questions vary if we (i) assume utilitarianism; (ii) assume a non-utilitarian axiology; (iii) fully take axiological uncertainty into account?
  • Social welfare criteria that are used to compare states that differ in population size typically specify a critical welfare level at which lives that are added to the population have zero contributive value to social welfare. What kinds of lives have zero contributive value in this sense?
  • Are there instrumental goals on which competing axiologies converge? Given axiological uncertainty, can we make any claims about what sort of future we should try to aim for?
  • How likely is it that civilisation will converge on the correct moral theory given enough time? What implications does this have for cause prioritisation in the nearer term?

Some of the questions in the Center for Reducing Suffering’s research agenda are below:

  • How can concern for suffering be combined with other values — such as fairness considerations, respecting individuals’ consent, or deontological side-constraints — to avoid counterintuitive implications? See e.g. the pluralist suffering-focused views of Clark Wolf and Jamie Mayerfeld.
  • Can the concept of psychological bearability or unbearability help provide a plausible account of value lexicality? How does this relate to the intensity, duration, or other aspects of an experience? 
  • What are the best arguments against the Asymmetry in population ethics, and what might be the most plausible replies to these arguments?
  • A number of consequentialist views, such as classical utilitarianism and some versions of prioritarianism, imply the Very Repugnant Conclusion in population ethics. What arguments can be made for and against this conclusion? 
  • Can suffering-focused population ethics be a viable solution to the well-known problems and impossibility theorems of the field? What problems arise in a suffering-focused account, and how could a proponent respond?

 

Who is already working on this? 

Research in this area

Organisations working in this area