Prioritising existential and catastrophic risk reduction
What are some general strategies for mitigating catastrophic risk, and how should work on this be prioritised?
This profile is tailored towards students studying philosophy and economics; however, we expect there are valuable open research questions that could be pursued by students in other disciplines.
Why is this a pressing problem?
Existential risks are events that could cause human extinction or curtail humanity’s potential such that civilisation never fully recovers. They are a particularly threatening subset of global catastrophic risks: risks that could cause harm on a global scale, for example by killing hundreds of millions of people. Researchers have identified various potential existential and catastrophic risks, and further research to prevent or mitigate them could be very valuable. Potential existential and catastrophic risks include dangerous bioengineered pathogens, misaligned artificial general intelligence, great power war, nuclear winter, and natural pandemics. See our profiles listed here for a full list of research directions we recommend in this area.

There are also some underlying questions related to existential and catastrophic risks on which further research would be valuable, and which could inform how to prioritise efforts to reduce these risks. For example, it could be valuable to explore the ethics of catastrophic and existential risk reduction: how much of a priority is reducing these risks according to different value systems? Given our limited resources, how should we prioritise efforts to prevent these risks from occurring? How can we more accurately model their negative impacts? How should our uncertainty about how good the future will be for our descendants affect our thinking?
How to tackle this
Below are some broad questions from the Global Priorities Institute research agenda and the Forethought Foundation. Further work would be necessary to identify a sufficiently narrow research question if you want to pursue research in this area. See the GPI agenda for references and further questions.
- How can endogenous growth models be adapted to weigh the long-term benefits that growth may bring against the catastrophic risks that may accompany technological development?
- By analogy with the literature on the value of a statistical life, economists researching catastrophe have introduced the concept of the ‘value of a statistical civilisation’, which may be interpreted as the cost of premature human extinction. How should the value of a statistical civilisation be estimated, and what are its sign and magnitude?
- When confronted with multiple catastrophic risks, what is the optimal allocation of funds between investments aimed at preventing catastrophes, those aimed at mitigating their consequences, and other social investments?
- When an extinction risk is correlated with other extinction risks, the value of averting it is lower than when it is anti-correlated (see the sketch after this list). In light of this consideration, and the consideration above, how should funds best be spent to mitigate a portfolio of arbitrarily correlated extinction risks? (Forethought Foundation)
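As a rough illustration of the correlation point in the last question, the sketch below models two extinction risks as correlated Bernoulli events and computes how much survival probability is gained by averting one of them. The probabilities, correlation values, and function names are illustrative assumptions, not taken from any of the sources above.

```python
import math

def p_both(p, q, rho):
    """P(both risks strike) for two Bernoulli risks with correlation rho.
    rho must keep the result within [max(0, p + q - 1), min(p, q)]."""
    return p * q + rho * math.sqrt(p * (1 - p) * q * (1 - q))

def value_of_averting_x(p, q, rho):
    """Gain in survival probability from fully averting risk X.
    Baseline survival: P(neither) = 1 - p - q + P(both).
    With X averted:    P(not Y)   = 1 - q.
    The difference simplifies to p - P(both)."""
    both = p_both(p, q, rho)
    assert 0 <= both <= min(p, q), "rho is outside the feasible range"
    return (1 - q) - (1 - p - q + both)

p = q = 0.10  # hypothetical per-century extinction probabilities
for rho in (-0.1, 0.0, 0.5):
    print(f"rho = {rho:+.1f}: averting X gains {value_of_averting_x(p, q, rho):.3f}")
# Prints gains of 0.099, 0.090, 0.045: the more positively correlated the
# risks, the less survival probability averting one of them buys, because
# the other risk tends to strike in the same worlds.
```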
In The Precipice, philosopher Toby Ord suggests these as questions further researchers could address (quoted from the EA Forum):
- Develop better theoretical and practical tools for assessing risks with extremely high stakes that are either unprecedented or thought to have extremely low probability.
- Find the major existential risk factors and security factors—both in terms of absolute size and in the cost-effectiveness of marginal changes.
- Does the idea of option value provide strong reason to prevent human extinction even if there is uncertainty about the social value of the future? What is the chance that future decision-makers will use humanity’s ‘cosmic endowment’ such that current generations would be willing, now, to defer to them?
- What do the most plausible person-affecting views in population ethics say about the value of reducing extinction risk?
- How does the expected value of the future, given that the world was marginally spared some extinction event, depend on the extinction event in question? (For example, averting extinction from asteroid impact saves an ‘average’ world, whereas averting extinction from nuclear war only saves a world in which advanced human civilisations would otherwise be prone to nuclear war, which is presumably of lower value. A toy calculation after this list makes this concrete.) (Forethought Foundation)
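To make the parenthetical example in the last question concrete, here is a toy calculation. The two-type world model and all the numbers in it are hypothetical assumptions chosen for illustration, not estimates from the sources above.

```python
# Toy model: worlds are either "war-prone" or "peaceful". An asteroid strike
# is independent of this trait, so averting one saves an average world. A
# nuclear-war extinction event only occurs in war-prone worlds, so averting
# it saves a world revealed to be war-prone, which plausibly has lower
# long-run value because further wars remain likely. All numbers are made up.
p_prone = 0.5           # prior probability that a world is war-prone
value_peaceful = 100.0  # hypothetical long-run value of a peaceful world
value_prone = 40.0      # lower value: residual risk of future wars

# Averting an asteroid reveals nothing about the world's type.
value_saved_asteroid = p_prone * value_prone + (1 - p_prone) * value_peaceful

# Averting nuclear war conditions on the world being war-prone.
value_saved_nuclear = value_prone

print(f"asteroid: {value_saved_asteroid:.0f}, nuclear war: {value_saved_nuclear:.0f}")
# asteroid: 70, nuclear war: 40 -- the same extinction averted can be worth
# different amounts, depending on what the averted event implies about the world.
```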
Further reading
- Bostrom, N., Existential Risk Prevention as Global Priority
- Ord, T., The Precipice
- Marray, K., Dealing with uncertainty in ethical calculations of existential risk
To learn more, you could also look at this post on the ethics of existential risk.
Research organisations