Why work on reducing existential and catastrophic risks?
Existential and global catastrophic risk research focuses on understanding and mitigating events that could damage well-being on a global scale – potentially even causing the extinction of humanity or a drastic and permanent curtailment of its potential.
Prioritising the reduction of these risks is sometimes informed by an ethical stance known as longtermism. Longtermism holds that because the likely number of our descendants, and the potential for their flourishing in the aeons to come, is so astronomically high, we must strongly prioritise not squandering that potential.
Until recently, existential threats came mainly from the natural world – supervolcanoes, asteroids, or naturally occurring pandemics. These remain comparatively impactful areas to research.
Currently, however, the greatest threats are likely ones we might bring about ourselves – ‘anthropogenic’ risks. Nuclear war (and an ensuing nuclear winter) and particularly severe climate change scenarios are among the most salient examples, and both are excellent areas for research. However, the very greatest threats likely come from engineered pandemics and from the possibility of developing advanced artificial intelligence that is misaligned with human values. Given the huge commercial incentives for developing the relevant technologies, the impossibility of perfect global regulation, and these technologies’ potential role in solving other major problems, hoping simply to halt all progress in these fields seems unrealistic. We need our best minds at the table shaping their trajectory, which as it stands is a grim one. In Toby Ord’s 2020 analysis, these two risks drive the aggregate risk that we suffer an existential or catastrophic event in the next century to a sobering 1 in 6.
So some of our greatest threats come from nascent and quickly developing technologies about which there is still much to learn – which places huge importance on thoughtful research. This may in fact be the most significant time to be doing it: because these technologies are new, there are many technical questions still to resolve, and the social norms and policy surrounding them remain early-stage and malleable.
Here are some steps that can help you do research in this area.
Explore our research direction profiles below this introduction for some deeper dives into how research could help reduce the likelihood of existential and catastrophic risks.
- This syllabus on existential risk from Toby Ord contains many options for further reading.
- If you’re interested in learning more about existential risk, you could also apply for these introductory programmes run by the Cambridge Existential Risk Initiative.
Some research agendas with questions relevant to tackling a broad range of existential and catastrophic risks include:
- This research agenda from the Legal Priorities Project.
- 80,000 Hours’ list of longtermist policy ideas.
- The research priorities from the Forethought Foundation.
Explore our research direction profiles below for many more questions.
Apply for our coaching for personalised guidance on getting started in this area and to be connected with researchers who can help you refine your ideas.
You can also join our community if you’re interested in connecting with other students in particular.
See our funding database for some potential sources of funding in this area.
Sign up for our newsletter to hear about opportunities such as funding, internships and research roles.
This introduction was last updated 21/01/2023. Thanks to Will Fenning for originally writing this introduction.
Explore recommended research directions in this category
Search for profiles that are tailored specifically to your degree discipline using the menu below.
If you are interested in a profile that isn’t listed under your discipline, we still encourage you to explore it if you think you could make progress in this direction. You can also explore all our recommended directions organised by theme.