Interested in working on this research direction? Apply for our coaching
This profile is tailored towards students studying economics and philosophy; however, we expect there to be valuable open research questions that could be pursued by students in other disciplines.
Why is this a pressing problem?
How can we allocate limited resources to do as much good as possible? Are there global problems that haven’t yet been identified? What are the crucial considerations that could radically change our understanding of what it means to improve the world? And what methodologies should we use to answer these questions?
Research that contributes to answering these questions could make efforts to improve the world much more effective. Global priorities research is a relatively new, interdisciplinary field focused on answering these questions. It draws on a wide range of disciplines, including economics, philosophy, history and psychology, and encompasses a spectrum from more foundational to more applied research.
Foundational global priorities research explores the high-level questions raised by the aim of doing the most good possible, and tries to develop methodologies to answer these questions. For example, research could explore how best to account for the long-term and indirect effects of our actions, or whether we are living at a highly influential time in history (which would make the actions we take today particularly important from the perspective of improving the future).
More applied research draws on the understanding developed by foundational research to assess how global problems should be prioritised, depending on their severity and how promising the possible interventions seem. This profile focuses on more foundational research questions; however, you could do applied research aimed at quantifying the importance of the problems featured in many of our other profiles and estimating how promising interventions are.
You can read a longer introduction to this research direction from 80,000 Hours here, and see here for their update on why this research area seems particularly high priority.
Watch William MacAskill’s introduction to the goals and research of the Global Priorities Institute below.
Explore existing research
- Aschenbrenner, Leopold (2019) Existential Risk and Growth
- Buchak, Lara (2022) How Should Risk and Ambiguity Affect Our Charitable Giving?
- Greaves, Hilary & Toby Ord (2017) Moral Uncertainty about Population Axiology, Journal of Ethics and Social Philosophy
- Greaves, Hilary & William MacAskill, The Case for Strong Longtermism
- MacAskill, William (2020) Are We Living at the Hinge of History?
- Mogensen, Andreas (2022) Respect for Others’ Risk Attitudes and the Long-Run Future
- Trammell, Philip (2021) Dynamic Public Good Provision under Time Preference Heterogeneity: Theory and Applications to Philanthropy
- Wilkinson, Hayden (2022) Market Harms and Market Benefits, Philosophy and Public Affairs
You could also explore the research done at the Global Priorities Institute. The Global priorities research for economists syllabus also has many suggestions for further reading.
Focused more on foundational global priorities research:
- The Global Priorities Institute
- The Center for Reducing Suffering
- Center on Long-Term Risk
- NYU Mind, Ethics and Policy Program
- UT-Austin Population Wellbeing Initiative
- Effective Altruism Psychology Lab
Find a thesis topic
If you’re interested in working on this research direction, below are some ideas on what would be valuable to explore further. You can also explore the profiles at the end of this page for deeper dives into particular areas of global priorities research.
For a range of open research questions in this area, see the Global Priorities Institute’s research agenda. The Happier Lives Institute’s research agenda also includes both foundational and applied global priorities research questions through the lens of wellbeing.
If you want help refining your research ideas, apply for our coaching!
There are a number of themes in the Global Priorities Institute’s research agenda. Some examples:
- “A catastrophic risk can be called ‘existential’ to the extent that it threatens a large, permanent negative shock to the subsequent growth path. An even more precise characterisation of this property may be valuable. How can we best model the magnitude of the permanent costs associated with a given risk? (Ord forthcoming)”
- “…Much government policy, economic research, and philanthropic activity is intended ultimately to increase the general rate of economic growth. Economic growth could be extremely beneficial, from a long-term perspective, as it promises to improve the entire course of the future. However, technology-driven growth may raise existential risks, due for example to nuclear accidents, engineered pandemics or artificial superintelligence (INFORMAL: Yudkowsky 2013), and growth in general may have other negative effects (for instance, risks to human life (Jones 2016), climate change (IPCC 2014), or meat consumption (INFORMAL: Bogosian 2015)). How radically do these drawbacks render growth an imperfect proxy for expected long-term wellbeing? Is the correlation between consumption growth and long-term wellbeing even positive, given the current drivers of growth, from a geographical, sectoral and technological perspective? (Friedman 2006; Cowen 2007; Tomasik 2013; Cowen 2018) (INFORMAL: Beckstead 2014)”
- “How should we adapt key economic models to account for altruistic individuals with other-regarding preferences (Bergstrom 2002, Sobel 2005)? Under what assumptions do key results, such as the Fundamental Theorems of Welfare Economics, still hold (Schall 1972; Pollack 1976; Rotemberg 2003)? In cases that they do not, can analogous results be derived?”
See also the research directions suggested by the Forethought Foundation.
There are a number of possible research directions listed in the Global Priorities Institute’s research agenda. Some examples:
- Is longtermism true? That is, is the effect of our actions on the very long-term future the primary determinant of which actions are best to take today?
- “Does longtermism presuppose some particular, controversial population axiology, such as total utilitarianism? Or might longtermism be robustly supported across a range of minimally plausible population axiologies? If so, do different axiologies support different conclusions about intra-longtermist prioritisations (Beckstead 2013; Greaves and MacAskill 2019; Thomas 2019; Mogensen 2020b; Tarsney and Thomas 2020)?”
- “It is natural to think that in evaluating interventions, we should take into account all welfare-relevant effects of those interventions, not only those that are intended or direct. The argument that we should value indirect effects seems in tension with the view, held by some philosophers, that when deciding whom to aid, we are generally morally constrained to consider only the direct impact of our actions on those we can help, as opposed to the indirect utility of helping some rather than others (Kamm 1993; Brock 2003; Lippert-Rasmussen and Lauridsen 2010; Du Toit and Millum 2016). How is this tension best resolved (Mogensen 2020a) (INFORMAL: Greaves 2015)?”
- Should altruists be more concerned about bringing about the best possible outcomes than about ensuring that the worst possible outcomes don’t occur? “This could be because the costs of the worst outcomes are greater than the benefits of the best outcomes, because avoidance of the worst outcomes is more neglected, or because the worst outcomes should be given more weight than the best outcomes (Hurka 2010) (INFORMAL: Althaus and Gloor 2018; Tomasik 2018)? If so, what activities would be best? (MacAskill MS) (INFORMAL: Gloor 2018)”
- “To what extent should we be ‘risk averse’ in our approach to doing good, and what are the implications of risk aversion for how we should prioritise among charitable causes? (Quiggin 1982; Buchak 2013; Greaves et al. MS)”
- Given a decision between 1) a finite increase in value with certainty and 2) an arbitrarily large increase in value with very low probability, which option does the correct decision theory recommend? Are there plausible decision theories by which we can prefer (1) without encountering absurd implications? (Beckstead & Thomas 2021; Russell 2021; Wilkinson 2022)
- “Forecasting the long-term effects of our actions often requires us to make difficult comparisons between complex and messy bodies of competing evidence, a situation Greaves (2016) calls “complex cluelessness”. We must also reckon with our own incomplete awareness, that is, the likelihood that the long-run future will be shaped by events we’ve never considered and perhaps can’t fully imagine. What is the appropriate response to this sort of epistemic situation? For instance, does rationality require us to adopt precise subjective probabilities concerning the very-long-run effects of our actions, imprecise probabilities (and if so, how imprecise?), or some other sort of doxastic state entirely?”
- “Often it seems that subtle differences in epistemology would lead one to quite different conclusions concerning which interventions have the highest expected impartial value. These include differences in responses to paucity of hard evidence, in level of trust in abstract arguments leading to counterintuitive conclusions, in responses to interpersonal disagreement, and in the relative weight placed on different types of evidence. To what extent should this lack of robustness move us away from simply maximising expected value with respect to whatever credences we happen (now) to have? Is there a plausible alternative approach? (INFORMAL: Karnofsky 2016)”
See also the research directions suggested by the Forethought Foundation.
- Some worry that it is undemocratic and elitist for a small number of powerful individuals to direct large amounts of money through philanthropy, even when they target their philanthropy in highly effective ways (Lechterman 2019; 2021; Reich 2018; also see Matthews 2022). Are such worries justified? And, if so, can social and political structures be devised that make philanthropy more democratic and inclusive while still being highly effective? (Adapted from Longtermist political philosophy: an agenda for future research)
- “How does the adoption of a long-term perspective that rejects a positive rate of pure time preference, shape debates about feasibility, idealisation, and utopianism in political theory (Sen 2009; Lawford-Smith 2013; Estlund 2019)? Over very long timescales, feasibility constraints in politics weaken. Does a long-term perspective therefore support a renewed role for utopian political theorising? Or does it argue against a focus on utopian blueprints, in favour of designing open, exploratory institutions, best able to capitalise on anticipated future improvements in values and information (Gaus 2018; Barrett 2020)?”
- How do long-term considerations affect political morality? For example, if a version of longtermism is true for institutions yet states fail to meet their longtermist obligations, what does that imply for citizens and their political obligations and potential civic duties to effect change?
Philosophy of Science
- “Which current gaps in our knowledge regarding the very long term are particularly action relevant? In which scientific field or other domain could these gaps be closed by an increase in (reliable) long-term forecasts?”
Apply for our coaching and we can connect you directly with researchers and potentially mentors who can help you refine your research ideas. You can also apply to join our community if you’re interested in connecting with other students specifically.
Apply for our database of potential supervisors if you’re looking for formal supervision and take a look at our advice on finding a great supervisor for further ideas.
Our funding database can help you find potential sources of funding if you’re a PhD student interested in this research direction.
Sign up for our newsletter to hear about opportunities such as funding, internships and research roles.
See our introduction to prioritisation research to find other profiles related to the question of how to do as much good as possible.
This profile was published 9/1/2023. Thanks to Hayden Wilkinson and David Thorstad for their helpful feedback. All errors remain our own.