Preventing S-risks
Research to reduce the risk of astronomical suffering in the future

Interested in working on this research direction? Apply for our coaching



Why is this a pressing problem?

‘Suffering risks’ or ‘s-risks’ are possible future events that would bring about suffering on an astronomical scale – for instance, if technological advances lead to space colonisation, suffering could come to vastly exceed anything that has existed on Earth so far. While s-risks are therefore speculative, there are various reasons – based on historical events and present-day developments – to believe they could come about in future.

Firstly, new technologies – such as artificial general intelligence, or even technologies we haven’t anticipated – could increase the chance of unprecedented suffering. New technologies could concentrate, and even indefinitely consolidate, power in the hands of those who develop and control them; this could lead to situations such as leaders high in sadism or psychopathy being far more capable of remaining in power. AI systems could also cause vast harm in the case of an alignment ‘near-miss’, lock in values that permit large amounts of unnecessary suffering (e.g. in animals), and increase the potential downsides of large-scale conflict. Additionally, future AI systems could potentially be sentient and might themselves suffer. This could be a disaster, given the many reasons to expect that digital minds could come to outnumber biological minds – for instance, they may be more resource-efficient and faster to replicate.

Secondly, humanity’s history shows that we clearly can’t assume powerful technologies will always be used well: there are numerous examples of intentional cruelty and of indifference to the well-being of particular groups or of animals, as well as examples of technological advancement increasing the scale and severity of pre-existing harms (as in the case of factory farming).

Finally, there may be far more lives in the future than have ever existed to date, particularly if humanity colonises space. This could mean that many more humans, as well as farmed and wild animals, will have lives that could go well or badly. Another possibility is that space will be colonised by artificial agents. If these agents are sentient, this could be where most future happiness and suffering reside.

As research into how best to prevent s-risks is in its early stages, useful directions to focus on include exploring the plausibility of suffering-focused ethical views, increasing concern for suffering in order to build the field, and doing preliminary research on which interventions look most promising for reducing s-risks. Keep reading this introduction for ideas of research questions you could pursue in these areas.

You could also do research on more specific problems that could be promising to work on if your priority is reducing s-risks. See the profiles below this introduction for a range of research directions that are relevant to s-risks, but bear in mind not all questions in these profiles will be promising from the perspective of reducing s-risks. If you want to work on reducing s-risks, we recommend applying for coaching and reaching out to the Center for Reducing Suffering or the Center on Long-Term Risk for guidance on choosing a research question.

Next steps

This collection of suffering-focused ethics resources is a good way to get started if you want to learn more about this value system.

Organisations working on this problem include:

  • The Center for Reducing Suffering researches ethical views that give particular weight to reducing suffering, and considers practical approaches to reducing s-risks.
  • The Center on Long-Term Risk focuses on reducing s-risks that could arise from the development of AI, alongside community-building and grantmaking to support work on reducing s-risks.


Many other organisations work on problems that could be worth prioritising if your aim is to reduce s-risks. See the profiles we list below for further ideas.

As well as exploring the profiles listed below, which go into specific research directions in more depth, you could consider the following questions, which further research could help answer. If you’re interested in these or similar topics, apply for our coaching for help getting started.

Computer science

  • ‘Our research agenda Cooperation, Conflict, and Transformative Artificial Intelligence (TAI) is ultimately aimed at reducing risks of conflict among TAI-enabled actors. This means that we need to understand how future AI systems might interact with one another, especially in high-stakes situations. CLR researchers and affiliates are currently researching how the design of future AI systems might determine the prospects for avoiding cooperation failure, using the tools of game theory, machine learning, and other disciplines related to multi-agent systems (MAS). You can find an overview of our work in this area here.’ (Center on Long-Term Risk)
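
To make ‘cooperation failure’ concrete, here is a minimal sketch of the textbook game-theoretic example, a one-shot Prisoner’s Dilemma, in which each agent’s unilateral incentives lock both into a mutually worse outcome. The payoff numbers are arbitrary illustrations, and the code is not taken from CLR’s agenda:

```python
# A minimal, illustrative Prisoner's Dilemma. Payoff numbers are arbitrary;
# this is the textbook formalisation of a "cooperation failure", not code
# from CLR's research agenda.
from itertools import product

ACTIONS = ("cooperate", "defect")

# PAYOFFS[(row_action, col_action)] = (row player's payoff, column player's payoff)
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect", "cooperate"):    (5, 0),
    ("defect", "defect"):       (1, 1),
}

def is_nash_equilibrium(row, col):
    """True if neither player can gain by unilaterally changing their action."""
    row_payoff, col_payoff = PAYOFFS[(row, col)]
    row_is_best = all(PAYOFFS[(r, col)][0] <= row_payoff for r in ACTIONS)
    col_is_best = all(PAYOFFS[(row, c)][1] <= col_payoff for c in ACTIONS)
    return row_is_best and col_is_best

for profile in product(ACTIONS, repeat=2):
    if is_nash_equilibrium(*profile):
        print(profile, "is a Nash equilibrium with payoffs", PAYOFFS[profile])

# Output: only ('defect', 'defect') is an equilibrium, even though
# ('cooperate', 'cooperate') would leave both players better off: the sense
# in which rational agents can predictably reach a mutually worse outcome.
```

Much of the research agenda quoted above concerns how the design of future AI systems – for example, their ability to communicate or make credible commitments – might change the structure and outcomes of interactions like this one.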

Economics

  • How likely is it that we will see explosive economic growth and innovation in the future, akin to the industrial revolution? If so, when would this be likely to happen? How likely is a slowdown or stagnation? (See e.g. here and here, and the toy sketch after this list.) (Center for Reducing Suffering)
  • How does our ability to influence the long-term future compare to that of future individuals? Is our time uniquely influential, or do we expect better opportunities to arise in the future? (See here for more questions in this area.) (Center for Reducing Suffering)
  • A game theory background is useful for much of the research the Center on Long-Term Risk does – see this research agenda on avoiding conflict between transformative AI systems. You could focus on building your skills in game theory or do research that could contribute to some of these questions now.
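
As a toy illustration of what ‘explosive growth’ means formally (relevant to the first question in this list), the sketch below compares ordinary exponential growth with a super-exponential law, dY/dt = a·Y^(1+b), which diverges in finite time when b > 0. The growth law and all parameter values are illustrative assumptions, not estimates drawn from the sources cited above:

```python
# Toy numerical sketch comparing exponential with super-exponential growth.
# The growth law dY/dt = a * Y**(1 + b) and all parameter values here are
# illustrative assumptions, not estimates from the growth literature.

def doubling_times(a=0.03, b=0.1, y0=1.0, dt=0.01, t_max=400.0, y_cap=1e6):
    """Euler-integrate dY/dt = a * Y**(1+b); return the times at which Y doubles."""
    y, t, next_double, times = y0, 0.0, 2 * y0, []
    while t < t_max and y < y_cap:
        y += a * y ** (1 + b) * dt
        t += dt
        if y >= next_double:
            times.append(round(t, 1))
            next_double *= 2
    return times

print("super-exponential (b = 0.1):", doubling_times(b=0.1))
print("exponential       (b = 0.0):", doubling_times(b=0.0))
# With b > 0 the gaps between successive doublings shrink and Y diverges in
# finite time; with b = 0 the doubling time is constant (about ln(2)/a).
```

The qualitative signature of explosive growth here is that successive doubling times shrink rather than stay constant; with b = 0 the model reduces to ordinary exponential growth.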
 

Philosophy

  • How can concern for suffering be combined with other values – such as fairness considerations, respecting individuals’ consent, or deontological side-constraints – to avoid counterintuitive implications? See e.g. the pluralist suffering-focused views of Clark Wolf and Jamie Mayerfeld. (Center for Reducing Suffering)
  • According to Christoph Fehige’s antifrustrationism, a frustrated preference is bad, but the existence of a satisfied preference is not better than if the preference didn’t exist in the first place. Several authors have objected to antifrustrationism. How could a proponent of antifrustrationism respond? (More.) (Center for Reducing Suffering)
  • ‘Will artificial sentience be autonomous, capable of rational decision-making, or possess other characteristics beyond sentience that might affect (the perception of) moral obligations towards it or its capacity to advocate for its own interests?’ (Prioritization Questions for Artificial Sentience).
 

Political science

  • Are efforts to improve politics a cost-effective intervention from a longtermist perspective (and in terms of s-risks in particular)? Or is this area too intractable, risky, and crowded? (Center for Reducing Suffering)
  • How can we increase the propensity of political decisions to focus on widely shared, cooperative aims, such as the reduction of suffering, rather than getting caught up in political conflict? (Center for Reducing Suffering)
 

Psychology

  • One might argue that many people would give far more priority to suffering if only they were more exposed to it, yet we tend to look away, as it is often unpleasant to consider (severe) suffering. Similarly, studies suggest that people make more sympathetic moral judgments when they experience pain. To what extent is it true that mere (lack of) exposure or attention is a key factor in whether people prioritize the reduction of suffering, as opposed to deeper value differences? (Center for Reducing Suffering)
    • Conversely, what are possible reasons why [people with suffering-focused ethics] might be biased in favor of suffering-focused views?
  • What are the main arguments for and against moral advocacy? (Center for Reducing Suffering)
  • How can we best increase concern for suffering and motivate people to prevent it in cost-effective ways? How can we entrench concern for suffering at the level of our institutions and make its reduction a collective priority? (Center for Reducing Suffering)
  • Under what circumstances do humans interacting with an artificial agent become convinced that the agent’s commitments are credible (Section 3)? How do humans behave when they believe their AI counterpart’s commitments are credible or not? Are the literatures on trust and artificial agents (e.g., Grodzinsky et al., 2011; Coeckelbergh, 2012) and automation bias (Mosier et al., 1998; Skitka et al., 1999; Parasuraman and Manzey, 2010) helpful here? (See also Crandall et al. (2018), who develop an algorithm for promoting cooperation between humans and machines.) (Center on Long-Term Risk)
 


Our funding database can help you find potential sources of funding if you’re a PhD student interested in this research direction.

  • Sign up for our newsletter to hear about opportunities such as funding, internships and research roles.
  • Sign up to this newsletter for updates and opportunities from the Center for Reducing Suffering.

Contributors

This introduction was published 19/06/23. Thanks to Anthony DiGiovanni and Winston Oswald-Drummond for helpful feedback on this introduction. All errors remain our own.

