Research to reduce the risk of astronomical suffering in the future

Why is this a pressing problem?

‘Suffering risks’ or ‘s-risks’ are events that could bring about suffering on an astronomical scale – for example, if technological advancements lead to space colonisation. While s-risks are therefore speculative, there are various reasons – based on historical events and present-day developments – to believe they could come about in the future.

Firstly, new technologies – such as artificial general intelligence or even technologies we haven’t anticipated – could increase the chance of unprecedented suffering. New technologies could concentrate and even indefinitely consolidate power in the hands of those who develop and control them, making it much easier for leaders high in sadism or psychopathy to remain in power. AI systems could also cause vast harm in the case of an alignment ‘near-miss,’ lock in values that permit large amounts of unnecessary suffering (e.g. in animals), and increase the potential downsides of large-scale conflict. Additionally, future AI systems could potentially be sentient and might themselves suffer. This could be a disaster, given that there are many reasons to expect digital minds to eventually outnumber biological minds – for example, they may be more efficient and faster to replicate.

Secondly, humanity’s history shows that we clearly can’t assume powerful technologies will always be used well; history contains numerous examples of intentional cruelty or lack of interest in the well-being of different groups or animals, as well as examples of technological advancement increasing the scale and severity of pre-existing harms (such as in the case of factory farming).

Finally, there may be far more lives in the future than have ever existed to date, particularly if humanity colonises space. This could mean that many more humans, as well as farmed and wild animals, will have lives that could go well or badly. Another possibility is that space will be colonised with artificial agents. If these artificial agents are sentient, this could be where most future happiness and suffering reside.

As research into how best to prevent s-risks is in its early stages, useful directions to focus on include exploring the plausibility of suffering-focused ethical views, increasing concern for suffering in order to build the field, and doing preliminary research on the interventions that look most promising for reducing s-risks. Keep reading this introduction for ideas of research questions you could pursue in these areas.

You could also do research on more specific problems that could be promising to work on if your priority is reducing s-risks. See the profiles below this introduction for a range of research directions that are relevant to s-risks, but bear in mind not all questions in these profiles will be promising from the perspective of reducing s-risks. If you want to work on reducing s-risks, we recommend applying for coaching and reaching out to the Center for Reducing Suffering or the Center on Long-Term Risk for guidance on choosing a research question.

Contributors: This introduction was published 19/06/23. Thanks to Anthony DiGiovanni and Winston Oswald-Drummond for helpful feedback on this introduction. All errors remain our own.

