How should the possibility of AI sentience guide the development of AI and related institutions and norms?

This profile is tailored towards students studying biological sciences, computer science, history, law, philosophy, psychology and sociology. However, we expect there to be valuable open research questions that could be pursued by students in other disciplines.

Why is this a pressing problem?

AI systems are becoming increasingly powerful. They can already outperform humans in many narrow domains (for example, beating the best human players at a number of games and predicting the structure of proteins), and their capabilities are improving quickly. This raises important questions about how AI should be developed and governed in order to safeguard the wellbeing of humans and nonhuman animals, as these systems could increasingly pose a serious risk regardless of whether they are ever sentient. But if we care about the welfare of other beings – even beings very different from us – it's also important to explore whether sentience could emerge in future AI systems and how to respond to this possibility.

It’s currently far from clear that AI systems cannot be sentient. For instance, the largest survey of professional philosophers, last conducted in 2020, found that 50% of surveyed philosophers of mind believed or leant towards the view that some future AI systems will have conscious experiences. However, given our current state of knowledge, we risk creating conscious AI without realising it. Philosopher Robert Long writes, ‘we don’t yet know what conditions would need to be satisfied to ensure AI systems aren’t suffering, or what this would require in architectural and computational terms.’

If sentience did emerge in AI systems, would this be a problem? An important factor to consider is just how many digital minds there might be in the future. Provided the necessary hardware is available, software can be replicated much more rapidly than biological systems can reproduce. We could therefore be moving towards a future in which many, or even most, of the moral patients in the world are digital. Failing to treat these minds as moral patients could represent a catastrophe even larger in scale than the one faced by the billions of animals currently in factory farms.

There are a number of reasons to expect that if digital minds emerge, their welfare will not be considered important. The Sentience Institute lists reasons including humanity’s history of exploiting and neglecting to help other beings; the widespread existence of speciesism (the tendency to care less about other beings purely because they are of a different species); and scope insensitivity (failing to adequately account for the scale of problems).

The topic of AI sentience raises many questions on which relatively little research has been done. Could AI systems become sentient? How can we steer the development of AI to reduce the chance that AI systems suffer? What signs would indicate that they are suffering? Should we try to avoid creating sentient AI? What features (if any) other than sentience could be sufficient for AI systems to count as moral patients? What are current attitudes towards AI welfare, how might they evolve, and what institutions and norms could protect the rights of sentient AI systems?

In the podcast below, philosopher Thomas Metzinger discusses whether we should advocate for a moratorium on the development of artificial sentience.


Contributors: This profile was published 3/12/2022 and last updated 25/05/2023. Thanks to Professor Jonathan Simon, Janet Pauketat and Leonard Dung for their helpful feedback. All errors remain our own. Learn more about how we create our profiles.
