AI sentience, moral status and rights
How should the possibility of AI sentience guide the development of AI and related institutions and norms?
Interested in working on this research direction? Apply for our coaching
This profile is tailored towards students studying biological sciences, computer science, history, law, philosophy, psychology and sociology; however, we expect there to be valuable open research questions that could be pursued by students in other disciplines.
Why is this a pressing problem?
AI systems are becoming increasingly powerful. They already outperform humans in many narrow domains (for example, beating the best human players at a number of games and predicting the structure of proteins), and their capabilities are improving quickly. This raises important questions about how AI should be developed and governed to safeguard the wellbeing of humans and nonhuman animals, since these systems could pose serious risks regardless of whether they are ever sentient. But if we care about the welfare of other beings – even beings very different from us – it’s also important to explore whether sentience could emerge in future AI systems and how we should respond to that possibility.
It’s currently far from clear that AI systems cannot be sentient. For instance, the largest survey of professional philosophers, last conducted in 2020, found that 50% of surveyed philosophers of mind accepted or leant towards the view that some future AI systems will have conscious experiences. However, given our current state of knowledge, we risk creating conscious AI without realising we have done so. Philosopher Robert Long writes, ‘we don’t yet know what conditions would need to be satisfied to ensure AI systems aren’t suffering, or what this would require in architectural and computational terms.’
If sentience did emerge in AI systems, would this be a problem? An important factor to consider is just how many digital minds there might be in the future. Provided the necessary hardware is available, software can be replicated far more rapidly than biological systems can reproduce. We could therefore be moving towards a future in which many, or even most, of the moral patients in the world are digital. Failing to treat these minds as moral patients could represent a catastrophe even larger in scale than that of the billions of animals currently kept in factory farms.
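To see why replication speed matters, here is a deliberately crude back-of-envelope sketch in Python. Every number in it (copy time, hardware capacity) is an invented placeholder rather than an estimate; the only point is that a digital population is bounded by hardware, not by biological reproduction rates.

```python
import math

# Toy back-of-envelope sketch of the replication argument above.
# Every number here is an invented placeholder, not an estimate.
SECONDS_PER_YEAR = 365 * 24 * 3600
copy_time_s = 60       # assumed time to copy one digital mind (seconds)
hardware_slots = 1e12  # assumed number of minds the available hardware could host

# A single serial copying process already outpaces biological reproduction:
serial_per_year = SECONDS_PER_YEAR / copy_time_s
print(f"Serial copying alone: ~{serial_per_year:,.0f} new minds per year")

# If each copy can itself be copied, the population doubles every
# copy_time_s seconds until the hardware budget binds:
doublings = math.log2(hardware_slots)
print(f"Filling {hardware_slots:.0e} hardware slots takes only "
      f"~{doublings * copy_time_s / 60:.0f} minutes ({doublings:.0f} doublings)")
```

The specific numbers are meaningless; the structural point is that digital populations are limited by hardware and energy rather than by gestation times.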
There are a number of reasons to expect that if digital minds emerge, their welfare will not be considered important. The Sentience Institute lists reasons including humanity’s history of exploiting other beings and neglecting to help them, the widespread existence of speciesism (the tendency to care less about other beings purely because they are of a different species), and scope insensitivity (failing to adequately account for the scale of problems).
The topic of AI sentience raises many questions on which relatively little research has been done. Could AI systems become sentient? How can we steer the development of AI to reduce the chance that AI systems suffer? What signs would indicate that they are suffering? Should we try to avoid creating sentient AI? What features (if any) other than sentience could be sufficient for AI systems to count as moral patients? What are current attitudes towards AI welfare, how might these evolve, and what institutions and norms could protect the rights of sentient AI systems?
In the podcast below, philosopher Thomas Metzinger discusses whether we should advocate for a moratorium on the development of artificial sentience.
Explore existing research
- Danaher, John (2020) Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism, Science and Engineering Ethics
- Dehaene, Stanislas, Hakwan Lau, & Sid Kouider (2017) What is Consciousness, and Could Machines Have It?, Science
- Dung, Leonard (2022) Why the Epistemic Objection Against Using Sentience as Criterion of Moral Status is Flawed, Science and Engineering Ethics
- Elamrani, Aïda & Roman V. Yampolskiy (2019) Reviewing Tests for Machine Consciousness, Journal of Consciousness Studies
- Francken, Jolien C., et al. (2022) An academic survey on theoretical foundations, common assumptions and the current state of consciousness science, Neuroscience of Consciousness
- Gibert, Martin & Dominic Martin (2021) In search of the moral status of AI: why sentience is a strong argument, AI & Society
- Gamez, Patrick, Daniel B. Shank, Carson Arnold, & Mallory North (2020) Artificial virtue: the machine question and perceptions of moral character in artificial moral agents, AI & Society
- Gordon, John-Stewart and David J. Gunkel (2021) Moral Status and Intelligent Robots, The Southern Journal of Philosophy
- Harris, Jamie & Jacy Reese Anthis (2021) The Moral Consideration of Artificial Entities: A Literature Review, Science and Engineering Ethics
- Ladak, Ali (2023) What would qualify an artificial intelligence for moral standing?, AI and Ethics
- Long, Robert (2022) Key Questions about Artificial Sentience: An Opinionated Guide
- Martinez, Eric & Christoph Winter (2021) Protecting Sentient Artificial Intelligence: A Survey of Lay Intuitions on Standing, Personhood, and General Legal Protection, Frontiers in Robotics and AI
- Metzinger, Thomas (2021) Artificial Suffering: An Argument for a Global Moratorium on Synthetic Phenomenology, Journal of Artificial Intelligence and Consciousness
- Muehlhauser, Luke (2017) Report on Consciousness and Moral Patienthood, Open Philanthropy Project
- Pennartz, Cyriel M. A., Michele Farisco, & Kathinka Evers (2019) Indicators and Criteria of Consciousness in Animals and Intelligent Machines: An Inside-Out Approach, Frontiers in Systems Neuroscience
- Pauketat, Janet V. T., Ali Ladak, & Jacy Reese Anthis (2022) Artificial Intelligence, Morality, and Sentience (AIMS) Survey
- Pauketat, Janet V. T. & Jacy Reese Anthis (2022) Predicting the moral consideration of artificial intelligences, Computers in Human Behavior
- Schwitzgebel, Eric & Mara Garza (2015) A Defense of the Rights of Artificial Intelligences, Midwest Studies in Philosophy
- Shevlin, Henry (2020) General Intelligence: An Ecumenical Heuristic for Artificial Consciousness Research?, Journal of Artificial Intelligence and Consciousness
- Shevlin, Henry (2021) How Could We Know When a Robot was a Moral Patient?, Cambridge Quarterly of Healthcare Ethics
- Shulman, Carl & Nick Bostrom (2021) Sharing the World with Digital Minds
- Tomasik, Brian (2014) Do Artificial Reinforcement-Learning Agents Matter Morally?
- AI, Mind and Society (“AIMS”) Group at the University of Connecticut
- NYU Mind, Ethics and Policy Program
- The Digital Minds Project, a collaboration between researchers at the Future of Humanity Institute (FHI), the Quebec Artificial Intelligence Institute and the University of Montreal
Find a thesis topic
If you’re interested in working on this research direction, below are some ideas on what would be valuable to explore further. If you want help refining your research ideas, identifying a suitable supervisor or finding funding, apply for our coaching!
- ‘What is the precise computational theory that specifies what it takes for a biological or artificial system to have various kinds of conscious, valenced experiences—that is, conscious experiences that are pleasant or unpleasant, such as pain, fear, and anguish or pleasure, satisfaction, and bliss?’ (Key questions about artificial sentience: an opinionated guide)
- ‘What exactly does it mean for a system to have a ‘global workspace’? What exactly does it take for a representation to be ‘broadcast’ to it? What processes, exactly, count as higher-order representation? How are attention schemas realized? To what extent are these theories even inconsistent with each other – what different predictions do they make, and how can we experimentally test these predictions?…constructing computational theories which try to explain the full range of phenomena could pay significant dividends for thinking about AI consciousness.’
- ‘In addition to wanting a theory of consciousness in general, we want a theory of (conscious) valenced experiences: when and why is a system capable of experiencing conscious pain or pleasure? Even if we remain uncertain about phenomenal consciousness in general, being able to pick out systems that are especially likely to have valenced experiences could be very important, given the close relationship between valence and welfare and value.’
- ‘Assuming we are several decades away from having a convincing theory of consciousness, what should our “best theory-agnostic guess” about the distribution question be in the meantime?’ (Report on Consciousness and Moral Patienthood)
- ‘It could be useful for a programmer to do something similar to my incomplete MESH: Hero exercise here, but with a new program written from scratch, and with many more (increasingly complicated) versions of it coded so that the consciousness experts and moral philosophers can indicate for each version of the program whether they think it is “conscious,” whether they consider it a moral patient (assuming functionalism), and why.’ (Report on Consciousness and Moral Patienthood – see the report for the complete question).
If you’re able to do interdisciplinary research across computer science and neuroscience:
- ‘What is the precise computational theory that specifies what it takes for a biological or artificial system to have various kinds of conscious, valenced experiences—that is, conscious experiences that are pleasant or unpleasant, such as pain, fear, and anguish or pleasure, satisfaction, and bliss?’ (Key questions about artificial sentience: an opinionated guide)
- ‘What exactly does it mean for a system to have a ‘global workspace’? What exactly does it take for a representation to be ‘broadcast’ to it? What processes, exactly, count as higher-order representation? How are attention schemas realized? To what extent are these theories even inconsistent with each other – what different predictions do they make, and how can we experimentally test these predictions?…constructing computational theories which try to explain the full range of phenomena could pay significant dividends for thinking about AI consciousness.’ (see the toy sketch after this list)
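The global workspace questions above are easier to engage with given a concrete model. The sketch below is one deliberately minimal reading of the theory’s terms, not a claim about how global workspace theory must be formalised: modules propose representations, the most salient proposal wins access to a shared workspace, and the winner is ‘broadcast’ back to every module. All class and module names are illustrative inventions.

```python
from dataclasses import dataclass

# A deliberately minimal reading of a 'global workspace': modules
# propose representations with a salience score, the most salient
# proposal wins the competition for workspace access, and the winner
# is broadcast back to all modules. Real global workspace theory is
# far richer; this only makes 'workspace' and 'broadcast' concrete.

@dataclass
class Representation:
    source: str
    content: str
    salience: float

class Module:
    def __init__(self, name: str):
        self.name = name
        self.received: list[Representation] = []

    def propose(self, content: str, salience: float) -> Representation:
        return Representation(self.name, content, salience)

    def receive(self, rep: Representation) -> None:
        # 'Broadcast' here just means: every module gets the winner.
        self.received.append(rep)

def workspace_cycle(modules: list[Module],
                    proposals: list[Representation]) -> Representation:
    winner = max(proposals, key=lambda r: r.salience)  # competition for access
    for m in modules:
        m.receive(winner)                              # global broadcast
    return winner

vision, pain, planning = Module("vision"), Module("pain"), Module("planning")
winner = workspace_cycle([vision, pain, planning], [
    vision.propose("red circle ahead", salience=0.4),
    pain.propose("damage signal, left side", salience=0.9),
])
print(f"Broadcast from {winner.source}: {winner.content}")
```

Whether anything as thin as this competition-and-broadcast loop could suffice for consciousness, or what further architectural and computational conditions would be required, is exactly what the questions above ask.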
Some possible directions for further research that could be explored by examining analogous historical events include:
- ‘How would AS advocacy affect the trajectory of academic work related to artificial sentience? E.g. would it lead to new ideas and foci or just reinforce the current ones?’ (The History of AI Rights Research)
- ‘What effects would AS advocacy have on AI designers and researchers? E.g. would it polarize these communities? Would it slow down AI safety research?’ (The History of AI Rights Research)
- ‘What effects would AS advocacy have on the credibility and resources of other movements with which it is associated (e.g. animal advocacy, effective altruism)?’ (The History of AI Rights Research)
- ‘Can the trajectory (e.g. development, spread, and regulation) of new technologies be influenced in its early stages by thoughtful actors?’ (Prioritization Questions for Artificial Sentience)
- ‘How much leverage would thoughtful, effectiveness-focused advocates have over a nascent AS advocacy movement?’ (Prioritization Questions for Artificial Sentience)
Law
- ‘What are the most effective ways to protect sentience and design institutions accordingly? Is a “Universal Declaration of Sentient Rights” feasible, and what would it look like (see Woodhouse, 2019)?…Is the traditional legal bifurcation between “persons” and “things” capable of protecting all sentient beings (Kurki & Pietrzykowski, 2017)? How might institutions resolve tradeoffs between very different kinds of interests on behalf of very different kinds of sentient beings (Stawasz, 2020)? How should legal institutions deal with uncertainty regarding what constitutes consciousness (Bourget & Chalmers, 2013), and what entities can be considered as sentient (cf. Sebo, 2018)? What can we learn from the field of animal law, where definitions and attributions of sentience have occasionally been incorporated within laws?’ (Legal Priorities Project)
- ‘What are the most effective ways to expand the judicial moral circle to include all sentient beings for the long-term future?’ (Legal Priorities Project)
- ‘Will artificial sentience be autonomous, capable of rational decision-making, or possess other characteristics beyond sentience that might affect (the perception of) moral obligations towards it or its capacity to advocate for its own interests?’ (Prioritization Questions for Artificial Sentience).
Psychology and cognitive science
- ‘More research into how opposition to the malevolent treatment of sentient AIs affects willingness to advocate or protest on their behalf could help to lay the groundwork for advocacy for sentient AIs in the future.’ (Artificial Intelligence, Morality, and Sentience (AIMS) Survey: 2021)
- ‘How do humans perceive nonhuman AIs who may be homogeneous in appearance or behavior as individuals rather than as a group or an exemplar of a group? What effect does individuation have on the moral consideration of a specific individual? What effect does individuation have on the moral consideration of the whole species or group? (This project idea is analogous to an in-progress project on the individuation of animals.)’ (Artificial Intelligence, Morality, and Sentience (AIMS) Survey: 2021)
- ‘It would be useful to build a range of potential evaluations for machine consciousness and sentience—evaluations that adequately reflect our uncertainty across our various theories of both. How much evidence each of these evaluations provide will inevitably depend on the different accounts of consciousness and sentience we are uncertain over.’ (Amanda Askell; a toy numerical sketch of this idea follows this list)
- ‘What “asks” should advocates actually make of the institutions that they target? What attitudes do people currently hold and what concerns do they have about the moral consideration of artificial sentience? What opportunities are there for making progress on this issue?’ (Prioritization Questions for Artificial Sentience)
- ‘Are the plausible “asks” that advocates could make meaningfully different from adjacent work that is already being done, e.g. animal advocacy, consciousness research?’ (Prioritization Questions for Artificial Sentience)
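One simple way to operationalise the evaluation idea above (from Amanda Askell’s suggestion) is Bayesian model averaging: assign a credence to each rival theory of consciousness, let each theory say how likely a system is to be sentient given which evaluations it passes, and take the weighted sum. The sketch below is a toy illustration only; the theory names and all numbers are invented placeholders.

```python
# Toy Bayesian-model-averaging sketch of 'evaluations that reflect
# our uncertainty across theories'. All names and numbers are
# invented placeholders, not real credences.

# Credence in each rival theory of consciousness/sentience:
theory_credence = {
    "global_workspace": 0.4,
    "higher_order": 0.3,
    "biological_substrate": 0.3,
}

# P(system is sentient | theory), given which indicator tests in
# our battery of evaluations the system passed:
p_sentient_given_theory = {
    "global_workspace": 0.8,     # shows workspace-like broadcast
    "higher_order": 0.5,         # ambiguous higher-order representations
    "biological_substrate": 0.0, # theory denies sentience to silicon
}

p_sentient = sum(
    theory_credence[t] * p_sentient_given_theory[t]
    for t in theory_credence
)
print(f"Theory-weighted credence that the system is sentient: {p_sentient:.2f}")
# -> 0.4*0.8 + 0.3*0.5 + 0.3*0.0 = 0.47
```

On this framing, designing better evaluations means finding indicators whose outcomes discriminate sharply between theories, so that the weighted answer is less hostage to our credences in the theories themselves.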
Further resources
- The Terminology of Artificial Sentience – The Sentience Institute
- Artificial sentience – 80,000 Hours
- Artificial consciousness – Wikipedia
- The Importance of Artificial Sentience – The Sentience Institute
- Is Artificial Consciousness Possible? A Summary of Selected Books – The Sentience Institute
- My mostly boring views about AI consciousness – Amanda Askell
- Digital people FAQ – Holden Karnofsky
Find supervision, mentorship and collaboration
Apply for our coaching and we can connect you with researchers already working in this space, who can help you refine your research ideas. You can also apply to join our community if you’re interested in meeting other students working on this research direction.
Apply for our database of potential supervisors if you’re looking for formal supervision and take a look at our advice on finding a great supervisor.
Our funding database can help you find potential sources of funding if you’re a PhD student interested in this research direction.
- Sign up for our newsletter to hear about opportunities such as funding, internships and research roles.
- Experience Machines is a blog by Robert Long which includes posts on artificial sentience.
This area could also be addressed through the lens of moral circle expansion, which explores whether the circle of beings humanity considers morally relevant is expanding over time and, if so, what factors might speed this process up.
Our other profiles related to artificial intelligence are AI safety and AI governance.
Contributors
This profile was published 3/12/2022 and last updated 25/05/2023. Thanks to Professor Jonathan Simon, Janet Pauketat and Leonard Dung for their helpful feedback. All errors remain our own. Learn more about how we create our profiles.
Where next?
Keep exploring our other services and content
Apply for coaching
Want to work on one of our recommended research directions? Apply for coaching to receive personalised guidance.
Our recommended research directions
Explore areas where we think further research could have a particularly positive impact on the world.
S-risks
Learn how research could help decrease the risk of astronomical suffering in the future.
Explore all our services
Learn about all the services we offer to help you have more impact with your research career.