Lara Lawniczak

[email protected]

Lara studied Communication and Sociology at the Westfälische Wilhelms-Universität in Münster. She is now pursuing a Master of Science in Computing in the Humanities at the Otto-Friedrich-Universität in Bamberg, a programme designed for students from the social sciences and humanities who want to dive into informatics and acquire essential skills in the domain.

Normative universals for intelligent machines: Are there universal values by which researchers can align Artificial Intelligence?

Author's note:

Summary of Thesis

Morality in the context of Artificial Intelligence is a complex topic that is far from fully explored and understood. As several disciplines are working to better understand human morality and values, Computer Science and other fields working on Artificial Intelligence can profit hugely from cooperating with them.

One discipline that has studied morality extensively is Sociology. The discourse ethics of Jürgen Habermas is one of many theories that offer guidance on what constitutes a moral action and, especially, on how humans come to agree on it.

Habermas's discourse ethics is a consensus theory of normative correctness and therefore relevant for discovering universal values. By implementing the procedure of the discourse and the principle of consensus proposed by Habermas in artificial agents, one might overcome the practical constraints of the theory.

There are some existing approaches to ethical decision making in Artificial Intelligence that rely on some of the same principles as Habermas's discourse ethics, such as the "voting-based system for ethical decision making" by Noothigattu et al. from the Massachusetts Institute of Technology.

If we combined the strategy proposed by Noothigattu et al. with concepts from distributed Artificial Intelligence and multi-agent systems, we might be able to implement Habermas's discourse ethics in Artificial Intelligence. This could be useful both for discovering universal values without the constraints that discourse ethics brings when carried out by humans, and for equipping intelligent agents and systems with a tool for making value decisions in real time.

The suggestion here is far from perfect and probably comes with several problems yet to be solved. However, it does offer an idea of how theories from the social sciences can help to advance discussions of ethics in Artificial Intelligence.

Why is this important?

Considering moral questions while trying to build Artificial Intelligence is essential to prevent risks which arise from creating an unaligned intelligent agent or losing control over it.

The thesis offers a starting point by suggesting a model to find universal values and to enable ethical decision making in intelligent agents which is based on discourse ethics. It has yet to be tested and implemented.

It hints at other relevant social theories which could be analysed for possible contributions to value questions in Artificial Intelligence.

It emphasizes the importance of social mechanisms for what we consider morally right and wrong and underlines the difficulty of morally evaluating actions outside of a social context.

Most importantly, the thesis shows that a closer collaboration between the social sciences (and other fields such as psychology and cognitive science!) and computer science can be extremely useful in the field of ethics in Artificial Intelligence.

Strengths and weaknesses

Writing a thesis is in many ways like going on a journey without knowing the destination. That was especially the case for me, as the question I tried to answer has not yet been asked by many people, especially not by people from the social sciences. The process of collecting information and learning about ethical frameworks and existing value alignment approaches by AI experts was fascinating and strengthened my feeling of having the right topic: one that truly interests me and has a big impact on society as well. However, incorporating this amount of information into a coherent paper turned out to be one of the biggest challenges for me. I eventually had to set a focus, which I achieved by concentrating on the moral theory of a single sociologist and trying to connect it to existing AI alignment approaches. As there was no perfect fit, I suggested a model to implement my chosen moral theory, or something resembling it, in AI. However, due to my then still limited knowledge of the domain, I wasn't able to evaluate or test my approach and have probably missed a few relevant aspects that someone with a more technical background might stumble upon. These are weaknesses of my work.

On the other hand, I believe that I managed to lay out the relevant problems relating to the questions of moral values and value alignment in AI quite well. As I don't yet have a full technical understanding, I described relevant concepts and challenges in a way that people from a variety of backgrounds will be able to understand. Moreover, I think that my efforts to connect insights from sociology to questions in AI can serve as an example for future projects following a similar pattern. I know that there is a lot to be improved about my thesis (and hopefully I will get to that in the future), but I do believe that it is a great starting point for other students of sociology (or any other discipline from the social sciences) who find themselves interested in AI and value alignment. Personally, I see the biggest impact of my work in its being a first step toward closer collaboration between the social and technical sciences in the AI sector. I do hope that others will also see the interrelation of the domains, be it in value alignment or technology governance, and join me in working towards a more interconnected scientific community in which computer engineers and sociologists can learn and benefit from each other's expertise.

In which ways I have changed my mind since I finished writing my thesis

My original plan for the thesis was to contrast different social theories based on their contribution to the question of whether universal values exist. However, I soon discovered that, firstly, this was not a task that could be achieved in a single thesis, and that, secondly, the topic was much more complex than I had thought. Alongside sociological theories on morality, I stumbled upon a variety of papers from evolutionary biology, psychology, and cognitive science, which offer even more angles from which to consider such a question. While I was always convinced that collaboration between the social sciences and computer science is beneficial, I have come to realize that the collaboration needs to include these other disciplines as well in order to grasp the full picture, and that there is a lot of work to be done!

Moreover, I have learned that finding a set of universal values is not the only problem to be solved, nor even the most important one. As a non-tech person and a newcomer to the technological challenges, it was interesting and eye-opening to learn about the technical difficulties of creating agents that make moral decisions in a variety of situations. It has inspired me to pursue an informatics degree to deepen my understanding.

Recommendation based on my experience

Finally, a quick recommendation for anyone interested in similar topics: I know the number of possible questions and approaches is large. I know it can be tempting to try to take everything into account and do justice to the vastness of the field. However, as someone whose main struggle was to keep a clear focus after having formulated a very vague research question, I can only recommend that you formulate a very precise research question for your thesis, even if it feels like you are leaving out important aspects. After all, no one will be able to solve the problem of value alignment in AI in a single thesis, and it shouldn't be your goal to do so. Keep it focused and clear, and others (and you yourself) will be able to build on your work in the future!

Last but not least, I want to say thank you to the Effective Thesis team, especially David, for supporting me during my thesis and being a source of inspiration and knowledge. You are doing a great job!

If you want to read my thesis, just let me know ([email protected]) and I will send you a copy. Unfortunately, it is only available in German, so keep that in mind. However, I am also happy to chat about my findings in case you don’t speak German - just reach out and we can connect. 
