Overview
Artificial Intelligence: Ethics & Societal Challenges is a four-week course that explores ethical and societal aspects of the increasing use of artificial intelligence (AI) technologies. The aim of the course is to raise awareness of the ethical and societal aspects of AI and to stimulate reflection on, and discussion of, the implications of the use of AI in society.
The course consists of four modules, each representing about one week of part-time study. A module includes a number of lectures and readings. Each lesson finishes with a mandatory assignment in which you write a short summary of the most important new knowledge or insight you gained from that lesson, and review a summary written by another participant. The assessments are intended to encourage learning and to stimulate reflection on the ethical and societal issues raised by the use of AI. Participating in forum discussions is voluntary but strongly encouraged.
In the first module, we will discuss algorithmic bias and surveillance. Is it really true that algorithms are purely logical and free from human biases, or are they just as biased as we are? If so, why is that, and what can we do about it? AI in many ways makes surveillance more effective, but what does it mean for us to be watched in increasingly sophisticated ways?
Next, we will talk about the impact of AI on democracy. We discuss why democracy is important, how AI could hamper public democratic discussion, but also how it can help improve democracy. We will, for instance, talk about how social media could play into the hands of authoritarian regimes and present some ideas on how AI tools might be used to strengthen the functioning of democracy.
A further ethical question concerns whether our treatment of AI could matter for the AIs themselves. Can artefacts be conscious? What do we even mean by “conscious”? What is the relationship between consciousness and intelligence? This is the topic of the third week of the course.
In the last module, we will talk about responsibility and control. If an autonomous car hits an autonomous robot, who is responsible? And who is responsible for making sure that AI is developed in a safe and democratic way?
The last question of the course, and maybe also the ultimate question for our species, is how to control machines that are more intelligent than we are. Our intelligence has given us a lot of power over the world we live in. Shall we really give that power away to machines, and if we do, how do we stay in charge?
At the end of the course, you will have:
· a basic understanding of algorithmic bias and the role of AI in surveillance,
· a basic understanding of the importance of democracy in relation to AI, and acquaintance with common challenges that AI poses to democracy,
· an understanding of the complexity of the concepts ‘intelligence’ and ‘consciousness’ and acquaintance with common approaches to creating artificial consciousness,
· a basic understanding of the concepts of ‘forward-looking’ and ‘backward-looking’ responsibility and acquaintance with problems connected to applying these concepts to AI,
· a basic understanding of the control problem in AI and acquaintance with commonly discussed solutions to this problem,
· and an ability to discuss and reflect upon the ethical and societal aspects of these issues.
Syllabus
- Algorithmic Bias and Surveillance
- Democracy
- Artificial Consciousness
- Responsibility and Control
Taught by
Maria Hedlund, Lena Lindström and Erik Persson