09-07, 18:30–19:20 (Europe/Berlin), Lichtspielhaus
Fairness and accountability are important topics for decision makers, but formulating them in a rigorous mathematical framework is hard. We'll be looking at implicit and explicit biases, how they arise in data and how they are learned by modern systems, different notions of the word "fair", and the ways biases can leak through obfuscation.
Everybody is talking about machine learning and artificial intelligence, and exciting developments are happening in this field. But the dark side of large-scale data mining and overly ambitious AI development is already visible in many high-profile cases. Implicit and explicit biases inherent in datasets can lead decision algorithms to replicate the problems and discrimination of human society.
In this talk we are going to ask why and how machine learning algorithms become prejudiced, and we will see how hard it is to create algorithms that are fair and unbiased. At the core, we are going to take a look at what "fair" actually means and why objective fairness is such a difficult goal.
Master's student in Computer Science with a focus on Machine Learning and Natural Language Processing, currently interested in Gender Equality/Fairness/Equity and Implicit Bias
I'm a master's student at TU Darmstadt studying computer science with a focus on machine learning, data science and AI. I'm interested in how "smart" systems will interact with our society and shape the way we think, feel and live in this world.