MRMCD 2023

What if we could just ask AI to be less biased?
02.09.2023, C205 - Gehirnwäscher
Language: German

AI models have recently achieved astonishing results and are consequently being deployed in a fast-growing number of applications. However, since they are highly data-driven, relying on billion-scale datasets scraped more or less indiscriminately from the internet, they also pick up degenerate and biased human behavior. This behavior is human-like; the model merely reflects its training data. Filtering the training data, in turn, degrades performance. Models are therefore fine-tuned, e.g., via RLHF (reinforcement learning from human feedback), to align them with human values. Yet the question of which values should be reflected, and how a model should behave in different contexts, remains unresolved. In this talk, we will look at controllable generative AI systems and present ways to align these models without fine-tuning them. Specifically, we present strategies for attenuating biases in generative text-to-image models after deployment.
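To give a flavor of what post-deployment intervention can look like (a minimal sketch, not the specific method presented in the talk; the model ID and prompts below are illustrative assumptions), one well-known inference-time lever in text-to-image pipelines is the negative prompt, which steers generation away from unwanted concepts without touching the model weights:

    import torch
    from diffusers import StableDiffusionPipeline

    # Load a publicly available text-to-image model (illustrative choice).
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Steer generation away from unwanted concepts at inference time;
    # the prompts are placeholders, and this crude lever only hints at
    # the more principled guidance-based strategies the talk covers.
    image = pipe(
        prompt="a portrait of a nurse",
        negative_prompt="stereotype, caricature",
    ).images[0]
    image.save("nurse.png")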

Patrick is a research group leader at the German Research Center for Artificial Intelligence (DFKI). His research revolves around large-scale deep models and AI ethics. Together with his colleagues at TU Darmstadt, he aims to build human-centric AI systems that mitigate the risks associated with large-scale models and solve commonsense tasks. Recently, he has published in Nature Machine Intelligence and at CVPR, FAccT, and ICML, and won an Outstanding Paper Award at NeurIPS for the LAION-5B dataset.