BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//pretalx//talks.mrmcd.net//2023//talk//ATTUVF
BEGIN:VTIMEZONE
TZID:CET
BEGIN:STANDARD
DTSTART:20001029T040000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:CET
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20000326T030000
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
TZNAME:CEST
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:pretalx-2023-ATTUVF@talks.mrmcd.net
DTSTART;TZID=CET:20230902T140000
DTEND;TZID=CET:20230902T143000
DESCRIPTION:AI models have recently achieved astonishing results and are c
 onsequently employed in a fast-growing number of applications. However
 \, since they are highly data-driven\, relying on billion-sized datasets r
 andomly scraped from the internet\, they also exhibit degenerate and biase
 d human behavior: this behavior is human-like because the model merely ref
 lects its training data. Filtering the training data leads to a performanc
 e decrease. Therefore\, models are fine-tuned\, e.g. via RLHF\, to align t
 hem with human values. However\, the question of which values should be re
 flected\, and how a model should behave in different contexts\, remains un
 resolved. In this talk\, we will look at controllable generative AI system
 s and present ways to align these models without fine-tuning them. Specifi
 cally\, we present strategies to attenuate biases after deploying generati
 ve text-to-image models.
DTSTAMP:20260419T180004Z
LOCATION:C205 - Brainwasher
SUMMARY:What if we could just ask AI to be less biased? - Patrick Schramows
 ki
URL:https://talks.mrmcd.net/2023/talk/ATTUVF/
END:VEVENT
END:VCALENDAR
