Invitation to Attend: Stereotypes and Discrimination in Artificial Intelligence Community Call

Hi everyone :smiley:

As language models are used in more and more applications, they have greater potential to impact all aspects of our lives, from health to relationships to employment to education. It has become urgent to audit them and assess the risks of harm they might carry.

Data Futures Lab hosts monthly community calls to discuss and tackle these challenges. This month’s call will feature Fundación Vía Libre, who will present EDIA (Stereotypes and Discrimination in Artificial Intelligence).

EDIA is a graphical tool that supports this kind of auditing by enabling users to probe for biases using lists of words and sentences. For example, one can assess whether a given language model is more likely to produce sentences associating violence with poverty or sentences associating violence with wealth. These assessments can then be systematized to facilitate decision-making.

The call will take place on Monday, July 17 at 3pm UTC. Come learn more and engage with the team. Sign up using this form, and feel free to share it more widely. In the meantime, you can read more about EDIA and demo it here.

Meet you there :dancer:
