This session is facilitated by Alexa Steinbrück
About this session
The session will start with a reflection on our relationship with voice assistants: it’s often surprising and fascinating how differently people interact with these systems in their lives.
Then we will demystify the technical inner workings of a voice assistant and pinpoint where the AI is in them.
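To make "where is the AI" concrete, here is a toy sketch of the pipeline most voice assistants follow: speech recognition, language understanding, dialogue management, and speech synthesis. All function bodies below are invented placeholders, not any vendor's actual code; in a real system the first, second, and last stages are trained ML models, while the dialogue manager is often plain hand-written rules.

```python
# Hypothetical sketch of the classic voice-assistant pipeline.
# Stage names are standard; the toy logic is purely illustrative.

def automatic_speech_recognition(audio: str) -> str:
    # In production this is a trained ML model (the first "AI" component);
    # here we pretend the audio input is already its own transcript.
    return audio.lower()

def natural_language_understanding(text: str) -> dict:
    # Intent classification and slot filling -- the second ML-heavy stage.
    if "weather" in text:
        return {"intent": "get_weather"}
    return {"intent": "unknown"}

def dialogue_manager(nlu_result: dict) -> str:
    # Frequently rule-based templates, i.e. not "AI" at all.
    responses = {
        "get_weather": "It is sunny.",
        "unknown": "Sorry, I did not catch that.",
    }
    return responses[nlu_result["intent"]]

def text_to_speech(text: str) -> str:
    # A third ML model in production; here just a marker string.
    return f"[spoken] {text}"

def assistant(audio: str) -> str:
    # The whole "assistant" is just these stages chained together.
    return text_to_speech(
        dialogue_manager(
            natural_language_understanding(
                automatic_speech_recognition(audio))))

print(assistant("What is the weather today?"))
# → [spoken] It is sunny.
```

Seeing the system as a chain of narrow components, rather than one thinking entity, is exactly the kind of dissection the session aims for.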
Then we will look at how the makers of these tools construct a pseudo-humanity for them, focusing on their gendering as female and how that is problematic on many levels.
Finally, we will re-imagine and re-design these tools: in a hands-on exercise, small groups of participants will imagine alternate personas for voice assistants. Can we imagine a non-female persona as an assistant? Can we approach AI in a non-anthropomorphic way?
Goals of this session
This session is a call to action towards questioning the ongoing humanisation of AI assistants, especially the gendering and “feminisation” of these systems and how that perpetuates (harmful) gender stereotypes.
This session should encourage people of diverse backgrounds and genders to question the status quo of voice assistants on the market and brainstorm how these systems can be designed differently, so that they do not rely on gender stereotypes.
I would also like to raise awareness of the limitations of these AI systems. The humanisation of AI is a general problem because it suggests that AGI (Artificial General Intelligence) is already here, but it is not. News outlets tend to use phrases like “An AI did this”, as if these systems had intentionality, consciousness, and human-like flexible intelligence. Dissecting the anatomy of these systems is an important step in demystifying AI.