My name is Alexa and I’m not your personal assistant

This session is facilitated by Alexa Steinbrück


About this session

The session will start with a reflection on our relationship with voice assistants: it’s often surprising and fascinating how different people interact with these systems in their lives.

Then we will go on to demystify the technical inner workings of a voice assistant and pinpoint where the AI in them actually is (a rough pipeline sketch follows below).
Then we will look at how the makers of these tools construct a pseudo-humanity for them, focusing on their gendering as female and how that is problematic on many levels.
Finally, we will re-imagine and re-design these tools: in a hands-on exercise, small groups of participants will imagine alternative personas for voice assistants. Can we imagine a non-female persona as an assistant? Can we approach AI in a non-anthropomorphic way?
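
To make “inner workings” concrete: a typical voice assistant is a pipeline of wake-word detection, speech recognition (ASR), natural-language understanding (NLU), dialogue/skill handling and speech synthesis (TTS). Here is a minimal, purely illustrative Python sketch of that pipeline; every function is a stub with made-up names, since real products replace each stage with a trained model or a rules engine.

```python
# Illustrative voice-assistant pipeline. All bodies are placeholders:
# real systems plug trained models into each stage.

def detect_wake_word(audio_frame: bytes) -> bool:
    """Stage 1: a small always-on keyword-spotting model ("Alexa", "Hey Siri")."""
    return audio_frame == b"wake"  # placeholder for a real classifier

def speech_to_text(audio: bytes) -> str:
    """Stage 2: automatic speech recognition (ASR) turns audio into text."""
    return "turn on the lights"  # placeholder transcription

def parse_intent(utterance: str) -> dict:
    """Stage 3: NLU maps text to an intent plus slots. Together with the ASR
    model, this is where most of the machine learning actually sits."""
    if "lights" in utterance:
        return {"intent": "lights_on", "slots": {}}
    return {"intent": "unknown", "slots": {}}

def handle_intent(intent: dict) -> str:
    """Stage 4: dialogue management / skill execution, often plain rules."""
    if intent["intent"] == "lights_on":
        return "Lights are now on."
    return "Sorry, that request was not understood."

def text_to_speech(text: str) -> bytes:
    """Stage 5: speech synthesis; the voice (and its gender) is a design choice."""
    return text.encode("utf-8")  # placeholder for synthesised audio

if __name__ == "__main__":
    if detect_wake_word(b"wake"):
        reply = handle_intent(parse_intent(speech_to_text(b"...")))
        print(text_to_speech(reply))
```

Note that nothing in this pipeline requires a human-like persona: the persona is layered on top, in the scripted responses and the synthesised voice.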

Goals of this session

This session is a call to action towards questioning the ongoing humanisation of AI assistants, especially the gendering and “feminisation” of these systems and how that perpetuates (harmful) gender stereotypes.

This session should encourage people of diverse backgrounds and genders to question the status quo of voice assistants on the market and brainstorm how these systems can be designed differently, so that they do not rely on gender stereotypes.

I would also like to raise awareness about the limitations of these AI systems. The humanisation of AI is a problem in general because it suggests that AGI (Artificial General Intelligence) is already here, but it is not. News coverage tends to use phrases like “An AI did this”, as if these systems had intentionality, consciousness and human-like flexible intelligence. Dissecting the anatomy of these systems is an important step in demystifying AI.


An AI machine as an extension of yourself, your brain, your lifestyle. A handy assistant.

As McLuhan wrote in Understanding Media, “the medium is the message”; that phrase and “the global village” are now part of the lexicon, and they challenge our sensibilities and our assumptions about how and what we communicate.

Our idea for a personality is an entity, one that doesn’t have a single personality but multiple personalities. It never uses the word “I”, its aim is to achieve a task, and it attempts to use language in non-human ways, maybe even communicating through sound.

It has personalised modes: it is aware of private and public contexts and responds accordingly.

For example, confidential information should only be disclosed in a private context where you’re alone (see the sketch below).
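
Here is a rough, hypothetical sketch of how those two ideas (context-aware disclosure, and never speaking in the first person) could be modelled; the Context and Message types and the presence-detection assumption are invented for illustration, not taken from any real product.

```python
# Hypothetical sketch: context-aware disclosure without a first-person persona.

from dataclasses import dataclass
from enum import Enum, auto

class Context(Enum):
    PRIVATE = auto()  # user is alone, e.g. confirmed by presence detection
    PUBLIC = auto()   # other people may be in the room

@dataclass
class Message:
    text: str
    confidential: bool  # e.g. calendar details, health reminders

def respond(message: Message, context: Context) -> str:
    # Confidential content is only spoken aloud in a private context;
    # in public it is deferred, not disclosed.
    if message.confidential and context is Context.PUBLIC:
        return "A private notification is waiting."
    # Responses are phrased impersonally: the entity never says "I".
    return message.text

appointment = Message("Appointment at 15:00 with Dr. Meyer.", confidential=True)
print(respond(appointment, Context.PUBLIC))   # -> "A private notification is waiting."
print(respond(appointment, Context.PRIVATE))  # -> "Appointment at 15:00 with Dr. Meyer."
```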



Thanks to all participants for making this an amazing session!

Here’s a blog post about the workshop: https://medium.com/@alexasteinbrueck/my-name-is-alexa-and-im-not-your-personal-assistant-workshop-at-mozfest-london-27-10-19-9364fbc54a22#943b

And you can find me on Twitter as @alexabruck

More session notes:

Comments from the groups about Amazon Alexa and similar voice assistants:
• Frustrated
• Privacy considerations, e.g. in the work place
• Dehumanising the users
• Intriguing

Activity: Alternative personas for voice assistants
Group 1 reporting back:
• Not living
• The mountain is the personality: very calm, wise, few words; it echoes your own noise back at you if you try to test it for humanity
• Suitable for anyone who seeks peace
• There is a voice to the mountain, but the voice has no personality

Group 2:
• The personality is a bit creepy
• Morphy: it can take on any character that you like, e.g. a favourite Disney character or a favourite actor
• Built-in funding model: companies can pay to have their characters featured on the list
• It could be a human character, or an animal, or piece of fiction
• Can switch easily between characters, e.g. depending on the time of day
• Could be a different character depending on who is asking the question
• The response to inappropriate questions would depend on the character, e.g. Patsy from AbFab

Group 3:
• Entity that has multiple personalities
• Different characters depending on who is asking the questions
• Its aim is to achieve a task; it is not your friend
• Never uses the word “I”
• Uses language in non-human ways

Group 4:
• Human-like in personal situations
• In a work context it would be more robot-like
• Confidential info: it should only react when you are alone in the room, so it is helpful to have different approaches for different situations
• The personality would be an extension of yourself, and you could adapt it accordingly

Group 5:
• Water, fire, air and earth
• A different element according to the task that is being asked, e.g. turning the lights on
• Emotional support would be good, e.g. for older people, but privacy is very important there
• Chit-chat responses to be developed within the local area, so they are more appropriate for different cultures