Workshop 2: Rebellion on Biased AI

Final Major Project / 06

August 20, 2021

Luchen Peng, Tiana Robison, Jinsong (Sylvester) Liu





The Goals and Participants


After the initial brainstorming, we were excited about our design concepts and wanted to see what people from different backgrounds would develop. Therefore, we planned to run some co-design workshops with people outside the course. Several goals gradually emerged during our discussions:
  • Investigate and compare the understanding and attitudes of the “general public” and “data science practitioners”.
  • Discover how people might live with data bias in the future.
  • Co-design and get inspiration.

To start, we contacted people from data science or computing backgrounds, along with some friends interested in the topic. Referring to some participants as the “general public” remains an oversimplification, but we will iterate along the way.

Participants




Workshop V2 process plan. Image by Sylvester.

Process


It is rather difficult to balance a thorough explanation of the subject with keeping it short. On the one hand, the introduction and examples of critical design would profoundly affect the participants’ co-design direction. On the other hand, it seemed impractical to cover all of our research on data bias and critical design. Our group spent a few days crafting the presentation.

  • Luchen introduced data bias
  • Tiana presented examples from three sectors (finance, health, the workplace)
  • I covered critical design theory and case studies.


An Overall Impression


The workshops, in general, were beneficial for our project. Co-design is an essential part of our process, not only because it directly generates ideas, but also because it facilitates discussions, debates and insights. By comparing the proposals of the experts and the “general public”, we noticed that participants from a computational background could grasp the subject much faster and more comprehensively.

Workshops

Expert interviews with Hassan, Siqi and Mr Choi helped us gain more knowledge about the fundamental machine learning framework and where data bias could occur. We were delighted to learn that technology ethics is already a required subject in computer science degrees and in the workplace. Meanwhile, it remains a specialised field that we can only observe from a distance.
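One point from these interviews can be illustrated with a toy sketch: if the historical data a model learns from is skewed, the model reproduces that skew. The dataset, group names and "majority-vote" model below are all hypothetical stand-ins for a real classifier, not anything our experts built.

```python
from collections import Counter

# Hypothetical historical hiring records as (group, hired) pairs.
# Group "A" was historically favoured over group "B".
training_data = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),
    ("B", 0), ("B", 0), ("B", 0), ("B", 1),
]

def train_majority_model(data):
    """Learn the most common outcome per group -- a stand-in for a
    real classifier that picks up the same correlation."""
    outcomes = {}
    for group, hired in data:
        outcomes.setdefault(group, Counter())[hired] += 1
    return {g: c.most_common(1)[0][0] for g, c in outcomes.items()}

model = train_majority_model(training_data)
print(model)  # the "model" simply reproduces the historical skew
```

Nothing in the pipeline is malicious; the bias enters with the data, which is exactly why it is so hard to spot from the outside.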

Mr Choi explaining machine learning

Other participants, on the other hand, found the theme obscure. Yet it subtly relates to information cocoons, digital privacy, inequity and polarisation, which they experience on a daily basis. Applying critical design to bridge this noticeable gap is challenging for both us and the participants. Even though the workshop is a promising source of ideas, it requires more effort to iterate on the explanation, discussion and building.

Where does bias come from? Chart by Luchen.

In the next post, I will dig into the details of our artefact analysis of some of the fantastic creations.


