Workshop 3: Co-design critical objects

Final Major Project / 09

October 13, 2021

Luchen Peng, Tiana Robison, Jinsong (Sylvester) Liu






After generating some critical object ideas over the summer break, we felt that there were limitations in online workshops and individual making. Independent making leads to a lack of discussion and debate between members, so the generated ideas depend heavily on each participant's expertise, understanding and opinions. The clearest example was my friend Gangwei, a game product manager, who holds an optimistic view of AI technology. When asked to critique biased AI, he struggled to engage and find arguments. In the making part that followed, he also felt lost doing something that went against his own views. Therefore, we wanted to explore combining people from different backgrounds and see how debate flows within a group.


The Goal


We decided to iterate on the previous learning, with the most significant change being the assignment of three roles (designer, data scientist and general public) in each group: two design students, two people who had studied or worked on AI-related projects, and two from other backgrounds. Moreover, we incorporated the artefact analysis framework and translated it into a structured Q&A session after making, so that the objects would stay on topic and be examined more rigorously.


Participants

Process


The process roughly followed the previous workshop:
  1. Give a brief introduction to the project theme.
  2. Explain data bias examples in health, finance and employment.
  3. Introduce what critical design is and review an example. This time I chose to talk about Project Alias, a product that covers a smart speaker and addresses privacy.
  4. Show selected models from the previous workshops as inspiration.
  5. Assign two groups, each with a designer, a tech expert and members of the general public.
  6. Co-design critical artefacts.
  7. Each group introduces their idea.
  8. Each group answers the structured questions and has the opportunity to refine their artefact.


Observation and thoughts


The workshop met our expectations in several ways. Foremost, the introduction part became more structured and went smoothly. Most participants could differentiate data bias from recommendation algorithms and relate it to their daily experience. Pranjal discussed his previous professional background in an AI product, which gave us a peek at how the industry has started considering technology ethics. Moreover, group prototyping worked better than the previous individual setting. However, we also noticed that participants showed different levels of willingness to engage in collective thinking. This might result from an uneven distribution of responsibility in the activity, hinting that the "general public" role was positioned to learn rather than to contribute.