Proposal: Materialising Data Bias

Final Major Project / 01

July 10, 2021

Luchen Peng, Tiana Robison, Jinsong (Sylvester) Liu






Why this project?


My Final Major Project’s topic originated from my study proposal and one of my side projects, Bubble Trouble. Having explored various discursive subjects in the MAUX course, I would like to return to, and further emphasise, my core focus: human-technology relations.

One of our primary ways of accessing information is through digital platforms such as search engines, social networks and news aggregators, which cover daily topics from politics to fashion to science. These online information providers ubiquitously deploy complex, sophisticated algorithms to tailor profile-based content, which cumulatively shapes receivers’ opinions and decisions (Pitoura et al., 2018). In my previous project, I learned that the recommendation algorithms increasingly used in countless products and services worldwide exhibit forms of bias, causing what Eli Pariser called the “filter bubble”.
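To make that feedback loop concrete, here is a minimal, purely illustrative Python sketch — not any real platform’s algorithm, and every name and number in it is an assumption — showing how ranking content by a user profile, and letting each click strengthen that profile, gradually narrows what the user sees:

```python
# Illustrative sketch of the "filter bubble" feedback loop (hypothetical values,
# not any platform's actual recommendation system).
import random

TOPICS = ["politics", "fashion", "science", "sport", "travel"]


def recommend(profile, k=3):
    """Return the k topics the profile currently favours most."""
    return sorted(profile, key=profile.get, reverse=True)[:k]


def simulate(days=20, boost=0.5, seed=1):
    rng = random.Random(seed)
    # Hypothetical user: equal interest in every topic at the start.
    profile = {topic: 1.0 for topic in TOPICS}
    for day in range(days):
        feed = recommend(profile)
        # The user is more likely to open items that already match their profile,
        # and each click strengthens that interest a little more.
        clicked = rng.choices(feed, weights=[profile[t] for t in feed])[0]
        profile[clicked] += boost
        print(f"Day {day + 1:2d}: feed={feed}  clicked={clicked}")
    return profile


if __name__ == "__main__":
    # After a couple of weeks the feed converges on one or two topics:
    # the bias emerges from the loop itself, not from any single decision.
    print(simulate())
```

Running this toy loop, the feed collapses onto whichever topic happens to get clicked early on — a small-scale picture of the accumulation the project wants to make tangible.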

However, while much research and many projects (including my previous one) have tried to address its ethical and moral implications at the social and corporate scale, such an intricate phenomenon remains largely unnoticed at a personal level. And it will undoubtedly co-exist with us for a long time.


AI Fairness 360. Source: IBM.
Visualizing filter bubble. Source: MIT Technology Review.


Therefore, I proposed the following questions:

How can we make our once-invisible information bias tangible so that we can understand it better?

If we could see or touch it, what possibilities would open up for living with or mitigating the bias?



Assembling the team


I had a brief discussion with Tiana during the Micro UX term and was surprised to find that our research interests align. To my understanding, she focuses more on AI ethics in industry, such as house renting, job recruitment and social media advertising, while my initial thinking was about personal impacts. Luchen also showed a strong interest in this topic; she had done relevant projects and gained a profound knowledge of communication from her bachelor’s degree. Even though our focuses vary, I suggested the three of us collaborate while staying open and prepared to embrace divergence and different outcomes.


Project timeline by group. Designed by Sylvester.


Discussion and tutorial


In several tutorials with John, he expressed excitement and positivity about our team and project. On the other hand, he also voiced concern about the technical side of algorithms that we will likely encounter later. A clever and dexterous way to tackle this problem, as John and Li mentioned, is to remove the technological shell and focus on the human counterpart. So he advised us to change the topic from algorithmic bias to data bias.


Setting boundaries


The project definition remains broad at this stage, and we find it difficult to narrow the sectors and target audience. Our potential audiences include digital natives, practitioners of data science or related fields, young children and their parents, and digital ethics organisations, some of them overlapping.

As for possible outcomes, a physical installation seems intriguing: a device one can interact with to experience how their actual behaviour (physical or digital) changes the information presented to them and causes cognitive bias. Workshops are also essential for finding appropriate metaphors and co-designing ways to mitigate data bias. The final idea is making critical design artefacts that showcase, and in turn disrupt, information bias.

An example of a critical design artefact. Source: Bjørn Karmann.


Blog posts ©sylvesterlau, 2021