Bias in AI — part 4

Afaghasanly
5 min read · Jun 23, 2021


How to deal with data bias?

How can we resolve bias issues in models and data?

Welcome to the 4th and last part of the series, where I discuss solutions that can be used against biases. The first and hardest step in creating an AI model is acknowledging that your model and data may be biased, checking for it, and actually finding the bias. Most of the time, modelers use ready-made datasets and neglect to check and analyze the data they are working with, which leads to serious outcomes and biased models. However, with the growing concern about AI models in society, it is now more important than ever to keep bias under control.

How can we check and evaluate data bias? One of the easiest ways is disaggregated evaluation, in which we evaluate the model separately for each subgroup we think is important. For example, check face detection for white and black females separately, or for males and females. Compute recall, precision, and the error rate for each group and compare whether they are similar or different. If the model or data is not biased, these evaluation metrics should be similar [11].
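To make this concrete, here is a minimal Python sketch of a disaggregated report; the DataFrame `predictions_df` and its column names are hypothetical, and other per-group metrics (false positive rate, etc.) could be added the same way.

```python
import pandas as pd
from sklearn.metrics import accuracy_score, precision_score, recall_score

def disaggregated_report(df, group_col, y_true_col="y_true", y_pred_col="y_pred"):
    """Compute precision, recall, and error rate separately for each subgroup."""
    rows = []
    for group, sub in df.groupby(group_col):
        rows.append({
            group_col: group,
            "n": len(sub),
            "precision": precision_score(sub[y_true_col], sub[y_pred_col]),
            "recall": recall_score(sub[y_true_col], sub[y_pred_col]),
            "error_rate": 1 - accuracy_score(sub[y_true_col], sub[y_pred_col]),
        })
    return pd.DataFrame(rows)

# Hypothetical usage: large gaps between rows are a red flag.
# print(disaggregated_report(predictions_df, group_col="skin_tone"))
# print(disaggregated_report(predictions_df, group_col="gender"))
```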

Another way to check your model for bias is to build a challenging test set of cases you think could be harmful to individuals, and evaluate your model on it. If the model reaches the desired score, we can conclude that it is not biased on those cases. That held-out test set does not have to be big; a small one is enough, as long as we are sure it covers all the scenarios we care about.
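As a rough illustration, a check like this can be automated; the `model` interface, the example fields, and the 0.95 threshold below are my own assumptions, not taken from any of the cited papers.

```python
from sklearn.metrics import accuracy_score

def evaluate_challenge_set(model, examples, threshold=0.95):
    """Require the model to clear `threshold` on every scenario in the challenge set."""
    per_scenario = {}
    for scenario in {ex["scenario"] for ex in examples}:
        subset = [ex for ex in examples if ex["scenario"] == scenario]
        y_true = [ex["label"] for ex in subset]
        y_pred = model.predict([ex["text"] for ex in subset])
        per_scenario[scenario] = accuracy_score(y_true, y_pred)
    failing = {s: acc for s, acc in per_scenario.items() if acc < threshold}
    return per_scenario, failing

# Hypothetical usage:
# scores, failing = evaluate_challenge_set(toxicity_model, challenge_examples)
# assert not failing, f"Model fails on scenarios: {failing}"
```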

In machine learning models, people try to remove bias either through mitigation techniques (removing variables, samples, etc. that cause the undesired output) or through inclusion methods (inserting new variables, samples, etc.).
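In code, the two directions can be as simple as dropping or adding columns and rows; the toy DataFrame below is purely illustrative.

```python
import pandas as pd

df = pd.DataFrame({
    "experience": [5, 2, 7],
    "test_score": [80, 65, 90],
    "gender": ["F", "M", "F"],
    "hired": [1, 0, 1],
})

# Mitigation: remove the variable that drives the undesired output.
X_mitigated = df.drop(columns=["gender"])

# Inclusion: add freshly collected samples (or new variables) that better
# represent under-covered groups before training.
new_rows = pd.DataFrame({"experience": [4], "test_score": [78], "gender": ["F"], "hired": [1]})
df_inclusive = pd.concat([df, new_rows], ignore_index=True)
```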

Margaret Mitchell, a researcher at Microsoft, shows in one of her papers [7] how multi-task learning with increased inclusion helped remove bias.

The authors used clinical and Twitter data to identify people with depression and suicide attempts. They compared simple logistic regression, single-task learning (STL), and multi-task learning (MTL) models. In the scenario with plenty of data, MTL models proved significantly better in terms of accuracy and unbiased outputs; in the low-data case, MTL was only slightly better than logistic regression. In one of her talks, Mitchell mentions that they decided not to reveal the method and the example tweets they considered indicative of depression, because doing so could later cause discrimination against people who use the same kind of language on their social platforms.
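The core architectural idea behind multi-task learning is a shared representation with one small head per task. Here is a minimal PyTorch sketch in that spirit; it is not the authors' model, and the dimensions and task names are assumptions.

```python
import torch
import torch.nn as nn

class MultiTaskClassifier(nn.Module):
    def __init__(self, input_dim=300, hidden_dim=128, tasks=("depression", "suicide_attempt")):
        super().__init__()
        # Shared layers learn a representation common to all tasks, so
        # low-resource tasks borrow statistical strength from the others.
        self.shared = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        # One lightweight binary head per task.
        self.heads = nn.ModuleDict({t: nn.Linear(hidden_dim, 1) for t in tasks})

    def forward(self, x):
        h = self.shared(x)
        return {task: torch.sigmoid(head(h)) for task, head in self.heads.items()}

# Training sums the per-task losses, e.g. binary cross-entropy on each head.
```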

In recent papers [10][2], scientists also discuss how to mitigate bias with adversarial multi-task learning. In these models, the authors do not optimize only for the prediction they want; they also account for the attribute the output should not be affected by. For example, consider an automated CV reviewer in a hiring process. As an output, we want people with certain qualifications, scores, and experience, but we do not want gender to affect the prediction. In this case, adversarial modeling helps us remove the undesired and problematic signal (here, gender).
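One common way to implement this idea (not necessarily the exact formulation of [2] or [10]) is a gradient-reversal adversary: a second head tries to predict the protected attribute, and its reversed gradient pushes the shared representation to discard that signal. The sketch below assumes tabular CV features and a binary gender label, both hypothetical.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips (and scales) the gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class FairCVScorer(nn.Module):
    def __init__(self, input_dim=50, hidden_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.task_head = nn.Linear(hidden_dim, 1)  # predicts the hiring score
        self.adv_head = nn.Linear(hidden_dim, 1)   # tries to recover gender

    def forward(self, x, lambd=1.0):
        h = self.encoder(x)
        hire_logit = self.task_head(h)
        gender_logit = self.adv_head(GradReverse.apply(h, lambd))
        return hire_logit, gender_logit

# Minimizing (hiring loss + adversary loss) trains the adversary to detect
# gender while the reversed gradient trains the encoder to hide it.
```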

In 2018, Google scientists started working on mitigating unintended bias in text classification [13]. In their research, they used Wikipedia Talk Pages as a data source. However, during evaluation they saw that certain sentences were unreasonably assigned to the high-toxicity class. They found that sentiment was not balanced for identity terms such as "gay", "Muslim", and "lesbian". The authors added new, more positive data containing these kinds of words by mining Wikipedia articles into their dataset, and balanced sentence lengths. They showed with evaluation metrics such as AUC and error-rate equality that these techniques really helped mitigate the unintended bias.
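A simple way to reproduce this kind of check is to compute the AUC restricted to comments mentioning each identity term and compare it with the overall AUC; the DataFrame columns below are assumptions, and the full metric suite in [13] is richer than this sketch.

```python
from sklearn.metrics import roc_auc_score

def per_term_auc(df, terms, score_col="toxicity_score", label_col="is_toxic"):
    """AUC computed only on comments that mention each identity term."""
    aucs = {}
    for term in terms:
        subset = df[df["text"].str.contains(term, case=False)]
        if subset[label_col].nunique() == 2:  # AUC needs both classes present
            aucs[term] = roc_auc_score(subset[label_col], subset[score_col])
    return aucs

# Hypothetical usage: large gaps vs. the overall AUC signal unintended bias.
# print(per_term_auc(eval_df, terms=["gay", "muslim", "lesbian"]))
# print(roc_auc_score(eval_df["is_toxic"], eval_df["toxicity_score"]))
```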

To address modelers' bias problem, people now publish not only some plots from the dataset, but also information about the annotators, their background, key facts about the data, etc., which helps to understand what kinds of bias the data contains. This publication form is called Datasheets for Datasets [11].
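To give a flavor of what such a datasheet records, here is a small, illustrative subset of the questions it answers; the exact fields vary, and the ones below are only an assumption for illustration.

```python
datasheet = {
    "motivation": "Why was the dataset created, and by whom?",
    "composition": "What do the instances represent? Which subgroups are covered, and in what proportions?",
    "collection_process": "How, when, and from where was the data collected?",
    "annotators": "Who labeled the data, and what is their background?",
    "preprocessing": "What cleaning, filtering, or balancing was applied?",
    "recommended_uses": "Intended uses, known limitations, and known biases.",
}
```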

Conclusion

In this series, we discussed what kinds of biases exist in the AI world, their causes, and methods to deal with them. We have to analyze the data we are using while building a new model.

In a century marked by the fight against racism, sexism, religious discrimination, and homophobia, training AI on biased data can have very inappropriate and sometimes even dangerous consequences for society. Therefore, this problem should be addressed properly. To understand whether an AI system can harm people, we should monitor its results and outcomes over the long term. Such a monitoring system should be established by product owners or AI engineers in order to predict and prevent upcoming disasters.

Other parts of the series:

I would appreciate it if you shared your opinions about the article.

[1] Masoud Mansoury, Himan Abdollahpouri, Mykola Pechenizkiy, Bamshad Mobasher, Robin Burke. Feedback Loop and Bias Amplification in Recommender Systems, 2020.

[2] Alex Beutel, Jilin Chen, Zhe Zhao, Ed H. Chi. Data Decisions and Theoretical Implications when Adversarially Learning Fair Representations, 2017.

[3] Kate Crawford and Trevor Paglen. The Politics of Images in Machine Learning Training Sets, 2019.

[4] Jeffrey Dastin. Amazon scraps secret AI recruiting tool that showed bias against women, 2018.

[5] Rich Caruana, Paul Koch, Yin Lou, Marc Sturm, Johannes Gehrke, Noemie Elhadad. Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission, 2015.

[6] Laurence Hart. What data will you feed your artificial intelligence?, February 2018.

[7] Adrian Benton, Margaret Mitchell, Dirk Hovy. Multi-Task Learning for Mental Health using Social Media Text, 2017.

[8] H. Tankovska. Twitter: number of monetizable daily active U.S. users 2017–2020, 2021.

[9] Prabhakar Krishnamurthy. Understanding Data Bias: Types and sources of data bias, 2019.

[10] Brian Hu Zhang, Blake Lemoine, Margaret Mitchell. Mitigating Unwanted Biases with Adversarial Learning, 2018.

[11] Margaret Mitchell. Bias in the Vision and Language of Artificial Intelligence, 2021.

[12] Julia Angwin, Jeff Larson, Surya Mattu, Lauren Kirchner, ProPublica. Machine Bias, 2016.

[13] Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, Lucy Vasserman. Measuring and Mitigating Unintended Bias in Text Classification, 2017.

[14] Jordan Weissmann. Amazon Created a Hiring Tool Using A.I. It Immediately Started Discriminating Against Women, 2018.
