Introduction to bias in AI, part 1

Afaghasanly
3 min read · Jun 23, 2021
Photo by Sigmund on Unsplash

“You are what you eat.” I suppose most people have heard this saying at least once in their lifetime. Even though it was coined for humans, it applies even better to our AI systems and algorithms [6]. “Artificial intelligence is only as smart as the information it is fed.” [4]

In short, if we feed inconsistent or biased data to an algorithm, it will give us biased and inconsistent predictions in return. The “garbage in, garbage out” principle holds for almost all AI models. I hope we all remember how, just a few years ago, in 2018, Reuters exposed Amazon’s biased hiring algorithm, which discriminated against women. Reportedly, the algorithm penalized candidates whose resumes mentioned the word “women’s.” Why did it happen? If we trace the root of the problem, we see that the data fed to the model was Amazon’s historical hiring data from past years, in which male candidates and employees dominated, especially in fields like statistics and software engineering. The model learned that Amazon preferred male employees, so it penalized women candidates in the hiring process.
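To make the mechanism concrete, here is a minimal sketch in Python. The resumes, labels, and hiring scenario are entirely synthetic and invented for illustration; this is not Amazon’s model or data, only a demonstration of how a classifier trained on historically imbalanced labels picks up a gendered signal:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy "historical" resumes: past hires were overwhelmingly male, so
# gendered words like "women's" appear almost only in rejected examples.
# All of this data is made up for illustration.
resumes = [
    "software engineer, men's chess club captain",   # hired
    "statistician, men's rowing team",               # hired
    "software engineer, hackathon winner",           # hired
    "statistician, women's chess club captain",      # rejected
    "software engineer, women's coding society",     # rejected
]
hired = [1, 1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Tokens with the most negative weights: the model has learned to
# penalize "women" purely from the imbalance in the historical labels.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
for token, weight in sorted(weights.items(), key=lambda kv: kv[1])[:3]:
    print(f"{token}: {weight:.3f}")
```

Printing the most negative weights shows the token “women” pushed to the bottom, not because of anything about the candidates, but because of the label imbalance in the training data, which mirrors what Reuters described.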

Of course, that is not the only troubling case caused by biased data. In the early 2010s, a risk-assessment model used in US courts to predict which defendants would become future criminals was considered highly trustworthy, and its output was used in court decisions. However, after investigation it became clear that its predictions were correct only about 20% of the time: only 20% of the people labeled high-risk actually went on to commit new crimes within the next two years.
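For a sense of what that figure means, here is a back-of-the-envelope check. The counts below are hypothetical, chosen only so the 20% figure from the text holds; the actual ProPublica investigation [12] worked with real court records, not these numbers:

```python
# Hypothetical counts, for illustration only.
flagged_high_risk = 1000   # defendants the model labeled high-risk
reoffended = 200           # of those, how many committed new crimes

# Precision of the high-risk label: correct predictions / all predictions.
precision = reoffended / flagged_high_risk
print(f"High-risk predictions that proved correct: {precision:.0%}")  # 20%
```

In other words, for every person the model flagged correctly, four others carried a high-risk label that never materialized.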

However, that was not the only problem with the algorithm; the main issue was that the model was racially biased and assigned high-risk scores disproportionately to Black people [12]. As data science techniques spread day by day into domains that affect people’s lives, the fairness of these techniques is becoming a more serious concern. What if our algorithm is racist, sexist, or discriminates against some other minority at the end of the day?

It reminds me of a notable remark made by Slate [14]: “All of this is a remarkably clear-cut illustration of why many tech experts are worried that, rather than remove human biases from important decisions, artificial intelligence will simply automate them.”

If you are also interested in data bias and its solutions, I discuss biased data and models, their causes, and what we can do to overcome them, with examples, in the next parts of this series.

I would love to hear your opinions about my article.

[1] Masoud Mansoury, Himan Abdollahpouri, Mykola Pechenizkiy, Bamshad Mobasher, Robin Burke. In Feedback Loop and Bias Amplification in Recommender Systems, 2020.

[2] Alex Beutel, Jilin Chen, Zhe Zhao, Ed H. Chi. In Data Decisions and Theoretical Implications when Adversarially Learning Fair Representations, 2017.

[3] Kate Crawford and Trevor Paglen. In The Politics of Images in Machine Learning Training Sets, 2019.

[4] Jeffrey Dastin. In Amazon scraps secret AI recruiting tool that showed bias against women, 2018.

[5] Rich Caruana, Paul Koch, Yin Lou, Marc Sturm, Johannes Gehrke, Noemie Elhadad. In Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission, 2015.

[6] Laurence Hart. What data will you feed your artificial intelligence? February 2018.

[7] Adrian Benton, Margaret Mitchell, Dirk Hovy. In Multi-Task Learning for Mental Health using Social Media Text, 2017.

[8] H. Tankovska. In Twitter: number of monetizable daily active U.S. users 2017–2020, 2021.

[9] Prabhakar Krishnamurthy. In Understanding Data Bias: Types and sources of data bias, 2019.

[10] Brian Hu Zhang, Blake Lemoine, Margaret Mitchell. In Mitigating Unwanted Biases with Adversarial Learning, 2018.

[11] Margaret Mitchell. In Bias in the Vision and Language of Artificial Intelligence, 2021.

[12] Julia Angwin, Jeff Larson, Surya Mattu, Lauren Kirchner, ProPublica. In Machine Bias, 2016.

[13] Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, Lucy Vasserman. In Measuring and Mitigating Unintended Bias in Text Classification, 2017.

[14] Jordan Weissmann. In Amazon Created a Hiring Tool Using A.I. It Immediately Started Discriminating Against Women, 2018.
