Bias in AI Models

Ishrak
2 min read · Feb 24, 2021

In the book Weapons of Math Destruction, the author characterizes AI models by the bias inherent in them and describes several examples of such bias. According to the author, AI models are products of opinion decorated by the intricacies of mathematics. In fact, the success of these models depends on how their designers define success. Therefore, the output of these models reflects nothing but the opinions of the designers in the guise of mathematical abstractions. These models are heavily influenced by the morals, preferences, and priorities of the designers themselves, and the bias arising from that influence is felt by the people who become its victims.

In the book, O’Neil gives us a contrasting picture of a good model and a bad model. Sometimes, for lack of suitable data, we are compelled to use proxy data that only partially explains a situation and therefore gives an incomplete picture of the environment. When coupled with the designers’ bias, the harmful effects of these AI systems are amplified by such proxy data. For instance, in the Washington, D.C. school district, teachers were fired based on the poor performance of their students, and this decision was governed by an evaluation model called IMPACT. As a result, even if a teacher was rated highly by the school’s principal and by the students themselves, a poor IMPACT score meant dismissal. One such teacher, praised by her principal, lost her job because of this biased model and moved to a more affluent school district in Virginia, where human evaluation was valued over algorithmic evaluation. The model was clearly biased in that it did not factor in students’ specific socio-economic circumstances, precisely because such socio-economic data are very difficult to model. So, under the guise of so-called complex and unaccountable models, these teachers were unjustly fired.

On the other hand, O’Neil contrasts this with the statistical models used in baseball, where the data is abundant and transparent. As a result, even a lay audience can understand why a baseball model produces a specific prediction or forecast. This is the difference between using transparent, continuous streams of data and relying on murky proxy data. At the same time, a designer’s personal confirmation bias can embed a chain of incorrect dogma within the AI model itself.
