It is not the algorithm behaving badly, but how we define fairness, that determines an artificial intelligence system’s impact. Bias is often identified as one of the major risks associated with artificial intelligence (AI) systems. Recently reported cases of bias in AI, such as racism in the criminal justice system and gender discrimination in hiring, are undeniably worrisome. Public discussion of bias in such scenarios often assigns blame to the algorithm itself: the algorithm, it is said, has made the wrong decision, to the detriment of a particular group. But this claim overlooks the human component: people perceive bias through the subjective lens of fairness. That lens is the subject of this article.
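To make the point concrete, consider that "fairness" can be formalized in more than one way, and the two common formalizations below can disagree on the same system. The article itself contains no code; the following is a minimal illustrative sketch with made-up confusion-matrix counts, and the metric names (demographic parity, equalized odds) are standard fairness definitions assumed here for illustration rather than taken from the article.

```python
# Illustrative sketch: two fairness definitions applied to the same
# hypothetical decision outcomes for two groups. All counts are made up.

def rates(tp, fp, fn, tn):
    """Return (selection rate, false positive rate) for one group."""
    total = tp + fp + fn + tn
    selection_rate = (tp + fp) / total   # share of the group flagged "high risk"
    fpr = fp / (fp + tn)                 # share of true negatives wrongly flagged
    return selection_rate, fpr

# Hypothetical per-group counts: (true pos, false pos, false neg, true neg)
group_a = rates(tp=30, fp=20, fn=10, tn=40)
group_b = rates(tp=40, fp=10, fn=10, tn=40)

print(f"Group A: selection rate={group_a[0]:.2f}, false positive rate={group_a[1]:.2f}")
print(f"Group B: selection rate={group_b[0]:.2f}, false positive rate={group_b[1]:.2f}")

# Demographic parity compares selection rates: here both groups are flagged
# at the same 0.50 rate, so the system looks fair by that definition.
# Equalized odds compares error rates: Group A's false positive rate (0.33)
# is far higher than Group B's (0.20), so the same system looks biased.
```

With these numbers, whether the system is judged fair or biased depends entirely on which definition of fairness the observer applies, which is the sense in which the verdict rests on a human choice rather than on the algorithm alone.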