ETHICS OF AI - Bias in Decision Systems

Updated: Jan 21

Bias is a human condition that also manifests in Artificial Intelligence (AI) decision systems. Bias results when a "judgement is influenced by a characteristic that is actually irrelevant to the matter at hand" (Stanford Encyclopedia of Philosophy). Bias often takes the form of discrimination against specific groups of people.

Unfortunately, bias has been found in real AI decision systems. A notable example was Amazon's automated recruitment system, which the company stopped using in 2017. Amazon's hiring process had historically discriminated against women, so when more than ten years of that hiring data was used to train the AI algorithms, the bias was transmitted to the resulting system.

Another way bias can appear in machine learning is statistical bias. A dataset can only be unbiased with respect to the specific question it was collected to answer. If that dataset is then reused for other questions, perhaps because a company wants to save money on data collection or lacks the necessary expertise, it risks being biased for any question it was not intended for.
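One simple way to make this concrete is to compare outcome rates across groups before reusing a dataset. The sketch below is a minimal illustration, not any real company's data or method: the group labels, numbers, and function name are all hypothetical. It computes per-group selection rates and the gap between them, a basic demographic-parity signal that a model trained on this data could inherit.

```python
# Minimal sketch: checking a dataset for unequal outcome rates.
# All group labels and counts below are hypothetical.

def selection_rates(records):
    """Return the fraction of positive outcomes per group."""
    totals, positives = {}, {}
    for group, selected in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (1 if selected else 0)
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical historical data: (group, was_selected)
records = ([("A", True)] * 60 + [("A", False)] * 40 +
           [("B", True)] * 30 + [("B", False)] * 70)

rates = selection_rates(records)
# Demographic-parity gap: difference between the highest and
# lowest group selection rates. A large gap is a warning sign.
gap = max(rates.values()) - min(rates.values())
print(rates)  # {'A': 0.6, 'B': 0.3}
```

A gap near zero does not prove the data is fair, but a large gap, as here, shows a disparity the model is likely to learn and reproduce.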

Bias in AI is now widely recognized and well documented. Scientists are actively working on detecting and removing bias from AI decision systems, though these efforts are still in the early stages.

As the old adage goes, awareness is the first step to solving a problem. And here at ALTURIS, we are part of the solution.

If you’re interested in learning more about how we address bias in AI, feel free to contact us through our contact form.
