ETHICS OF AI - Opacity of AI Systems

Updated: Jan 21, 2022

Opacity of Artificial Intelligence (AI) systems, often discussed as the problem of AI transparency, is the idea that what goes on inside AI algorithms, and how they arrive at their conclusions, is unknown (opaque) not only to those using the AI, but often to the creators themselves.

This lack of transparency exists for multiple reasons: human cognition simply cannot comprehend massive, complex algorithmic models and datasets; self-learning algorithms have decision logic that constantly evolves without human input; appropriate visualization tools for mammoth-sized code and datasets are lacking; and poorly structured code and data can be effectively impossible to read.

This matters because opacity compounds bias in decision-making AI. If bias cannot be understood because the AI system is opaque, an opacity even experts cannot penetrate, then bias simply cannot be controlled.

If bias cannot be controlled, then do we want to trust AI to make decisions?

This quote from Vincent C. Müller's entry in the Stanford Encyclopedia of Philosophy sums it up clearly:

“The politician Henry Kissinger pointed out that there is a fundamental problem for democratic decision-making if we rely on a system that is supposedly superior to humans, but cannot explain its decisions...
...In a similar vein, Cave (2019) stresses that we need a broader societal move towards more “democratic” decision-making to avoid AI being a force that leads to a Kafka-style impenetrable suppression system in public administration and elsewhere.”

This article would not be complete without addressing the case against the call for transparent AI. Some researchers are concerned that forcing transparency could diminish innovation and divert resources away from advances in safety, performance, and accuracy. Others argue that transparency would allow experts, or digitally literate users, to rig systems, destroying the 'neutrality' of non-human decision making.

Do you have an opinion about transparency in AI? We'd love to hear it. Leave us a comment below, send us an email, or use our contact form.
