In many areas of decision making, managerial decisions once made solely by humans are now partially or fully driven by algorithmic outcomes. The claim is that machine learning algorithms, by learning patterns from training data, can support better decisions through highly accurate predictions.
It’s only natural that this trend has invited direct comparisons between the various aspects of human and algorithmic decisions.
There are many scholarly debates about the inherent bias and explainability of both. Very often we have higher expectations of transparency from machines than from human decision makers.
Many scholars call these double standards into question, while others defend them. A simple, practical explanation from a human decision maker is often enough for users to accept the final decision, without necessarily understanding the cognitive processes behind it.
It’s not the same for algorithms such as deep neural networks, which cannot escape the black box objection: however accurate their predictions, users cannot understand the complex computations behind the outcomes.
So while it’s acceptable to take a human decision at face value, even when intuition and hunches may have played a big role in the result, we are unwilling to accept an opaque machine decision without understanding the process in its entirety.
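To make the contrast concrete, here is a minimal sketch (using scikit-learn; the synthetic dataset and model choices are my own illustration, not from any particular study). A linear model exposes coefficients a user can read as an explanation, while a small neural network trained on the same data offers only weight matrices with no direct human-readable meaning.

```python
# A minimal sketch contrasting an interpretable model with an opaque one
# on the same (synthetic, illustrative) classification task.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# A linear model: each coefficient states how a feature pushes the
# prediction up or down -- an explanation a user can actually read.
linear = LogisticRegression(max_iter=1000).fit(X, y)
print("Linear coefficients:", linear.coef_)

# A small neural network: often more accurate, but its learned weights
# have no direct human-readable meaning -- the "black box" objection.
mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000,
                    random_state=0).fit(X, y)
print("Hidden-layer weight shapes:", [w.shape for w in mlp.coefs_])
```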
As a user of decision algorithms, do you believe you hold algorithms to higher expectations of explainability than the decisions made by your colleagues?
In my next post, I will discuss some reasons why these double standards of transparency might be justified for certain classes of algorithmic decision making.