Automated Decision-Making — Good or Bad?

I used to think that automated decision-making removes prejudices, but maybe I was completely wrong.

Violy Purnamasari
7 min read · May 2, 2021
Photo by Markus Spiske on Unsplash

Technology is the way to go. I wouldn’t doubt that at all. Can we imagine a world where we are back to pigeon post instead of WhatsApp? I genuinely can’t. Technology has changed this world for the better. Well, for now. We think and we believe that technology is better. In the context of social development, we believe that by putting technology in place, we will be able to reduce prejudice and biases. Using technology in making decisions reduces human error and subjectivity. It contributes to innovative solutions and improves transparency in decision-making. Therefore, technology is capable of accomplishing what a human being is not able to.

What I want to write about here today, though, is a paradox. As much as technology has changed our lives, there are certain caveats that, if we don’t take good care of them, can make our lives miserable in the future. Technology, at the end of the day, is not perfect (hey, remember that nothing is perfect?). It starts to become an issue when we think that technology will solve all of our problems: when we think that technology is THE solution.

“Technology is not perfect.”

Disclaimer: I am not a technology expert. I have read a couple of fantastic books and attended some great lectures in the past. This writing is mainly inspired by the book “Automating Inequality” by Virginia Eubanks. If you like the topic I’ve written about here, you are bound to enjoy her book!

Photo by Lorenzo Herrera on Unsplash

Technology that automates decision-making

Your calculator is a technology. Your watch is a technology. Your zipper is a technology. So, what do I mean by technology? The technology I am talking about here is the kind that helps us make decisions. The technology that is built to reduce our decision-making burden. The technology that processes data through an algorithm.

Algorithms are found everywhere, from computing code, smartphones, GPS and social media to financial credit scoring and law-enforcement systems. Their usage is so widespread that most of the time we barely realise we are using them. An algorithm, in its simplest terms, is an instruction to solve a problem. In the computing world, it is a set of mathematical instructions that specify certain actions to calculate and yield a certain outcome. What is processed within that set of instructions is data. Data refers to a set of values, qualitative or quantitative, about certain objects or persons, which are fed into the algorithm to yield results. Data can be collected in various ways, such as through observation, records and experiments. Once collected, data can be measured and analysed, normally with the help of an algorithm, to discover patterns that can help with the decision-making process.
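To make those definitions concrete, here is a toy sketch in Python. Everything in it is invented for illustration: the field names, scores and threshold are hypothetical, not any real credit-scoring model.

```python
# A toy "algorithm" in the plain sense: a fixed set of instructions
# applied to input data to yield an outcome.
# (Hypothetical illustration: field names and thresholds are invented.)

def approve_loan(applicant):
    """Return True if a simple rule-based score passes a threshold."""
    score = 0
    score += applicant["income"] // 1000        # more income, higher score
    score += 10 if applicant["has_savings"] else 0
    score -= 5 * applicant["missed_payments"]   # penalise missed payments
    return score >= 40

# The "data": a set of values about one person, fed into the algorithm.
data = {"income": 45_000, "has_savings": True, "missed_payments": 2}
print(approve_loan(data))  # True
```

The “algorithm” is the fixed set of instructions inside `approve_loan`; the “data” is the dictionary we feed it. Change either one and the decision changes.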

Okay, so what?

Our relationship with technology is complicated. Technology by itself is not the bad guy. But it is how we make use of it that makes things complicated.

The big problem happens when most of us think that whatever decision comes out of the mighty technology is objective, that it is the best one and unarguable. This is the tricky part. It is not and it never will be. Nothing is perfect, and the same goes for technology. We should not assume otherwise. We should acknowledge that a decision made with the assistance of technology is just slightly better and more efficient, but it will never be the perfect one.

Photo by Sai Kiran Anagani on Unsplash

Automating decision making does not make it less subjective.

This is one of the common misunderstandings when moving various decision-making processes into the realm of an algorithm. Automated decision-making is not without bias. Implementing technology in an unequal and prejudiced society without careful consideration will only reinforce the social inequalities and create social reproduction within that society.

The formulation of an algorithm is highly dependent on two things:

  1. the data
  2. the value system of the person who formulates it.

When the data is skewed towards certain groups of people or discriminates against certain types of people, there is no way that the results churned out from such data are unbiased, and it is so important not to label such an algorithm as objective. Furthermore, the belief system of the person who formulates the algorithm also indirectly influences the way the algorithm is developed. Even when they strive for inclusiveness, objectivity and neutrality, what is built into their creations are their own perspectives and values. Hence, more often than not, the algorithm will inherit the pre-existing prejudices of the prior decision-makers. Instead of making decisions more objective, it reproduces the existing pattern of inequality.
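A minimal sketch of how that inheritance happens, using entirely made-up historical records: a “model” that simply learns from past approval decisions will faithfully reproduce whatever prejudice those decisions contained.

```python
# Hypothetical past human decisions: (group, ability score, approved?).
# Group "B" applicants were never approved, despite comparable scores.
history = [
    ("A", 60, True), ("A", 55, True), ("A", 45, True), ("A", 40, False),
    ("B", 60, False), ("B", 55, False), ("B", 45, False), ("B", 40, False),
]

def learned_approval_rate(records, group):
    """A naive 'model': the per-group approval rate fitted to past outcomes."""
    outcomes = [approved for g, _, approved in records if g == group]
    return sum(outcomes) / len(outcomes)

# The model encodes the historical prejudice, not the applicants' ability:
print(learned_approval_rate(history, "A"))  # 0.75
print(learned_approval_rate(history, "B"))  # 0.0
```

Nothing in the code mentions prejudice; the bias arrives entirely through the data, which is exactly the point.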

Even when an algorithm is completely value-neutral, the way data is collected, the method of analysing it, as well as the interpretation of the results, are all fundamentally social processes that take place within pre-existing social, cultural and political systems. The data to which algorithms are applied have their limits and deficiencies too. Even datasets with billions of pieces of information do not capture the fullness of people’s lives and the diversity of their experiences. Moreover, the datasets themselves are imperfect because they do not contain inputs from everyone, or from a representative sample of everyone.

Hence, when new technology is implemented within an old organisational structure, enduring problems such as racial prejudice will not evaporate but will materialise along the new contours of the technology. Algorithms create a new mask for inequality and prejudice to hide behind: the cloak of evidence-based objectivity and infallibility. It leads us to believe that automated decision-making contains fewer prejudices than decisions made by humans. But history has proven that decisions made by algorithms act a lot like the older systems. The assumptions and preconceptions are encoded neatly into the system. The algorithm has instead replaced the occasional biased decision-making of a human being with the systematic, logical discrimination of a technology.

Bias is within the algorithm. Building an algorithm from a place of inequality will inevitably reinforce inequalities.

In other words, the algorithm makes discrimination more efficient.

“Models are opinions embedded in mathematics.” — Cathy O’Neil

Photo by Chris Liverani on Unsplash

Are humans just statistical figures?

The extensive usage of algorithms has removed the complexity and context of the social system. When people are reduced to numbers and statistical figures, we lose sight of the hows and whys behind every action in favour of measurable behaviours and outcomes. Arguably, the complexity of a social system cannot be confined within the boundaries of data. It is dehumanising to sum up a whole life in a single number.

Patterns observed in big data, for instance, could show a median trend but do not tell the whole story, and we need to be cautious about concluding causality from them. When we start looking at people’s behaviour as just a graph, we start to label people as numbers and generalise the problem. When we look at social problems as a mathematical equation, we treat outliers as unimportant. This is concerning, because it means we see some people as less valuable than others. But hey, that single outlier is still a human being. We need to remember that we design technology not for data points, probabilities or patterns, but for human beings.
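The point about medians and outliers can be shown with a few toy numbers, all invented for illustration: the median trend looks perfectly healthy while one person’s experience is catastrophic.

```python
from statistics import median

# Hypothetical waiting times (in days) for some public service.
wait_times = [2, 2, 3, 3, 4, 180]  # one person waited half a year

print(median(wait_times))  # 3.0: the "trend" looks fine
print(max(wait_times))     # 180: the outlier is still a human being
```

An analysis that reports only the median would conclude everything is working; the person behind the 180 disappears from the picture entirely.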

“You can’t build any algorithm that can handle as many variables, and levels of nuance, and complexity as human beings present.” — Gary Blasi

Photo by Brett Jordan on Unsplash

Technology is not the issue

To say that algorithms are imperfect is not to say that they are undesirable. Algorithms have empowered people as much as they have marginalised some. It comes down to how we use the technology and how we interpret the results. It is also highly dependent on the institutional context and the political and societal structures in which the technology is used. It is a mere fantasy to think that an algorithm will magically remove prejudices and resolve racial issues within our culture, policies and institutions. It is indeed just a fantasy.

What is important is to remember that technology simply assists decision-making, and it doesn’t come close to perfection. It will never be fully objective nor foolproof.

Technology is crucial: it has the power to empower, but it also has the ability to legitimise discrimination. An algorithm is merely a tool; it depends on who uses it, how, and for what.

Violy Purnamasari

Cambridge graduate | Trying to make this world a slightly better place