Negative Space

Have you ever noticed that after you browse the internet for new clothes, you get chased across other websites by ads for similar or even identical products? It can feel as if someone is watching you. Of course, that isn't the case. These websites have constructed a profile of you based on your browsing history and now show you personalized ads. This can be very convenient: nobody enjoys digging through a pile of things they don't like, and if algorithms point me toward the things I like best, that's a win. But is there a trade-off for this convenience?

The ads you see are determined by algorithms, a buzzword you hear everywhere these days, yet few people know what an algorithm actually is. An algorithm is a sequence of instructions that leads to a certain goal. It takes in information to determine what choice to make. That information can be used directly or fed into a prediction model. The models used in marketing, for instance, are often statistically very complex: they draw on data from millions of users to derive correlations and make predictions such as 'people who buy these shoes will probably like those pants.' The algorithm takes in this information and then makes a decision, for instance which ads to show you for maximum effect. In this way, online marketers have revolutionized micro-targeting: advertising at a personalized level. Thanks to the Internet and the exponential growth of computing power, data about individuals can be analyzed in a level of detail never achieved before. Algorithms are getting better at knowing you as more data is 'mined' and models are improved. Predictions are becoming more accurate and therefore more intimate.
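The loop described above, data input, prediction model, decision, can be sketched in a few lines of Python. This is only a toy illustration: the products, co-purchase scores, and the fallback "generic ad" are all invented, not any real marketer's model.

```python
# A toy micro-targeting "algorithm": input data -> prediction model -> decision.
# All products and scores below are invented for illustration.

# Correlations a real model would derive from millions of purchase histories,
# e.g. "people who bought shoes often bought pants too".
CO_PURCHASE_SCORE = {
    ("shoes", "pants"): 0.8,
    ("shoes", "socks"): 0.6,
    ("laptop", "mouse"): 0.7,
}

def pick_ad(browsing_history):
    """Choose the ad predicted to have the maximum effect on this user."""
    scores = {}
    for seen in browsing_history:                            # the data input
        for (bought, also_buys), score in CO_PURCHASE_SCORE.items():
            if bought == seen:                               # the prediction
                scores[also_buys] = max(scores.get(also_buys, 0.0), score)
    if not scores:
        return "generic ad"        # nothing is known about this user yet
    return max(scores, key=scores.get)                       # the decision

print(pick_ad(["shoes"]))   # -> pants, the most strongly correlated product
```

The more browsing history the function receives, the more candidate ads it can score, which is exactly why more data makes the predictions more accurate.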

What if the algorithm knows things about you that you don't want others to know? The American supermarket chain Target, for instance, uses a system in which a customer's purchases are tied to their account, and algorithms determine from previous purchases which coupons to send to your home. When the father of a teenage girl found out she had received coupons for baby products, he angrily went to the store: his daughter was still in high school, so how dare they send her these? Later, he found out that his daughter was indeed pregnant. The change in her purchasing pattern had triggered the algorithm, which predicted the pregnancy before her father knew about it. This example shows that the data we unconsciously give away reveals more about us than we realize. We tend to think we are unique, and thus that a general algorithm can't say much about us. But when data from millions of individuals is gathered and processed, chances are many people are in the same circumstances as you. Teen pregnancies may be rare in your immediate circle, yet common across Target's entire customer database. With massive amounts of data and the statistical means to process them, even the slightest change in behavior is enough to make a remarkable difference to the algorithm. This way, algorithms may someday know us better than we know ourselves.
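The mechanism behind the Target story, many individually weak behavioral signals summed into one confident prediction, can be sketched as follows. The signal items, weights, and threshold here are invented for illustration; they are not Target's actual model.

```python
# Toy purchase-pattern predictor: no single purchase means much,
# but the combination crosses a decision threshold.
# All weights and the threshold are made up for illustration.
SIGNAL_WEIGHTS = {
    "unscented lotion": 0.3,
    "vitamin supplements": 0.25,
    "large tote bag": 0.15,
    "cotton balls": 0.2,
}

def pregnancy_score(recent_purchases):
    """Sum the weights of known signals in a customer's recent purchases."""
    return sum(SIGNAL_WEIGHTS.get(item, 0.0) for item in recent_purchases)

def should_send_baby_coupons(recent_purchases, threshold=0.6):
    # One purchase stays below the threshold; a slight shift in the
    # overall pattern is enough to tip the decision.
    return pregnancy_score(recent_purchases) >= threshold

print(should_send_baby_coupons(["unscented lotion"]))              # False
print(should_send_baby_coupons(
    ["unscented lotion", "vitamin supplements", "cotton balls"]))  # True
```

This is why "even the slightest change in behavior" matters: each purchase only nudges a score, but across enough signals the nudges add up to a prediction.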

Now, what is free choice? When you make a decision, does it always feel as if you made it completely consciously, unimpeded by outside factors? It can be hard to swallow, but our free will is probably more limited than we would like to admit. It is quite easy to manipulate people into doing something they do not consciously want to do. That sounds evil, but it happens all around us. Take a look at how a supermarket is arranged: shopping carts keep getting bigger, and the smell of bread is artificially pumped into the store to make you feel hungry. All of this is done so you'll buy more than you initially intended.

If you think about it, we function as algorithms as well. We perceive, which is our data input, and base our decisions on that data. People's behavior can therefore be manipulated by controlling this input. If the input is the smell of freshly baked bread, you'll feel hungry and want to buy more. If the input is the sight of an aggressive dog attacking you, you'll run. The big difference between a human algorithm and a digital one lies in the algorithm itself; both, however, are equally dependent on input data to function. If you don't perceive the dog attacking you, you will not react. When you understand an algorithm well enough, you know which inputs trigger which actions. Then you only need to control the inputs, and you can manipulate the algorithm into doing anything, whether it is digital or human.

An area where this has become particularly influential is politics, especially in how voters receive their news. During the last US presidential election, the split between Republicans and Democrats was wider than ever. You probably recognize the feeling: you can't even imagine how the other side can believe such insane things. Are they crazy? The answer often given is that the American population is split in half, each side with its own news sources, and therefore each perceives the world in a completely different way. One side sees Donald Trump as a savvy businessman who wants to save the working class and keep criminal immigrants out of the country; the other sees him as a sexist, racist crook who is completely unfit for office.

As algorithms get better at predicting political behavior, political parties can now micro-target voters. The messages voters receive shape their perception of the political world in the most effective way, and since the 'filter bubble' shows only these messages, their worldview is reinforced again and again. They will believe they chose Trump or Clinton voluntarily. Yet the information they received, which often included fake news, was aimed solely at getting them to make that choice, while conflicting information was filtered out. If others decide which inputs go into your algorithm, and you decide based on those inputs, is that free choice? The troubling thing about the rising use of algorithms is not that they force us to do things we don't want, but that they make us want things we might not have wanted otherwise. When the algorithm decides what we see and knows how we work, it is up to the owner of the algorithm to decide what we should do.
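The filtering step itself is almost trivially simple to implement, which is part of what makes it unsettling. Here is a minimal sketch; the headlines and the leaning labels "A" and "B" are invented for illustration.

```python
# Toy "filter bubble": show only items that agree with the user's
# inferred leaning; conflicting information never reaches them.
def filter_feed(articles, user_leaning):
    """articles: list of (headline, leaning) pairs; leanings are toy labels."""
    return [headline for headline, leaning in articles
            if leaning == user_leaning]

feed = [
    ("Candidate praised as savvy businessman", "A"),
    ("Candidate called unfit for office", "B"),
    ("Opposition plan wins expert support", "B"),
]

# A user inferred to lean "A" only ever sees worldview-confirming headlines.
print(filter_feed(feed, "A"))
```

The user never sees that anything was removed; from the inside, the filtered feed simply looks like "the news."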

But does any of this matter? You get better clothing advice, better-matched partner suggestions, and a tailor-made political orientation, and they all fit you better than the choices you would have made without digital algorithms, since the algorithms know you so well. The age of worrying about scarcity is almost over; those decisions are bound to be made for us. And the best part is, you'll still believe you made those choices consciously. You don't know about all the things that are kept from you. There is no need to question whether your beliefs are really right, because all the information you receive is tailored to reinforce those very beliefs. Whether we move toward a society led by digital algorithms is a choice. We currently stand at a crossroads to our future, so choose wisely. And remember: the easiest way is often not the best one.