Algorithmic Futures discusses technology and its effects on society.

Algorithmic Trust

What is trust? It is a question that humans have most likely asked since the dawn of their existence. In a more general sense, it seems like a naive question. “Trust is everything” is a typical answer, covering everything from direct interpersonal connections all the way to trusting that money, printed paper, can be exchanged for food. However, the attribute most commonly associated with trust is reliability. It is hard to trust someone or something with erratic or unpredictable behaviour.

From an etymological perspective, trust means “to be firm, be solid,” which relates to the often referred to felt sense of trust, that deep-rooted instinct of whether someone can be trusted or not. Trust, therefore, seems to have two dimensions: one rational, something we can relate to, evaluate and possibly even measure, and the other instinctive, a deeply rooted intuition. The Germans like to refer to this as Bauchgefühl, a feeling in the belly.

Examples of the former include trusting the financial system, which enables us to pay with paper that, in essence, is completely useless unless backed by a trusted agent, such as a central bank, which in theory should hold gold reserves that form the ultimate measure and basis of trust. The latter, the Bauchgefühl-related form of trust, is more complex to understand and quantify. Studies point to the shape of faces or voice patterns, the way a person moves or presents themselves, their background, their education, whether they have an academic degree, whether they are wearing a white coat, and so forth. This type of trust relates more to social complexities and can be observed between humans as well as between humans and animals, such as trusting your cat to come home at night or your dog not to attack.

Dividing trust in this way makes sense for the sake of this article: it allows us to focus on the type of trust that is external or system-based, and to reflect further on Luhmann’s take on trust, which he sees as a way to deal with complexity, placing trust more in the camp of survival instincts than of a general, abstract concept.

From this perspective, and viewing trust through Luhmann’s lens of complexity reduction, several arguments about the current complexities of life and the advent of algorithmic decision making become interesting. The world, so it seems, and as is often stated, has become more complex. In this article, however, the argument followed is that the world has not become more complex but has rather lost trust in its systems, and partly in the functions running those systems. The reason for this decline is not immediately clear; indeed, it might not be a decline at all but rather a shift in trust, away from human-led systems and towards algorithmic systems.

This shift of trust from human-based systems towards algorithmic systems is apparent in the example of self-driving cars. A self-driving car is an algorithmic system that is trusted more than a human driver, which means that, for the first time in history, we are trusting algorithmic decision making over human decision making. It is a shift away from technology that supports human decision making, which most technology up to now has been, from the hammer to the automobile, towards algorithms that make decisions for humans, and with it a shift of trust away from the human driver and towards the algorithm.

This move towards algorithmic trust has been quietly emerging for some time, from algorithms recommending content, such as movies and news articles, to fully orchestrated news and social media streams. Trust, therefore, to add to the dual view of system-based and intuitive trust introduced above, also has an intrinsic, or implicit, dimension and an extrinsic, or explicit, one. We might not trust an algorithm to curate a safe content stream for our children; however, we implicitly trust that the social media company that owns the app is governed by laws which we, in turn, trust explicitly.

An even larger shift towards algorithmic trust can be observed in financial markets, where so-called fiat currencies, such as the dollar, yen and euro, are slowly but surely being challenged by cryptocurrencies. Cryptocurrencies answer to no national central bank and therefore to no human-made monetary policy. They are simply cryptographically secured ledgers that are practically unbreakable and ensure that value transactions, or more broadly any digital transactions, are tracked and therefore valid. Moreover, algorithmic trading, a mechanism in which an algorithm decides on buying and selling actions, is slowly but surely taking over the trading world. The trust shift in the first case is from trusting a bank towards trusting a decentralized ledger; in the second, from trusting a broker towards trusting an algorithm.
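
To make the idea of such a ledger concrete, here is a minimal sketch in Python, with purely illustrative transaction data and names, of a hash chain, the basic structure underlying these ledgers: each entry embeds the hash of the previous one, so tampering with any past transaction invalidates every entry that follows.

    import hashlib
    import json

    def entry_hash(prev_hash, transaction):
        # Hash the previous entry's hash together with the transaction data.
        payload = json.dumps({"prev": prev_hash, "tx": transaction}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def append_entry(ledger, transaction):
        # Append a transaction, chaining it to the hash of the last entry.
        prev_hash = ledger[-1]["hash"] if ledger else "genesis"
        ledger.append({"tx": transaction, "prev": prev_hash,
                       "hash": entry_hash(prev_hash, transaction)})

    def verify(ledger):
        # Recompute every hash; any change to a past entry breaks the chain.
        prev_hash = "genesis"
        for entry in ledger:
            if entry["prev"] != prev_hash or entry["hash"] != entry_hash(prev_hash, entry["tx"]):
                return False
            prev_hash = entry["hash"]
        return True

    # Purely illustrative transactions.
    ledger = []
    append_entry(ledger, {"from": "alice", "to": "bob", "amount": 10})
    append_entry(ledger, {"from": "bob", "to": "carol", "amount": 4})
    print(verify(ledger))            # True: the chain is intact
    ledger[0]["tx"]["amount"] = 99   # tamper with a past transaction
    print(verify(ledger))            # False: the tampering is detected

Running the sketch prints True for the intact chain and False after the tampering; it is this property, rather than trust in a central record keeper, that makes the transactions verifiable.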

But it does not end there. Political representation is also being challenged. It is questionable whether we will continue to vote for representatives who debate with others to enact laws reflecting public needs and desires. It is technically possible and entirely feasible to go a more direct route by using online platforms through which the public can vote laws directly into effect. In this case, the middleman, the politician, is no longer needed unless trusted to do a better job than the public.

An example of this potentially going wrong can be found in the Brexit debate, where it became clear that the implications of Brexit were too complex for the general public to understand. Hence, trust shifted to those who could explain Brexit’s implications in overly simple terms, typically the parties and individuals representing the Leave campaigns, who clearly communicated, for example, how much money would supposedly be saved by leaving the European Union.

It seems, therefore, that we are not currently stuck in a phase of humanity that is overly complex or overwhelming. Rather, we are stuck in a crisis of trust, and hence tend to trust controllable and calculable, that is, measurable, systems over erratic and unpredictable humans, who are driven more by emotion than by fact. Inter-human trust, of course, is essential for social interaction, but it is perhaps not that important when one simply wants to get home safely at three in the morning in heavy rain and trusts an algorithm more than one’s own ability to pilot a vehicle.

On the other hand, trust erosion has gathered significant momentum, from not knowing whether images or videos online are real or generated, to not knowing whether one’s savings are safe from cyber attacks, to not knowing whether the economy will withstand a recession. “What can I trust?” becomes an essential question.

Overall, and to conclude this initial discussion, trust, even though seemingly mysterious in its nature and application, can be viewed in very simple terms, such as who is trusted and why. Saying the world has become too complicated might be an oversimplification; the reality might be that we do not trust as much as we used to, and with that lack of trust comes an increased desire to take control of, or understand, the things we do not trust. That, however, can be very complicated and overwhelming. Algorithmic trust can be a solution if governed independently, that is, not owned or controlled by entities driven by a paradigm of shareholder value, which in turn would complicate the situation even more. Stakeholder value, that is, democratised trust via independent agents, is possible, feasible and definitely trustworthy.