Algorithms Don't Make Decisions!

Dr Andrew Dwyer

[Photo by Johnny Cohen on Unsplash]

We are frequently told that algorithms, machine learning and AI make decisions on our behalf. In this blog post, I make an argument for a different claim: that computation can never make a decision. Instead, I reposition algorithmic choices as political and retain decision as a distinctly human practice.

Algorithms permeate many aspects of our lives: from what is displayed on social media to who is deemed suspicious at the border. Much work has sought to understand how learning data and algorithmic practice lead to decision-like outcomes, and in the process, to expose their discriminatory and sometimes violent impacts (Benjamin, 2019; Noble, 2018). However, I argue that we must also move beyond ‘biased’ learning data to attend more acutely to thresholds, processual decision, and computational choice.

During an (auto)ethnography of a malware analysis laboratory, I increasingly sought to conceptualise computational agency in response to the inadequacies I found in talking of ‘decision’. To do so, I drew on the work of the American theorist N. Katherine Hayles. In her book Unthought: The Power of the Cognitive Nonconscious (2017), she claims that the world is split between ‘cognizers’ and ‘noncognizers’. The former includes humans, animals, plants and, importantly, electronic, digital computation[1]: entities that have a capacity to process signs and act upon them, thereby interpreting and making a choice based on that interpretation[2]. That choice can be limited – a simple yes or no, for instance – but it cannot be wholly known in advance. This is different to noncognizers, such as rocks or ocean waves, which do not interpret signs but are instead compelled to follow the forces exerted upon them. This may appear to be a detour from discussing algorithms – but I believe it is essential. If computation can make choices, then it has an ability to diverge from human decisions and to be a political actor. Algorithms do not just linearly extend decisions made by us but are active negotiators. We thus arrive at a complex crossroads: algorithmic architectures may derive from a decision (through their design or learning data), but, through their interpretation and choices, they also reconfigure something distinct that is not equivalent to decision either.

Today’s algorithmic choices are most advanced in ‘high-dimensional’ spaces that construct relationships between data. Machine learning algorithms – such as neural networks – permit a greater recursivity that folds prior computational choices upon one another, leading to highly complex outcomes that arrive on a different plane of recognition to us. This is enabled by greater abstraction from computational hardware, allowing the capacity for choice to grow. Thus, I do not regard machine learning algorithms as some new arbiter of decision, but as a continuation of an already present potential for choice that we are only recently beginning to recognise as ‘intelligent’. In this world, it is not clear where human decision (and its associated intent and responsibility) can reasonably be ‘placed’, or whether it can hold as computational choice becomes ever more interwoven in the practices of everyday life.
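To make that recursivity concrete, here is a minimal sketch in Python (using NumPy). The network, its dimensions and its weights are invented for illustration – in practice the weights would be learned from data – but it shows how each layer folds the previous layer’s outputs into a new representation, so that the final output is an abstraction of abstractions rather than a step that can simply be read off from the input.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    # Each layer folds the previous layer's outputs into a new,
    # nonlinear representation: a prior computational 'choice'
    # becomes the raw material for the next.
    return np.tanh(w @ x + b)

# A toy three-layer network over a four-dimensional input. The random
# weights stand in for what would, in practice, be learned from data.
x = rng.normal(size=4)
w1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
w2, b2 = rng.normal(size=(8, 8)), rng.normal(size=8)
w3, b3 = rng.normal(size=(1, 8)), rng.normal(size=1)

h1 = layer(x, w1, b1)    # an abstraction of the input
h2 = layer(h1, w2, b2)   # an abstraction of that abstraction
out = layer(h2, w3, b3)  # a single value in (-1, 1)
print(out[0])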

So, why do I not treat algorithmic recursivity as equivalent to (human) reflexivity, which I argue is a condition for decision? First, algorithms process and (re)cognise the world differently to us – through the calculative – rather than through the affective, reflexive, embodied experience that we have[3]. The French philosopher Jacques Derrida articulates how calculative modes cannot render decision:

“[A] decision does not simply consist in its final form... It begins, it ought to begin, by right or in principle, with the initiative of learning, reading, understanding, interpreting the rule, and even in calculating. For if calculation is calculation, the decision to calculate is not of the order of the calculable, and must not be.” (Derrida, 1992, p. 24, emphasis added)

The decision must go “through the ordeal of the undecidable”; otherwise “it would only be the programmable application or unfolding of a calculable process.” Here, calculation can form part of a decision but is not equivalent to it. If we accept that algorithms process the world in a calculative mode, forming different senses of normality, then there is no possibility of decision as a reflexive order of choice practised by computation. We can therefore maintain the claim that algorithms don’t make decisions.

This means we can have a more honest conversation about algorithms. As computational choice becomes more extensive, the decision to implement, the degree of choice afforded to algorithms, and how their thresholds[4] are set will require further interrogation. The biases of learning data require further analysis – but this is not enough – as choices extend away from, challenge, and intertwine with human decision. Computation’s choices are political. Thus, there is something more unsettling at work: we cannot delegate decision outwards, but neither can we wholly attribute how an algorithm chooses or transforms our decisions.

We should not reduce our interpretations to those advocated by certain elements of the ‘AI Ethics’ debate that focus overly on ‘ethics’ as though it can be embedded within algorithms. This is folly: algorithms make choices that exceed our ethical notions and recognise the world differently. Perhaps more crucially, as Louise Amoore details, “it is not the case that algorithms bring new problems of opacity and partiality but precisely that they illuminate the already present problem of locating a clear-sighted account in a knowable human subject” (Amoore, 2019, p. 150). As much as computational choice may be opaque, our decisions are never wholly our own; they are always already partial. We have a great reflexive capacity, but I do not claim that this makes decision better per se. However, our ethical notions derive from our (re)cognition of the world, based on our (often contested) normals, meaning we cannot simply outsource or ‘build in’ our socially negotiated normals to a calculative mode. I reserve decision for humans not out of superiority but because of its different make-up, its capacity for reflexive ethical engagement, and thus a responsibility to one another.

Decision cannot, and must not, be reduced to the order of the calculative.

[1] I refer to current von Neumann computational architectures based on electronic, digital binary in order to distinguish between different forms of computation. A convincing distinction comes from a blog post by Blake Richards (2018), who argues that (human) brains are also computational.

[2] With computation, Hayles (2019) terms this process ‘cybersemiosis’: a computational form of sign-exchange and choice.

[3] This is, no doubt, a radical claim for some. But I do not think it an extreme one: we, too, build up our (re)cognition through different matter and relations. Thus, I do not think there is a radical discontinuity between humans and other cognizers, but a degree of difference, which results in differing forms of choice (of which decision is a particularly human, high-level, abstracted form).

[4] This approach is indebted to the work of Louise Amoore (2016, 2018, 2019) on how thresholds, particularly in neural network algorithms, operate on outputs between 0 and 1, where a threshold must be set in order to render something actionable.
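As a minimal sketch of this point (in Python; the score and threshold values here are invented for illustration, not drawn from Amoore’s examples): a network of this kind can only emit a value between 0 and 1, and whether that value becomes actionable depends entirely on a threshold that someone has set.

```python
import numpy as np

def sigmoid(z):
    # Squash a raw network output into the interval (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

# A hypothetical raw score for a single case, as a final network
# layer might produce it.
raw_score = 0.4
p = sigmoid(raw_score)  # roughly 0.60: neither a clear yes nor a clear no

# The threshold is not produced by the network; it is set by people.
# The same output is rendered actionable, or not, by that setting.
for threshold in (0.5, 0.7, 0.9):
    print(f"threshold={threshold}: actionable={p >= threshold}")
```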

Bibliography

Amoore, L. (2016). Cloud geographies: Computing, data, sovereignty. Progress in Human Geography. https://doi.org/10.1177/0309132516662147

Amoore, L. (2018, May). Aggression and the harms of the algorithm. Presented at the Aggressive Architectures Workshop, School of Geography and the Environment, University of Oxford. Retrieved from https://web.archive.org/web/20180426095203/http://www.geog.ox.ac.uk/events/180510-aggressivearchitectures-iklinkeorg.html

Amoore, L. (2019). Doubt and the Algorithm: On the Partial Accounts of Machine Learning. Theory, Culture & Society, 36(6), 147–169. https://doi.org/10.1177/0263276419851846

Benjamin, R. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Cambridge, UK: Polity Press.

Derrida, J. (1992). Force of Law: ‘The Mystical Foundation of Authority’. In D. Cornell, M. Rosenfeld, & D. Carlson (Eds.), & M. Quaintance (Trans.), Deconstruction and the Possibility of Justice (pp. 3–67). New York: Routledge.

Hayles, N. K. (2017). Unthought: The Power of the Cognitive Nonconscious. Chicago: University of Chicago Press.

Hayles, N. K. (2019). Can Computers Create Meanings? A Cyber/Bio/Semiotic Perspective. Critical Inquiry, 46(1), 32–55. https://doi.org/10.1086/705303

Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press.

Richards, B. (2018, October 1). Yes, the brain is a computer…. Retrieved 1 March 2020, from Medium website: https://web.archive.org/web/20200301171307/https://medium.com/the-spike/yes-the-brain-is-a-computer-11f630cad736