The Political Affinities of AI

The article discusses the political implications of Artificial Intelligence (AI), arguing that AI acts as a political technology even as its issues of bias are treated as mere questions of fairness. It highlights that the technical operations of neural networks can reinforce or extend political currents, and that their calculative categorisations can trigger chains of decisions with real consequences. McQuillan suggests that AI is becoming the unifying logic of legitimation across corporations and governments, and criticises it for encouraging thoughtlessness in the sense described by Hannah Arendt. Lastly, the article calls for reclaiming agency not by attacking AI as such, but by challenging the system that produces AI in its own image.

Highlights

(McQuillan, 2019, p. 1)

We need a radical politics of AI, that is, a politics of artificial neural networks. AI acts as political technology, but current efforts to characterise it take the form of liberal statements about ethics. Issues of bias in AI are treated as questions of fairness, as if society is already a level playing field that just needs to be maintained. Transparency and accountability are seen as sufficient to correct AI problematics (ACM FAT 2019), as if it is being introduced into well-functioning and genuinely democratic polities.

(McQuillan, 2019, p. 1)

The vast parallel iterations carried out by backpropagation cast an opacity over AI by making its optimisations very hard to reverse to human reasoning (Lipton 2016). Algorithmic judgements that affect important social and political decisions are thus removed from discourse.

(McQuillan, 2019, p. 2)

how the concrete technical operations of neural networks reinforce, enforce or extend these political currents, and how (if at all) they might instead serve the goals of social justice

(McQuillan, 2019, p. 2)

Neural networks are neoplatonic; they claim a hidden mathematical order in the world that is superior to direct experience (McQuillan 2017).

(McQuillan, 2019, p. 2)

Its calculative categorisations trigger chains of machine and human decisions with real consequences, involving the allocation or removal of resources or opportunities. Embedded in deep learning, obfuscated from due process or discourse, these numerical judgements have a law-like force without being of the law.

(McQuillan, 2019, p. 3)

AI is poised to become the unifying logic of legitimation across corporations and government.

(McQuillan, 2019, p. 3)

existing instances like Amazon Web Services are increasingly indistinguishable from critical national infrastructure

(McQuillan, 2019, p. 3)

Bourdieu’s habitus; structured structures predisposed to function as structuring structures (Bourdieu 1990).

(McQuillan, 2019, p. 3)

AI encourages thoughtlessness in the sense described by Hannah Arendt; the inability to critique instructions, the lack of reflection on consequences, a commitment to the belief that a correct ordering is being carried out (Arendt 2006).

(McQuillan, 2019, p. 4)

No neural network has any understanding of anything, in the form of an abstract model or ontology that can be freely applied to novel situations. That is, neural networks are incapable of exactly the kind of adaptive and analogical thinking that characterises even young children.

(McQuillan, 2019, p. 5)

the mere addition paradox shows that, when optimising a social welfare function, for any population of happy people there exists a much larger population with miserable lives that is ‘better’ (more optimised for total wellbeing) than the happy population (Eckersley 2018).
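
A minimal arithmetic sketch of the paradox referenced here, using hypothetical numbers (not from McQuillan or Eckersley): under a purely total-wellbeing objective, a sufficiently large population of people whose lives are barely worth living scores higher than a small, very happy population.

```python
# Mere addition paradox under a total-welfare objective.
# The population sizes and wellbeing values below are illustrative assumptions.

def total_welfare(population_size: int, wellbeing_per_person: float) -> float:
    """A social welfare function that simply sums individual wellbeing."""
    return population_size * wellbeing_per_person

happy = total_welfare(1_000, 10.0)       # 1,000 very happy people        -> 10,000
miserable = total_welfare(200_000, 0.1)  # 200,000 barely-contented people -> 20,000

# An optimiser maximising total wellbeing 'prefers' the larger, miserable population.
assert miserable > happy
print(happy, miserable, miserable > happy)  # 10000.0 20000.0 True
```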

(McQuillan, 2019, p. 6)

“the tyranny of the probable”

(McQuillan, 2019, p. 7)

Ivan Illich, in his call for convivial technology, proposed ‘counterfoil research’ whose goal is to detect “the incipient stages of murderous logic in a tool” (Illich 1975), where a tool, for Illich, means a specific combination of technologies and institutions.

(McQuillan, 2019, p. 8)

A new Luddism is one way to characterise attacks on self-driving vans by residents in Arizona, fed up of the way Waymo is testing its autonomous AI in their communities and on the streets where their children are playing. Deep learning has proved again what the radicals of the 1970s claimed, that the domain of production has extended to everyday life, and that we live in the ‘social factory’ (Cuninghame 2015).

(McQuillan, 2019, p. 8)

Reclaiming our own agency is not to attack AI as such but to challenge the system that produces AI in its own image. This is what it means to take sides with the possible against the probable.

McQuillan, D. (2019). The Political Affinities of AI. In The Democratization of Artificial Intelligence: Net Politics in the Era of Learning Algorithms (pp. 163–173). transcript Verlag.