Illustration by Somnath Bhatt
How does AI reproduce and reinvent old nationalist projects?
A guest post by Kerry Mackereth [now Kerry McInerney]. Kerry is a Christina Gaw Postdoctoral Research Associate at the University of Cambridge Centre for Gender Studies, and a member of the Gender & Technology Research Project team. Twitter: @KerryMackereth [now @KerryAMcInerney]
This essay is part of our ongoing “AI Lexicon” project, a call for contributions to generate alternate narratives, positionalities, and understandings to the better known and widely circulated ways of talking about AI.
This essay examines ‘AI nationalism,’ focusing on the rhetoric of the AI arms race between China and the United States. In critiquing this discourse, I do not aim to defend, in any way, the AI policies of the Chinese state, especially in light of its violent persecution of Uyghurs and other ethnic and religious minorities. Instead, I argue that the critical AI community must reckon with the imbrication of AI with nationalism. Activists and scholars have problematised the violent deployment of AI technologies within national borders and in the context of border control. Yet we must also consider how these critiques are frequently subsumed within nationalist discourses of security and geopolitical competition. A revitalised understanding of AI nationalism is thus essential for a new AI lexicon that foregrounds the power relations that shape the field of AI.
In this piece, I highlight the racialised and imperialist foundations of AI nationalism, exploring how AI nationalism builds upon previous histories of racial capitalism and notions of intractable civilisational difference. I specifically explore how the racialised affective relations of the ‘Yellow Peril’ shape the AI nationalist discourse around the relationship between the United States and China (Tchen and Yeats 2014; Frayling 2014). Nonetheless, the racialised form of AI nationalism I interrogate here is only one example of how AI reanimates old nationalist projects, and how nationalism is appropriated by technological agendas. Hence, I invite others to bring their own reflections on the many facets of AI nationalist policies and discourse.
The United States’ National Security Commission on Artificial Intelligence (NSCAI) frames AI development as a ‘broader technology competition’ that will determine the fate of the United States in the ‘AI era’ (NSCAI 2021). It states that ‘we must win the AI competition that is intensifying strategic competition with China,’ and that ‘China’s plans, resources, and progress should concern all Americans’ (NSCAI 2021). The NSCAI report epitomises AI nationalism, or ‘a new kind of geopolitics’ that connects geopolitical supremacy to AI development (Hogarth 2018).
The primary manifestation of AI nationalism is the intensifying rhetoric of the AI ‘arms race,’ which casts AI development as a zero-sum game in which the victor will not only control the most advanced AI technology but also enjoy economic, political, and military dominance over all other nations (ÓhÉigeartaigh et al. 2020; Cave & ÓhÉigeartaigh 2018). A 2017 Pentagon report captures this logic in its insistence that ‘if we allow China access to these same technologies concurrently, then not only may we lose our technological superiority, but we may even be facilitating China’s technological superiority’ (Mozur & Perlez 2017). AI nationalism threatens serious geopolitical and technological upheaval through its ‘winner takes all’ mentality (Cave and ÓhÉigeartaigh 2018: 36). These risks include unsafe and irresponsible AI development; the deepening of mutual suspicion and antagonism between geopolitical competitors; and potentially even the use of other forms of military aggression, such as cyberattacks or the targeting of AI personnel, in the race for AI dominance (Cave and ÓhÉigeartaigh 2018).
Yet AI nationalism cannot be understood without careful attention to how racism and imperialism underpin the AI arms race. First, the AI arms race is not merely the pursuit of technological expertise, geopolitical dominance, or military power over another nation. Instead, it is a fundamental contest over racial and civilisational superiority, one deeply rooted in previous histories of colonial violence and racial capitalism. As Michael Adas notes, the West’s self-perception of its own technological prowess played a justificatory role in eighteenth- and nineteenth-century colonialism, becoming a linchpin in Western notions of racial and civilisational superiority (Adas 1990). By the mid-nineteenth century, ‘machines and equations, or their absence, were themselves indicators of the level of development a given society had attained’ (Adas 1990: 196). The concept of technological progress was established as one of the key criteria for distinguishing ‘civilized’ cultures from ‘barbarian’ or ‘savage’ ones (Adas 1990: 196).
These histories are especially pressing in the field of AI, given how technological development intersects with perceptions of intelligence. As Stephen Cave notes, the concept of intelligence was central to the construction of racialised and gendered hierarchies, and these value-laden histories of intelligence continue to shape AI development and discourse (Cave, 2020). In this sense, AI nationalism cannot be disaggregated from longer histories of racialisation, where perceptions of technological prowess were used to divide people into the categories of human, partly human, and nonhuman (Weheliye 2014).
Second, AI nationalism draws from and produces notions of entrenched and intractable civilisational difference. ÓhÉigeartaigh et al. (2020) argue that perceived cultural differences between the ‘West’ and the ‘East’ inhibit international co-operation in AI development. While real cultural and political differences matter, the rhetoric of AI nationalism gestures towards the assumed impossibility of civilisational co-existence. Samuel Huntington’s (in)famous essay ‘The Clash of Civilizations?’ (1993) posits that, unlike ‘political and economic’ conflicts, civilisational clashes are far more difficult to resolve because ‘in conflicts between civilizations, the question is “what are you?” That is a given that cannot be changed’ (Huntington 1993: 27). Through the lens of civilisational conflict, China’s mere existence is considered an ontological threat to the West. Its pursuit of AI is similarly framed as a challenge to Western values. For example, the NSCAI report explicitly states that the ‘AI competition is also a values competition,’ framing China’s use of AI as a ‘chilling precedent’ for individual liberty and democratic norms (NSCAI 2021). While concerns over the Chinese government’s use of AI for the purposes of surveillance and repression are urgent and necessary, the NSCAI report entrenches the premise that civilisational conflict between the United States and China is both inevitable and just.
Third, the perceived clash of civilisations between China and the United States derives its affective and rhetorical power from long histories of anti-Asian racialisation in the West. The ‘politics of fear’ (Cave and ÓhÉigeartaigh 2018) that both produces and entrenches the nationalistic rhetoric of the AI arms race must be read through previous histories of racialised affect: specifically, the language and logics of the ‘Yellow Peril.’ The Western figuration of the ‘Yellow Peril’ frames Asian people and Asian nation-states as inherent threats to the sanctity of Western civilisation (Frayling 2014). For example, in 1971 Webster’s New International Dictionary defined the Yellow Peril in part as ‘a danger to Western Civilization held to arise from expansion of the power and influence of Oriental people’ (Frayling 2014: 10). AI nationalism’s politics of fear sticks so effectively to Asian countries, peoples, and bodies because of a long and deep archive of anti-Asian loathing (Ahmed 2015; Sohn 2008; Tchen & Yeats 2014).
The Yellow Perilism of contemporary AI nationalism reproduces a specific form of dehumanisation that associates Asianness with the machine (Cheng 2019). The extraction of Asiatic labour and the framing of the ‘coolie’ as the ‘ideal labourer’ under imperial capitalism produced and perpetuated the concept of the Asiatic body as machinic, insensitive, and suited to rote, routine labour (Schuller 2018: 14). Such ideas continue in the form of contemporary techno-Orientalism in the AI space, where the perception of China as a ‘human factory’ with a ‘vast, subaltern-like labor force’ continues to shape the United States’ queasy oscillation between fetishisation and fear in relation to China’s technological production (Roh et al. 2015: 4).
The ‘Yellow Peril’ is also an inherently gendered set of relations, and the artificial quality of the Asiatic subject is articulated through gendered discourses. Anne Anlin Cheng argues that the framing of the ‘yellow woman’ as an aesthetic and an ornament renders the ‘yellow woman’ especially close to the synthetic, to the extent that she posits, ‘Asiatic femininity has always been prosthetic…She is an (if not the) original cyborg’ (Cheng 2017, 2019). Through the dynamic interplay of gender, racialisation, nationalism, and racial capitalism, Asiatic (in)humanity is articulated with and through the spectre of artificiality (Cheng 2019): an intimacy with the machine that plays out in racialised discourses of AI nationalism.
How, then, do we address AI nationalism? First, we must examine AI nationalism within much longer histories of racialisation, nationalism, and racial capitalism. These interrogations of AI nationalism will offer us a richer understanding of how AI is racialised and how AI itself racialises. These investigations could ask: how does AI produce national, racialised, and ethnic identities? How can critiques of the Chinese government’s racist and genocidal policies avoid the distinctive collapsing of nation-state/people/civilisation that remains so central to the racialised construction of the ‘Yellow Peril’?
Second, we must consider more broadly the implications of nationalist thinking in the field of AI. Interrogations of AI nationalism tend to engage primarily with the language of the AI arms race. However, a narrow focus on the AI arms race obscures how AI bolsters a multitude of nationalist projects, including the use of AI to police and produce national borders. We must ask: how does AI reproduce and reinvent old nationalist projects? How does it reanimate traditional sites and spaces of nationalist violence? Perhaps these lines of questioning will illuminate AI nationalism’s racialised foundations and offer new ways of thinking about the relationship between AI, race, and nationalism.
Many thanks to Luke Strathmann, Amba Kak, Eleanor Drage, and Stephen Cave for their comments and advice on this piece.
References
Adas, M. (1990). Machines as the Measure of Men: Science, Technology, and Ideologies of Western Dominance. Ithaca: Cornell University Press.
Ahmed, S. (2015). The Cultural Politics of Emotion (2nd ed.). Edinburgh: Edinburgh University Press.
Cave, S. (2020). ‘The Problem with Intelligence: Its Value-Laden History and the Future of AI’. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 29–35. https://doi.org/10.1145/3375627.3375813
Cave, S., & ÓhÉigeartaigh, S. (2018). An AI Race for Strategic Advantage: Rhetoric and Risks. 36–40. https://doi.org/10.1145/3278721.3278780
Cheng, A. A. (2017). ‘The Ghost in the Ghost’, Los Angeles Review of Books. Retrieved 8 March 2021, from https://lareviewofbooks.org/article/the-ghost-in-the-ghost/
Cheng, A. A. (2019). Ornamentalism. New York: Oxford University Press.
Frayling, C. (2014). The Yellow Peril: Dr Fu Manchu & the Rise of Chinaphobia. London: Thames & Hudson.
Hogarth, I. (2018). ‘AI Nationalism’. Retrieved 30 March 2021, from https://www.ianhogarth.com/blog/2018/6/13/ai-nationalism
Huntington, S. P. (1993). ‘The Clash of Civilizations?’ Foreign Affairs, 72(3), 22–49.
Mozur, P., & Perlez, J. (2017). ‘China Tech Investment Flying Under the Radar, Pentagon Warns’, The New York Times. Retrieved 9 April 2021, from https://www.nytimes.com/2017/04/07/business/china-defense-start-ups-pentagon-technology.html
National Security Commission on Artificial Intelligence (2021). ‘Final Report’. Retrieved 9 April 2021, from https://assets.foleon.com/eu-west-2/uploads-7e3kk3/48187/nscai_full_report_digital.04d6b124173c.pdf
ÓhÉigeartaigh, S. S., Whittlestone, J., Liu, Y., Zeng, Y., & Liu, Z. (2020). ‘Overcoming Barriers to Cross-cultural Cooperation in AI Ethics and Governance’. Philosophy & Technology, 33(4), 571–93.
Roh, D. S., Huang, B., & Niu, G. A. (Eds.) (2015). Techno-Orientalism: Imagining Asia in Speculative Fiction, History, and Media. New Brunswick: Rutgers University Press.
Schuller, K. (2018). The Biopolitics of Feeling: Race, Sex, and Science in the Nineteenth Century. Durham: Duke University Press.
Sohn, S. H. (2008). ‘Introduction: Alien/Asian: Imagining the Racialized Future’. MELUS, 33(4), 5–22.
Tchen, J. K. W., & Yeats, D. (2013). Yellow Peril!: An Archive of Anti-Asian Fear. London: Verso Books.
Weheliye, A. G. (2014). Habeas Viscus: Racializing Assemblages, Biopolitics, and Black Feminist Theories of the Human. Durham: Duke University Press.