What is AI? Part 2, with Lucy Suchman | AI Now Salons
Jul 19, 2023
In the second of our two-part grounding conversation on “What is AI?”, Lucy Suchman draws connections between military logics, our own conception of intelligence, and how we map that conception onto AI as we know it.
How we conceptualize “intelligence” in neural networks and machine learning is crucial for how we understand this technology. AI does relatively well at identifying statistical patterns in closed-world data. But real, open-world environments are notoriously difficult for robotics and for machine learning writ large, so “engineering” the world these machines inhabit and draw data from becomes almost as important to their proper functioning as the algorithms underlying them. Ultimately, we need to understand the limitations and risks of AI systems, and acknowledge the aspects of the world that resist automation.
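To make the closed-world point concrete, here is a minimal sketch (ours, not from the conversation), assuming Python with NumPy and scikit-learn; every name and parameter in it is invented for illustration. A classifier that reliably picks up statistical patterns in the distribution it was trained on loses that reliability as soon as the world drifts away from that distribution.

```python
# Illustrative sketch only: a model finds patterns in a "closed world"
# (its training distribution) and degrades when the open world drifts.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, shift=0.0):
    """Two Gaussian classes; `shift` moves the whole world off-distribution."""
    X = np.vstack([
        rng.normal(loc=-1 + shift, scale=1.0, size=(n, 2)),  # class 0
        rng.normal(loc=+1 + shift, scale=1.0, size=(n, 2)),  # class 1
    ])
    y = np.array([0] * n + [1] * n)
    return X, y

X_train, y_train = sample(500)  # the closed world the model learns from
clf = LogisticRegression().fit(X_train, y_train)

X_closed, y_closed = sample(500)         # same distribution: patterns hold
X_open, y_open = sample(500, shift=3.0)  # drifted world: patterns break
print("closed-world accuracy:", clf.score(X_closed, y_closed))
print("open-world accuracy:  ", clf.score(X_open, y_open))
```

On the unchanged distribution the accuracy stays high; under the shift it collapses toward chance. That gap is one way to read the distinction above between closed worlds and the open worlds machines are asked to act in.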
Biography
Lucy Suchman is Professor Emerita at Lancaster University. Her research sits at the intersection of anthropology and feminist science and technology studies, focusing on cultural imaginaries and material practices of technology design. Lucy’s current research extends her longstanding critical engagement with the fields of artificial intelligence and human-computer interaction to the domain of contemporary militarism. She is concerned with the question of whose bodies are incorporated into military systems, how, and with what consequences for social justice and the possibility of a less violent world.
Highlights:
- Could you tell us how artificial intelligence in many ways originates from military imperatives? In what ways does its proposed use in defense contexts trouble some of the underlying claims about what AI is even capable of?
- Lucy: The relations between the military and computing go back to the beginnings of computing as a technical practice, and particularly since World War II, they’ve become more and more closely intertwined. Military logics rest on the premise that you can have control over war fighting. The idea of modern warfare is that wars can be conducted in a just and rational way, and this rests in turn on the idea that command and control has to operate according to a hierarchy in which those in command ideally have a kind of omniscient “God’s eye” overview of the battle space. And there has been a history of technological projects aimed at providing that kind of “God’s eye” view.
- Looking at this from the perspective of the battlefield really foregrounds the messiness of where data comes from, because data comes from the world. It comes from the messiness of human life.
- Lucy: The imaginary of what’s referred to in military discourse as ‘real-time situational awareness’ presupposes that you have translations from signals, whether those are signals generated through sensors of various kinds or so-called human intelligence (things picked up from chat, from other kinds of communications). It assumes that there’s some kind of instantaneous translation of those signals into information and into actionable intelligence. But of course, those signals are highly ambiguous. They’re being produced in the midst of the chaos and the horrors of war fighting. And then, as the speed intensifies with increased automation of the processing of signals and the making of data, the possibilities for judgment, for deliberation, for assessing the validity of the data, who’s creating it, where it comes from, and who’s doing the translations basically disappear, and problems of the reliability of the data really intensify. So there’s a whole set of questions for me, not only about what’s going on in the cloud, but also about the relation of those data corpora to the worlds from which the data are made and in which the results of that processing have effects.
- How is intelligence being conceptualized in these types of systems, or in the models that underlie them, like neural networks or deep learning? And how is this distinct from other types of AI systems, like robotics?
- Lucy: If we think about so-called neural networks, machine learning, the currently dominant forms of AI, in those cases again, intelligence is basically the name for these computational processes that are detecting statistically significant patterns over a corpus of data. It’s effectively a kind of closed or self-contained world that these systems are running over. Then of course robotics challenges that because robots require action, right? And robotics requires action in the open worlds that we actually live in. But if we think about robotics historically, it’s been successful to the extent that the worlds in which robots operate have been effectively closed. So you could think about the automated assembly line, where the world is engineered for the needs of the robot. Or you could think more recently about the Amazon warehouse, though again, it’s really important to see how crucial humans still are, all the humans that are needed in these kinds of spaces to fill the gaps, to be peripherals for input and output for the machines. In those cases, you’ve effectively engineered the world to make it safe for the robots, and then you incorporate humans to fill the gaps.
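Suchman’s point about engineering the world for the robot can also be sketched in a few lines of plain Python (again ours, purely illustrative, with hypothetical names throughout): a planner that is perfectly competent in a fixed, fully known grid walks straight into trouble the moment the world changes after the plan is made.

```python
# Illustrative sketch only: a plan made in a closed world fails in an open one.
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest path on a grid of 0 (free) / 1 (obstacle), or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:  # walk the predecessor chain back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

closed_world = [[0, 0, 0],
                [0, 1, 0],
                [0, 0, 0]]
plan = bfs_path(closed_world, (0, 0), (2, 2))
print("plan made in the closed world:", plan)

# The open world changes after planning: something is dropped on the route.
open_world = [row[:] for row in closed_world]
open_world[1][0] = 1
for step in plan:
    if open_world[step[0]][step[1]] == 1:
        print("collision at", step, "- the world no longer matches the plan")
        break
else:
    print("plan executed safely")
```

The two responses Suchman describes map directly onto this toy: either close the world back up (keep the warehouse floor predictable) or bring humans in to fill the gaps the plan cannot.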
Further reading:
- The Myth of Artificial Intelligence, Lucy Suchman and Meredith Whittaker
- Imaginaries of Omniscience: Automating Intelligence in the US Department of Defense, Lucy Suchman
- The Closed World: Computers and the Politics of Discourse in Cold War America, Paul Edwards
- Sorting Things Out: Classification and Its Consequences, Geoffrey Bowker and Susan Leigh Star