In its January 2024 AI forecast, AI think tank and Forbes contributor Cognitive World declared that, according to Google Trends, “even with all of the hype that AI has gotten in the past decade or so,” internet searches around the topic have “positively exploded in the search interest in the past twelve months” (Schmelzer 2024). Notably, Schmelzer indicates that despite (generative) AI’s persistent growth across old and new software and applications alike, its integration is “still very sluggish . . . in enterprise and government applications,” and this sentiment is evident elsewhere on Google itself. My own recent Google search for “AI and the military,” for instance, yielded the following “Top stories”:
According to Google’s Publisher Center’s help page, “Top stories” is an algorithmically generated section that appears within a given Google search whenever Google detects that a search is news-oriented. And as this screengrab demonstrates, major news coverage of AI currently suggests that the adoption of these technologies is progressively expanding from the general-purpose to the militaristic. While this may be true for ChatGPT and other recently developed large language models (LLMs), Kate Crawford’s The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence argues that both the history and field of AI have been intrinsically tied to the military since as early as the 1950s, and Crawford’s sixth chapter takes up this deep-rooted relationship to power more explicitly. In doing so, this chapter asks: how do the logics and tools of AI move between and within military, civilian, and tech spheres, and in what ways do AI’s “twin moves of abstraction and extraction” explored throughout the text govern the myriad relationships between all three (p. 18)?
Though Crawford cites Dartmouth’s 1956 AI Summer Research Project—which was partially funded by the Office of Naval Research—as (one of) the first financial links between AI research and the U.S. military, the chapter’s historical account of these ties largely focuses on events of the 2010s.
For instance, after outlining the Edward Snowden archive, Crawford delves into former U.S. Secretary of Defense Ash Carter (2015–2017) and his connections to Silicon Valley through what he called the “Third Offset Strategy.” Whereas the First and Second Offsets generally indicate, respectively, the U.S. adoption of nuclear weapons in the 1950s and the development of computer-led weaponry and reconnaissance, the Third Offset entails the military’s infiltration and “[exploitation of] all advances in artificial intelligence and autonomy” that the commercial tech sector presumably had to offer (p. 188).
Crawford foregrounds the 2017 contractual relationship between Google and Project Maven, the Department of Defense’s Algorithmic Warfare Cross-Functional Team, as one of the first examples—and failures—of the Third Offset strategy. Though the contract between the two entities guaranteed Project Maven access to Google’s object recognition technologies, the extent of the project’s direct relation to warfare was kept from Google employees. It was only after over 3,000 employees signed a letter in protest that Google severed ties with Project Maven. While this fallout led Google to produce its own code of Artificial Intelligence Principles (for the public) on the one hand, it also heightened the company’s need to ‘prove’ its American-driven principles (to the state) on the other. Google has thus continued to work with the U.S. Defense Department on military work that, contractually speaking, falls outside of weapon production.
The significance of this particular case study is twofold. Thematically, Google’s involvement with Project Maven best represents the progressive shifts in both corporate objectives and AI discourse: instead of focusing on whether AI should be used in warfare at all, Crawford argues, current debates across sectors are instead distracted by matters of technical accuracy and classification as tech companies continue to integrate themselves with the nation-state.
At the level of the text, Google’s inner workings constitute one of Crawford’s most consistent structural throughlines: from the proximity of its head office to NASA’s Moffett Federal Airfield (Chapter One: Earth) and its technological attempts to control labor and private time (Chapter Two: Labor), to its attempts to at once “articulate [U.S.] patriotism as policy” (p. 192) and extend extractive mining practices into the solar system (Coda: Space). Google’s vertical trajectories—from Earth’s interior to space, from corporate to state to public logics—thus serve as Crawford’s example par excellence, rendering AI as an all-encompassing, dominant, and “fundamentally political” atlas that (in)visibly determines how the world itself is measured, defined, and controlled (pp. 9–11).
To further solidify this point, the rest of Chapter Six zooms in on what Palantir cofounder Peter Thiel calls the “in-between space,” wherein commercially produced, military-styled tools are made available to private and public entities for a price (p. 193). Palantir’s machine learning systems predominantly extract data and aid in the evaluation of people and assets. Although tech produced by Palantir, Vigilant Solutions, and other companies was originally produced with the NSA, ICE, and other national agencies in mind, this software has since made its way into local supermarkets and police stations alongside other forms of big data surveillance that are, according to Crawford, “extralegal by design” (p. 185). Consequently, individuals who have been historically subjected to a higher level of surveillance are surveilled even more, and “inequity is not only deepened but tech-washed” (p. 198).
While Palantir’s data-collecting phone app, Vigilant’s automatic license-plate recognition (ALPR) cameras, and other technologies certainly point to these inequities, these examples primarily lend themselves to Crawford’s account of today’s unique “de facto privatization of public surveillance,” wherein state entities merely buy into surveillance systems that they do not directly oversee (p. 200).
To some degree, then, this crucial notion of tech-washed inequity could arguably benefit from further (historical) contextualization and analysis, particularly as it recalls a longer technological tradition of what Simone Browne calls “racializing surveillance.” Browne writes:
To say that racializing surveillance is a technology of social control is not to take this form of surveillance as involving a fixed set of practices that maintain a racial order of things. Instead, it suggests that how things get ordered racially by way of surveillance depends on space and time and is subject to change, but most often upholds negating strategies that first accompanied European colonial expansion and transatlantic slavery that sought to structure social relations and institutions in ways that privilege whiteness.
Simone Browne, Dark Matters: On the Surveillance of Blackness, 17
How do the discourses around AI’s technical accuracy and optimization potentially uphold this longer lineage of technologically driven inequity, or even intentionally distance users from it? Is tech-washed inequity yet another iteration of racializing surveillance, merely dependent on today’s “space and time,” or is something else going on here?
While The Atlas of AI considers similar questions and themes in more detail elsewhere (Browne’s other work, for instance, appears in Chapter Four), this chapter’s extensive attention to its concrete case studies leaves little room for critical or speculative engagement on behalf of the reader; these epistemological routes are instead reserved for the text’s concluding chapter and coda.
That said, by ending this chapter at AI’s very real point of the “in-between,” as that which is situated neither below nor above but rather simultaneously among various private, state, and public positions, the text productively challenges its readers (particularly those who were previously unfamiliar with these governmental connections) to (re)consider AI’s extensive permeation of, and relationship with, militaristic forces. Though headlines like those highlighted in Google’s Top stories continue to speculate on the AI futures to come, as Crawford reminds us, the age of AI infiltration is, and has been, already here.
Peer Reviewed by Rebecca Stuch
References
Browne, Simone. Dark Matters: On the Surveillance of Blackness (Durham, NC: Duke University Press, 2015).
“Dartmouth Summer Research Project: The Birth of Artificial Intelligence,” History of Data Science, September 30, 2021. https://www.historyofdatascience.com/dartmouth-summer-research-project-the-birth-of-artificial-intelligence/.
Schmelzer, Ron. “Looking Ahead to AI in 2024,” Forbes, January 2, 2024. https://www.forbes.com/sites/cognitiveworld/2024/01/02/looking-ahead-to-ai-in-2024/?sh=1bd61a573a6a.
Simonite, Tom. “3 Years After the Project Maven Uproar, Google Cozies to the Pentagon,” Wired, November 18, 2021. https://www.wired.com/story/3-years-maven-uproar-google-warms-pentagon/.
Tucker, Patrick. “New Google Division Will Take Aim at Pentagon Battle-Network Contracts,” Defense One, June 28, 2022. https://www.defenseone.com/technology/2022/06/new-google-division-will-take-aim-pentagon-battle-network-contracts/368691/.
Vincent, James. “Artificial Intelligence is Going to Supercharge Surveillance,” The Verge, January 23, 2018. https://www.theverge.com/2018/1/23/16907238/artificial-intelligence-surveillance-cameras-security.
Woodman, Spencer. “Palantir Provides the Engine for Donald Trump’s Deportation Machine,” The Intercept, March 2, 2017. https://theintercept.com/2017/03/02/palantir-provides-the-engine-for-donald-trumps-deportation-machine/.