On Tools and Ploughshares: A Review of “Chapter 7: Power” in Kate Crawford’s “Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence”

Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.

Preface: Approaching This Review

“We are told to focus on the innovative nature of the method rather than on what is primary: the purpose of the thing itself” (Crawford, 2021, p. 214)

Kate Crawford’s (2021) Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence takes aim at the hype, unfettered enthusiasm, and air of inevitability that pervade much popular discourse on artificial intelligence (AI) – knocking it down more than a few pegs to its more earthly reality. It does so with an air of seriousness that reflects the gravity of its critiques, and of the problems inherent in the world today. Reading Crawford’s text, I found myself drawn into her sustained critique, which perhaps saturates my response even as I attempt to distinguish my own conclusions from hers in this review. After several years of trying to engage with and understand AI systems and the larger tech ecosystem, and of hanging out in academic and community practice spaces that often celebrate innovation, entrepreneurship, and the promise of new technologies, it was a pleasure to engage with a much harsher perspective on the world today. As a doctoral student, a social work educator, a burgeoning writer, and a resident of a major metropolitan area, I find AI seemingly everywhere, and seemingly inescapable. It is refreshing to be reminded that this does not have to be the case. At the same time, Crawford’s book left me wanting more direction towards alternatives to AI hegemony, and skeptical of her outright dismissal of possibilities to mitigate potential harms within AI infrastructures and repurpose AI towards more egalitarian ends.

Today’s world is saturated with artificial intelligence, both aspirational and actually existing technologies. While its potential promises and perils are hotly debated across geopolitical and ideological lines, its status as a technology thrust onto the world stage is not. Just as Mark Fisher (2009) argued that “capitalist realism” was the mantra of the mid-2000s, Dan McQuillan (2022) argues that today’s moment is one of “AI realism” – where there is no conceivable alternative. Contemporary society is saturated with algorithms (Burrell & Fourcade, 2021) and governed by algorithmic logics (Daston, 2022). Caught up in a race for the latest and greatest the digital economy can offer, alliances of public, private, and civil society actors advance what urban sociologist Sharon Zukin (2020) refers to as the “innovation complex” – the forces of global capital aligning around advanced technologies in an unfettered pursuit of control over innovation, while a new gilded age of inequality arises in the background. The innovation complex promises a golden age of scientific discovery, as AI systems work to “unlock” novel patterns in massive data sets, and a possible new era of human flourishing. AI development and diffusion play out further through geopolitical competition, an “AI race” between powerful industrialized nations such as the United States and China. The push for more and better AI is on. Innovate or be left behind.

The latest fetish object in a long line of technology hype cycles, AI captivates the imagination of technological futures in public and private discursive spheres, academic research agendas, and venture capital investment portfolios. Speculative finance and speculative fiction have a close relationship in AI systems. Yet AI has a long history, one that predates current computational technologies and global capital accumulation patterns. The earliest visions of what we now call “AI” trace back to Leibniz’s “great instrument of reason” (Gray, 2016). In the 19th century, with slavery’s abolition in the British Empire as a backdrop, Charles Babbage imagined an autonomous calculating machine, as a whole host of scientific innovations sought new ways to measure, manage, and discipline a growing ‘voluntary’ workforce (Whittaker, 2023). The most recent vision of AI, born in cybernetics and nascent computer science, came out of an all-male conference of researchers at Dartmouth in 1956, and a culture where tedious labor and care work were envisioned as the most automatable (Broussard, 2018). AI’s current wave, born of neural nets, big data, and transformers, is much more recent, owing to a series of scientific and technological discoveries in the early 2000s (Webb, 2019). For all the grandiose visions that brought them into being, AI systems are little more than advanced computational mathematics – what Alex Hanna (2022) refers to as “mathy maths”. They are calculating, data-crunching machines – performing far more calculations than a single human mind can process, hyper-specializing in narrow tasks. The dream – hopeful or nightmarish – of “artificial general intelligence” remains unfulfilled, but it sits on the agendas, mission statements, and business plans of many AI companies today.

AI is also embedded within social and material relations. It cannot exist without the hardware, sociotechnical systems, and infrastructures that support it, and from which it emerged. One cannot understand AI without taking into account the relations of power and domination that support it, and which it reinforces. This is a central argument of the concluding chapter (extended in the coda) of Crawford’s (2021) text. Nearly three years ago, when the book came out, I marked it in my soon-to-grow-out-of-control “to-read” list. In the span of a lifetime, let alone the history of thought, three years is hardly anything. Yet it feels like a long time in terms of technology development and the social discourse around it (certainly, it was before the term “prompt engineering” entered the public lexicon). Reading Crawford’s text from the perspective of someone who follows the world of emerging technology – and who often experiences information overload just trying to stay abreast of recent trends and developments – challenged me to consider its contributions and impacts on critical scholarship on AI since then. As Crawford’s book demonstrates, refusing this perpetual presentism, a kind of ahistorical teleology of the always-imminent emergence of new technology beating to the drum of progress, is not easy. It is also an urgent task of our time.

Crawford maps the flows of AI onto global ecological trends and geopolitical systems, arguing that rather than enhancing humanity and freeing it from toil and poverty, AI systems in their current formations risk further calcifying and intensifying the very things they promise to deliver humanity from. This approach raises the question of who bears the costs when AI systems fail to deliver on the promises associated with the pedestal on which they are so often placed. Synthesizing more than a decade of Crawford’s work at the AI Now Institute at NYU, the book hopes to spark public conversation about the consequences and possible directions of AI. The present moment demands this kind of critical questioning. Yet many good and interesting scholarly books about AI have come out in recent years; why pick Crawford’s? What does it add to public conversations about the risks and consequences of AI’s increased role in society and in the imaginations of its builders and financiers? What opportunities does it open up for practice and scholarship aimed at building a more just, equitable, and sustainable world? And what is missing from Crawford’s account?

Beyond the haze of enchanted determinism: What is to be done (about artificial intelligence today)?

“AI systems are seen as enchanted, beyond the known world, yet deterministic in that they discover patterns that can be applied with predictive certainty to everyday life” (p. 213)

Crawford’s concluding chapter seeks to peer beyond the haze of “enchanted determinism” and ask who benefits and who loses from AI systems. In essence, enchanted determinism is a kind of mystifying black box; it also tends to essentialize humanity as a whole, transformed into a mixture of statistical averages (and thus into a kind of utilitarian view of AI systems). While many discourses on AI frame it in relation to humanity as a whole, Crawford looks at AI as a planetary system, operating at multiple variegated levels and entry points into global geopolitical orders – namely, late capitalism and the enduring neocolonial legacies of uneven development, resource extraction infrastructures, and intensifying climate crises. She does so to address the question of what can be done about AI, and how technosolutionism might be resisted. The chapter’s focus on power brings home Crawford’s argument that ultimately, what is at stake with AI is the power of its dominant narratives, its purported ascendance, and the enduring power found in collective refusals of AI determinism. What these refusals might look like, and what alternatives to hegemonic AI imaginaries might be, are left, at the end, for the reader to continue building.

Crawford opens her concluding chapter with a comparison of two diagrams showing facets of AI systems, and with the observation that the very idea of ‘AI’ is often organized around some sort of visual expression. The first is an abstraction of the AI system AlphaGo’s machinery as a game optimization device; the second is a schematic of the urban infrastructure required to support data servers. Diagrams evoke vision, imagination, and possibilities; they also reveal connections between parts that may not otherwise seem apparent. AlphaGo’s 2016 victory in a five-game match over Lee Sedol, the world’s reigning Go champion, is often trumpeted as an ‘advance’ in a march towards greater and more powerful AI systems, and a watershed moment at which world leaders turned their attention to AI. The second diagram, a schematic of Google’s first fully owned data center, instead emphasizes the dominance the thing takes in its environment and the relations of power that support it – both in terms of spatial arrangement and natural resources, such as the massive water reserves it required. It reveals the infrastructure and resources needed to support ‘the cloud’, a vision captured strongly in Mel Hogan’s (2015, 2018, 2021) extensive work on data centers.

The chapter then winds through a short summary of the book’s various chapter focal points (Earth, Labor, Data, Classification, Affect, and State), pushing its argument that AI is not transcendent, immaterial, or ahistorical, but rather materially grounded in the present. AI is deeply embedded within government research funding streams, which often prioritize (and direct significant budgets towards) military, private security, and law enforcement purposes; the violence of state and corporate apparatuses of power is inseparable from the contexts in which AI is funded. To Crawford, then, this is where the Atlas leads us – to the infrastructure that supports, and long precedes, the “long-established business model of extracting value from the commons and avoiding restitution for the lasting damage” (p. 218). This is the raison d’être of the AI apparatus. In that regard, it is not unlike Heidegger’s (1977) Gestell, or enframing, as the essence of modern technology, which transforms the world into a “standing reserve”.

Crawford briefly examines the question of if, and how, to democratize AI – to repurpose it for means other than surveillance or control, for peace. I find this section the weakest in the chapter, though the recent debacle over OpenAI’s firing, then rehiring, of its charismatic CEO Sam Altman seems ample evidence that, more broadly, corporations cannot be trusted to regulate themselves, and that hybrid governance structures (OpenAI is a non-profit with a for-profit subsidiary) are incredibly vulnerable to funder pressure (Ebrahim, 2023). A cliche quote from Audre Lorde (“the master’s tools will never dismantle the master’s house”) dismisses the possibility of alternative uses of AI outright, foreclosing many different ways of thinking about AI. Crawford is highly skeptical of AI ethics discourse and practice, which in the tech world often circulate in parallel with, and sometimes overlap, discourses of governance and regulation. She rightly points out that these discourses originate in WEIRD (Western, Educated, Industrialized, Rich, and Democratic) societies, where the ‘benefits’ of AI are often perceived while the archipelagos of extraction that the earlier sections of the book explore remain invisible. Neocolonial resource extraction, an everyday process that sustains metropoles like New York City and San Francisco, is invisible to their residents. Certainly, as well, it is an externality to the calculations of racing to get ahead on AI in public policy and economic development. The consequences of a logic that treats everything as potential data points and value-extractable assets are clear. Conditions of possibility for ethical AI might need to start elsewhere, grounded in something more reflective of a wider view of humanity, or in a broader ontology of more-than-human relations.

Crawford’s response to AI is a call for counter-power: organized resistance and refusal against AI’s enchanted determinism, charting an alternative course for history and humanity. She alludes to this without much of a program or strategy, proclaiming that “the calls for labor, climate, and data justice are at their most powerful when they are united” (p. 227) but offering little vision for how to achieve such unity. She offers some glimmers of local, activist-driven initiatives to stop mass surveillance efforts and push for greater accountability in government data systems. Likewise, one could look to mid-2010s waves of activism among tech workers focused on cancelling contracts with Immigration and Customs Enforcement and the Israel Defense Forces. We are seeing a resurgence of such activism in current tech worker organizing around the war in Gaza, intensified by growing evidence indicating that AI systems used in IDF operations are contributing to mass civilian deaths (Tan et al., 2023; Haskins, 2024; Gedeon & Miller, 2024). Discourse calling for cross-issue unity is not new; various factions of the left and progressive causes have long sought coordination that could lead to effective power and tangible changes in policy, economic structure, and everyday life. Our present moment urges a stronger consideration of how to achieve this.

Counter Discourses on AI: Imagining Otherwise

“Refusal requires rejecting the idea that the same tools that serve capital, militaries, and police are also fit to transform schools, hospitals, cities, and ecologies as though they were value neutral calculators that can be applied everywhere” (p. 227)

Central to Crawford’s argument is the claim that discursive and imaginative power obscures the everyday material realities of AI, which the book covered in greater detail in earlier chapters. A dominant framing of AI in much popular and technology sector discourse emphasizes its suprahuman qualities, positing it as a form of intelligence beyond human comprehension, limitations, and thus material constraints. If not already realized as such, then its potential to be so is just waiting to be found, or to emerge. AI takes on a deterministic narrative as its own agent in history, bringing about its inevitability through a chronology of innovation, revolutions, and progress. Passive acceptance of narratives that mystify AI, be they “doomer” or “accelerationist”, ignores human agency and the possibility of refusal – of thinking or living otherwise, with or without AI. These frames are “a profoundly ahistorical view that locates power solely within the technology itself” (p. 214). This is part of the ‘power’ of AI: its mystification and the magical thinking embodied in it, a kind of “technological faith” (Johnston, 2020) that forms the foundation of much of industry and mainstream thought about AI, “where technology is assumed, and everything must adapt” (Crawford, 2021, p. 226). Such thinking is deeply embedded in the professional training and dominant ideologies of the engineering field, and in the larger Enlightenment project of technical rationality, where technology is always framed in the service of science, reason, and a (fictitious) universal human subject.

Challenging this determinism means asking why, or imagining otherwise; it also means asking, quite forcefully, when terms like ‘AI solutions’ are used casually and seemingly without reference to a defined problem – “Solutions to what?” For there are many real and urgent problems humanity and the world face today, and a newly tuned ChatGPT model is not going to solve them. Struggles against AI determinism, and for alternative technological choices, are necessary for defending the commons – the earth, the dignity of life, and the value of non-instrumental ways of living. But what that struggle might look like, or signs of such a thing emerging, remain ambiguous. Life outside of computation and datafication, similarly, seems increasingly difficult to imagine. Crawford’s references to refusal, resistance, and the multitude harken to Hardt and Negri’s writings on the alter-globalization movement and the much older intellectual tradition of operaismo that they draw from (Hardt & Negri, 2009). This thinking draws on the long autonomist tradition of valuing refusal and the withdrawal of labor and cooperation as key strategies, locating power at the heart of production and among the workers who keep capitalism alive. But what does it mean to refuse to produce, to participate in, and to perpetuate the many harms of AI systems? What are the levers of power?

Crawford’s argument is a decidedly pessimistic take on AI systems’ limitations and possibilities; it is also optimistic in that she believes humanity can chart a different course than the one AI’s enchanted determinism leads us towards. But she takes a strong position that AI as it currently exists is too embedded within global power structures to be turned in a different direction, or at least to be easily repurposed. This materialist account of AI differs significantly from loftier, more idealist discourse on AI’s “alignment problem”, which Weatherby (2023) refers to as “a perfect illustration of ideology”, where language and technologies are framed as value-neutral, and thus tech companies set themselves up both to define and to solve the scope of problems around AI. In taking a similarly critical position on this proposition, Crawford draws a line between refusal and complicity. I think less pessimistically and less absolutely – perhaps as a coping mechanism against what might otherwise feel like an immense sense of ennui and despair at the AI-ification of everyday life, and the endless proliferation of AI companions and chatbots that appear in seemingly every tool I use to conduct my scholarly work. I also think, in the spirit of operaismo, that refusal of subjugation to capital is also an act of asserting other forms of life – a gesture towards an alternative social order. Social movements and countercultures build collective solidarities that can provide experiences of such alternatives, prefiguring new forms of social life.

I do not accept the inevitability of AI determinism, but it seems clear that the volume of resources invested in and committed to the mass diffusion of AI systems into more and more facets of everyday life is not likely to change without a major catastrophe or economic shock. Unlike Crawford, I am also open to the possibility that AI diffusion and advancement could result in some tangible improvements to life, however unequally distributed those changes may be. From a purely ‘pragmatic’ standpoint, using AI systems to manage increasing volumes of information and time demands seems to make a lot of sense – something I certainly experience as a PhD student. There are considerable efforts to regulate and rein in some of the worst social effects of unfettered AI proliferation; for instance, the EU’s recently passed AI Act is a model piece of legislation, albeit one with its own carve-outs for border security and law enforcement (European Union, 2024). As well, projects such as the Public AI Network work towards the possibility of publicly governed AI systems that might serve as real social goods (Public AI Network, 2024). So, returning to Crawford’s use of Lorde to foreclose possibilities of repurposing AI systems, I offer another cliche: How might we transform the swords of AI domination into ploughshares for cultivating the global commons? This is a perspective I am particularly invested in through my scholarship, where instrumentality is part of the grand bargain. How is AI being used to subvert and redirect existing powers? What might AI systems that advance social, economic, and ecological justice look like, and who is working on them today? This is the path I choose to explore from here on out.

Peer reviewed by Andi Sciacca.

References

Burrell, J., & Fourcade, M. (2021). The society of algorithms. Annual Review of Sociology, 47, 213-237.

Broussard, M. (2018). Artificial unintelligence: How computers misunderstand the world. MIT Press.

Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.

Crawford, K., & Joler, V. (2018). Anatomy of an AI System.

Daston, L. (2022). Rules: A short history of what we live by. Princeton University Press.

Ebrahim, A. (2023). OpenAI is a nonprofit-corporate hybrid: A management expert explains how this model works − and how it fueled the tumult around CEO Sam Altman’s short-lived ouster. The Conversation. https://theconversation.com/openai-is-a-nonprofit-corporate-hybrid-a-management-expert-explains-how-this-model-works-and-how-it-fueled-the-tumult-around-ceo-sam-altmans-short-lived-ouster-218340 

European Union. (2024). The Act Texts | EU Artificial Intelligence Act. https://artificialintelligenceact.eu/the-act/

Fisher, M. (2009). Capitalist Realism: Is There No Alternative?. John Hunt Publishing.

Gedeon, G., & Miller, M. (2024). Israel under pressure to justify AI use in Gaza. Politico. https://www.politico.com/news/2024/03/03/israel-ai-warfare-gaza-00144491

Google. (2024). “Artificial Intelligence” Search Trends. https://trends.google.com/trends/explore?date=2019-03-16%202024-03-16&q=%2Fm%2F0mkz&hl=en

Gray, J. (2016). “Let us Calculate!”: Leibniz, Llull, and the Computational Imagination. The Public Domain Review. https://publicdomainreview.org/essay/let-us-calculate-leibniz-llull-and-the-computational-imagination/

Hanna, A. (2022). [Tweet]. Twitter. https://twitter.com/alexhanna/status/1576786561915461634?lang=en

Hardt, M., & Negri, A. (2009). Commonwealth. Harvard University Press.

Haskins, C. (2024). Over 600 Google workers urge the company to cut ties with Israeli tech conference. Wired. https://www.wired.com/story/google-workers-letter-cut-ties-israeli-tech-conference/

Heidegger, M. (1977). The question concerning technology. Harper & Row.

Hogan, M. (2021). The data center industrial complex. Saturation: An Elemental Politics, 283-305.

Hogan, M. (2018). Big data ecologies. Ephemera, 18(3), 631.

Hogan, M. (2015). Data flows and water woes: The Utah data center. Big Data & Society, 2(2), 2053951715592429.

Johnston, S. F. (2020). Techno-fixers: Origins and implications of technological faith. McGill-Queen’s University Press.

McQuillan, D. (2022). Resisting AI: an anti-fascist approach to artificial intelligence. Policy Press.

Mislove, A. (2023). Red-teaming Large Language Models to identify novel AI risks. White House Office of Science and Technology Policy. https://www.whitehouse.gov/ostp/news-updates/2023/08/29/red-teaming-large-language-models-to-identify-novel-ai-risks/ 

Public AI Network. (2024). Public AI: AI As Public Infrastructure. https://publicai.network/

Tan, J. S., Nedzhvetskaya, N., & Mazo, E. (2023). Tech worker organizing: Understanding the shift from occupational to labor activism. arXiv preprint arXiv:2307.15790.

Weatherby, L. (2023). Metaphysics in the C Suite. Jacobin. https://jacobin.com/2023/11/openai-sam-altman-chatgpt-artificial-intelligence-big-tech-alignment

Webb, A. (2019). The big nine: How the tech titans and their thinking machines could warp humanity. Hachette UK.

Whittaker, M. (2023). Origin stories: Plantations, computers, and industrial control. Logic(s), 19.

Zukin, S. (2020). The innovation complex: Cities, tech, and the new economy. Oxford University Press.