artificial intelligence without understanding becomes artificial ignorance at scale

In my book The Other Side of Empathy I have a section called artificial ignorance. One of my core contentions with our over-reliance on predictive tools to help us think or show us who we are is that our usage of these tools assumes the past has the answers not just about what we need now, but also about what we will need in the future. This does not account for things like changes in culture, practices, beliefs, etc. And, if this moment can teach us anything, it is that history is contested and changeable, depending on who lived it, who is recording it, who is sharing it, and who is receiving it. That says nothing of the format or medium through which we receive history. All of these things have an impact, and the current trajectory of AI seems particularly catastrophic considering how quickly people are implementing these tools and wanting to be in the conversation just so they won't be left behind.

One of the things I am fascinated by is why the conversations about emerging technologies always tend to be ahistorical. AI in particular seems to be without an origin story or critical engagements. Where I think I have landed is that there is a two-pronged reason why new technologies, especially electronics-based ones, have to be described ahistorically. The first is that the development time frame is typically long, and early iterations of the technology often have bugs. If the technology is interesting enough, people will continue to work out the bugs until the technology feels stable enough to enter society. Once the stable, idealized version is in society, something magical or cursed can happen, depending on your orientation towards technology. Society changes the technology, or rather, the potential of the technology, as people begin to integrate it into tasks where a technological intervention did not exist before. At that point, technology and society begin to change together. I will not say evolve, because technologies do not evolve. Rather, they are shaped, more often than not, by a small group of people who identify a new opportunity that, for whatever reason, tends to cannibalize other sectors and industries to remediate our day-to-day lives through the technological tool. At that point, as a society, we lose the rote knowledge of how to perform the task without the technology. This concept is illustrated abstractly in the McLuhanism: a change in scale is a change in kind.

There is a limit to this McLuhanist approach, though. While the technology has changed, the root intentions that the creators and refiners of the technology have in mind do not leave because the technology has made its societal debut and been accepted. Rather, society then amplifies those intentions and assumptions without realizing that is what it is doing. In that sense, the only new technology is the newly erased past, which also includes the early debates about the technology and the early alarms that were sounded. Personally, I find those to be more fertile ground for beginning to understand my own reactions, especially as I am living in a society that quickly becomes ignorant of the past.

Then I worry about what these tools are actually doing when I think about how we describe what they are good at, and what those actions and processes mean for our belief in human intelligence and potential. The idea that there is a universal or essential form of intelligence is inherently baked into AI. I am starting to do some reading to figure out what that imagined "essential" aspect may be. So far, the aspects of the equation I have floating around in my head are: take in enough information quickly, identify proximate concepts, and synthesize at the speed of electronic communication. And, even if the information is not the truth, correct, or accurate, the information needs to pass as fact-based to the lay human reader.

So then, it stands to reason that generative AI is the technology du jour, and is likely transforming how we exist together and with information, because generative AI can produce what we think good work looks like. That includes tone, details, and format. What AI lacks is contextual clues outside of what has been fed in. Instead, AI calculates at what seems like the speed of light, creating the illusion of cognitive process for the human interpreting the results. We, the people, receive content that, due to the machine's ability to determine the most likely probabilities, makes sense to us. The machine has no thoughts or feelings about the rightness or wrongness of our response, but the system may prompt us to submit more of our own intelligence by evaluating whether the machine's output was good, so the next calculation can get more thumbs up.

This all takes me back to where I started. Generative artificial intelligence produces artificial ignorance in humans when we think critical thought went into the production of a machine-generated result. We've been left with no space for critical engagement, and instead are being left with overly intelligent magic 8 balls that can generate text and images and so much more in response to every thought we have. This current generative moment feels like it is both stripping away our ability to create worlds that we haven't even imagined yet and, especially as historical documents disappear on a whim, being used to ensure we only have access to the right version of the past, one that leads us to a predetermined future.