Around me, many discussions about AI and the classroom have focused on students' use of ChatGPT to write their essays. I think I have graded a few final papers created through AI. I say "I think" because I don't have a way to prove it yet, but every time, something felt off. The tone or other aspects of the writing would differ from what I knew of my students, but most importantly, the cohesiveness of the content was different. In any case, I have not observed massive use of it, as far as I know. I have mentioned a few times to my students that I don't recommend using ChatGPT for final essays if it is not part of the guidelines, but I had not yet used ChatGPT or AI in class. Some of the students already struggle to figure out which sources on the internet are solid and which are questionable, and AI, whilst easy to use, is also a tool that needs to be approached with a basic understanding of how it works and what it can do. I decided I wanted to try to use it in a way that would invite some critical thinking about the course content and about the tool itself.
I have been teaching World Regional Geography at Hunter College for a few years now. It is an introductory course about world regions, open to students from all majors. There are 90 students in the classroom, most of whom have never taken a geography course, and this semester I am teaching the online section. They are New York students, meaning they have a unique exposure to the world: many of them come from migrant families, and they all interact daily with the many communities of the city. I decided to work from that standpoint and to ground the knowledge we build together in the many things they already know about the world from their own place. I also created a series of icebreakers to introduce each new region. This time, I decided to start the work on Sub-Saharan Africa with an exercise using AI. The guidelines were as follows:
In groups of 4, using the AI image generator DALL-E on Canva, observe the image obtained from one of the following prompts of your choice: https://www.canva.com/your-apps/ai-powered
• A group of people in a small town of Africa
• A busy street in a big city of Africa
• Students at their school in a city of Africa
• A hospital in a big city of Africa
• A group of entrepreneurs in a bar in Africa
Describe the image created by the generator. What elements give us a sense of place? How would you qualify these elements? Are they neutral? What do they convey about preconceived ideas about Africa?
One of the keys to the exercise is to keep the prompts very open, going against the optimal use of AI image generators, where clarity and a detailed description of the image you expect are the rule (see Parisa Setayesh's post about creating the new HASTAC Banner with AI). Here, we are interested in seeing what image the AI produces from a very open instruction. Students worked in groups in their breakout rooms, and their first step was to understand the user interface. We came back together as a whole class once each group had generated its image. Volunteer groups shared the images they had created, and we began looking at them critically.
In all cases, the generator created images in which the sense of Africa was conveyed by elements of poverty or "traditional" settings. We started working more closely on the last prompt, "A group of entrepreneurs in a bar in Africa." One group shared the image they had generated with DALL-E, and I asked the class to describe precisely what the image was showing. Again, this is a little counterintuitive, since AI needs extremely detailed prompts to generate the image one wants, but working the opposite way, with a very wide prompt and precise observation of the generator's rendition, is a great exercise for understanding how visual meaning is built. The image showed a group of Black people in "traditional African clothing" (itself a broad generalization) around a very plain wooden table. The bar looked like a precarious wooden hut. Students began to reflect on what kinds of biases the image conveyed, and we compared them with other narratives about Africa as a continent, contrasting them with facts: Africa is the planet's youngest continent demographically, with some of the fastest-growing economies and massive cities like Lagos that are economic, political, and cultural hubs. Neither the bar nor the people depicted reflected that.
I then suggested an experiment, and while sharing my screen, we generated a new image with DALL-E. This time, the prompt was "a group of entrepreneurs in a bar in the USA." The image showed a group of white people in suits in a modern bar, with many different liquors and a US flag hanging behind them. Finally, we tried a third prompt and asked the generator to create an image of "a group of entrepreneurs in a bar." This time, the image showed three white men in suits in a modern bar with plenty of liquor options behind them, laughing while looking at a white tablet.
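For readers who would like to rerun this comparison outside of Canva's interface, here is a minimal sketch in Python, assuming access to OpenAI's image API and an API key in the environment; in class we used the DALL-E integration on Canva, not code, so this is only one possible way to reproduce the exercise.

```python
# Minimal sketch: generate one image per prompt and print the resulting URLs
# so the three renditions can be compared side by side.
# Assumes the openai Python package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "A group of entrepreneurs in a bar in Africa",
    "A group of entrepreneurs in a bar in the USA",
    "A group of entrepreneurs in a bar",
]

for prompt in prompts:
    result = client.images.generate(
        model="dall-e-3",   # assumed model name; any available image model works
        prompt=prompt,
        n=1,
        size="1024x1024",
    )
    print(prompt, "->", result.data[0].url)
```

Keeping the prompts deliberately underspecified, rather than adding the detail that usually improves results, is what surfaces the generator's default assumptions.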
Students immediately noticed how race and gender were determined by the AI generator. The more abstract the prompt about the entrepreneurs and the location, the more the people were white, male, and wearing suits, which could codify a higher class. They remarked that the "neutral" prompt seemed to produce men, whiteness, and a wealthy environment: the bar in Africa was rendered as very traditional and humble-looking, in contrast with the similar-looking bars in the US image and in the last image, where no place was defined. We then discussed how AI creates images based on existing images and how this can reproduce existing biases in our society regarding race, gender, and class. We ended the exercise by commenting on some of the strange assemblages created by the generator: some people looked awkward, details were not well finished, and the whole composition seemed "a bit off." I mentioned that this is how some ChatGPT essays feel to instructors, drawing a comparison with the AI's rendering of hands in the images.
There are infinite ways of using AI in our classrooms. Like Google or Wikipedia, AI and ChatGPT are here to stay, and students will use them. One of our challenges is to make sure students are digitally and AI literate, meaning they understand the basics of the tools they use and can use them critically. Just as we need to address the authority behind what is "on the Internet" and question the reliability of sources, we have to unpack the power of AI and interrogate data bias with clear examples drawn from the content of our courses.