Like all good horror films, the creepy new M3GAN (dir. Gerard Johnstone, released nationwide on January 6, 2023) has a moment where it shows its hand, as if to say, “see, this isn’t just a schlocky, silly horror show; this is a film about something serious.”
Desperate to figure out what might be causing the super-advanced AI android M3GAN to develop complex interiority and, along with it, murderous intent (not to mention a penchant for zingy one-liners), one of the characters blurts out, “it’s not programmed to have a conversation, it’s programmed to construct deliberate-sounding word salad from a sea of data and spit it back at you!” In other words, this titanium-and-silicon cyborg is supposed to be a $10,000 toy for privileged kids, not a thinking being with its own motives, intentions, and…humanity.
But of course, this being a horror movie, that’s exactly what happens. Don’t worry—there aren’t any spoilers in this post. I wouldn’t want to detract from the superb viewing experience and the uncanniest of uncanny valleys that this film presents in larger-than-life fashion. Seriously, if you get out and see one movie this weekend, check out M3GAN. It’s a fun romp with some thought-provoking red meat thrown in.
No, I bring this up because I think M3GAN (the acronym stands for “Model 3 Generative Android”) comes along at the perfect time in that it encapsulates anxieties around AI—and specifically OpenAI’s ChatGPT—that folks in higher education find themselves grappling with as a new semester begins.
It’s worth noting that M3GAN’s plot revolves around a young child in tough circumstances. Cady (Violet McGraw) is a young girl whose parents are violently killed in a snowplow accident at the start of the film. Cady goes to live with her aunt Gemma (Allison Williams) in suburban Seattle. However, Gemma is a Career Woman (the movie telegraphs this relentlessly), so she doesn’t have much patience or know-how when it comes to raising precocious nine-year-old girls whose parents have just been killed in a violent snowplow tragedy. When Cady asks if she can play with some of Aunt Gemma’s toys, she’s told no because they aren’t “toys, they’re collectibles.”
Gemma works as a high-level toy designer/robotics engineer at the Funki Toy company, and as the film opens, she’s pursuing a passion project of her own—much to the initial dismay of her short-tempered boss (played with zest and much anger by Ronny Chieng). Soon, even Gemma’s boss comes to see the enormous market value of a humanoid robot like M3GAN, especially one that can double as an always-on caretaker for children, one whose soft features and bland, gently-lilting voice can remind kids to flush the toilet and wash their hands, deliver timely sermonic interludes on death, and even rip the ears clean off the heads of childhood bullies.
In other words, the film cannily plops the horror right down into the midst of parents’ contemporary anxieties about screen time, net nannies, and even more existential challenges like finding time enough away from the insane demands of late capitalist careerism to actually build a meaningful relationship with your offspring. Writing in Variety, Owen Gleiberman notes,
“M3GAN” takes place in a world — namely, ours — where parents, bemoaning how much screen time they allow their children, give in to the impulse anyway, because it feels both easy and inevitable. The film says that we’re already letting computer technology raise our kids. M3GAN, the willowy programmed companion who always says the perfect thing, becomes the logical culmination of that trend.
As we have been shown time and time again in the last several years, in cerebral horror entries like Ari Aster’s Hereditary (2018) and Jordan Peele’s Get Out (2017) and Nope (2022), the most satisfying scares come from films that elevate contemporary social anxieties, plucking these fears right out of our inner psyches and magnifying them on the proverbial silver screen. At the moment, this is precisely what AI-enabled chat generators like ChatGPT are doing to our biggest existential fears about the future of higher education and just what we’re all doing here in the first place.
Is ChatGPT higher ed’s M3GAN moment?
The arrival of ChatGPT in November rekindled a flame among faculty and higher education observers about plagiarism, writing, and the learning goals that underwrite so many of our assessments. I wrote a brief link post about this and Adam and I devoted an episode of Digital Gardening to discussing ChatGPT’s implications for teaching and learning.
There have been two broad types of responses so far. In one camp are faculty who view ChatGPT and the coming AI revolution more generally as an opportunity to drill down on why we assign so much writing in college in the first place and to change our assessments accordingly. What are we trying to get students to learn, do, explore, and verify in these writing assessments? How can we revise these assessments to make them more resistant to the lure of ChatGPT? As a writing teacher and researcher, I have a lot of sympathy with this camp, believing (as I do) that learning to write is learning to think. Writing is a technology for synthesis, analysis, and (ultimately) intellectual exploration and dissemination that helps to usher students into the discourse of the academy and educated society, that helps them sharpen their thinking by forcing difficult encounters with others’ ideas and positions. This camp advocates such changes as more in-class writing sessions, more multimodal projects, and writing in a variety of genres that make it more difficult to simply call up ChatGPT for a 500-word summary of Beowulf.
In the other camp are those who want to figure out a technological workaround—a way to regulate the use of AI-enabled tools like ChatGPT to make it more difficult for students to use them to cheat and plagiarize and thereby short-circuit the learning process. Perhaps you’ve heard by now of the Princeton undergraduate who spent part of his Christmas break developing an app that can determine whether an essay was penned by a human or by ChatGPT. The app’s creator, Edward Tian, formerly a data journalist for the BBC, calls it GPTZero, and it seems to work in a manner similar to Turnitin.com. Except that where Turnitin.com simply searches a massive database of already-submitted student papers and the open web to look for textual similarities, Tian’s GPTZero analyzes features of the text that might give away whether it was written by AI, such as its “complexity and burstiness” (whatever that is). Similarly, OpenAI, the creators of ChatGPT, have been working on a watermarking feature intended to curb students who want to simply order up an AI-written masterpiece for this week’s discussion post.
Personally, I have my doubts about the effectiveness of these technological fixes and regulatory schemes. To my mind, Beth McMurtrie, writing in The Chronicle of Higher Education, hits it right on the head when she writes,
Scholars of teaching, writing, and digital literacy say there’s no doubt that tools like ChatGPT will, in some shape or form, become part of everyday writing, the way calculators and computers have become integral to math and science. It is critical, they say, to begin conversations with students and colleagues about how to shape and harness these AI tools as an aid, rather than a substitute, for learning.
In doing so, they say, academics must also recognize that this initial public reaction says as much about our darkest fears for higher education as it does about the threats and promises of a new technology. In this vision, college is a transactional experience where getting work done has become more important than challenging ourselves to learn. Assignments and assessments are so formulaic that nobody could tell if a computer completed them. And faculty members are too overworked to engage and motivate their students.
Never before in history has the solution to a new technology been to find a way to undermine or work around that technology. Instead, I think ChatGPT and AI provide us with the perfect opportunity to consider how this tool can be harnessed to empower teaching and learning. If this means, once and for all, the end of boring, pointless, copy-and-paste essays and writing assessments—assignments that students dread writing and instructors loathe grading—then so much the better.
Perhaps the next iteration of writing assessments in higher education will finally take seriously what folks in rhetoric and composition studies have been saying for decades now: that writing is far more complex than most people allow. Finding a way to preserve the habits of mind and intellectual calisthenics that sound writing pedagogy enables in a world dominated by AI is going to mean not only affirming this complexity, but also interrogating the implications for education, politics, work, and life of these emergent human-machine assemblages.
This will have profound effects on how we conceptualize the university going forward. I say it’s about time.