Article: Course and Assignment Design

How GenAI Affects Our Writing and Thinking


Our Recommendation

This short essay offers real insight into how we should think about students' (and our own) use of AI. O'Rourke's observations can help inform strategies for motivating students to complete assignments on their own.

I came to feel that large language models like ChatGPT are intellectual Soylent Green — the fictional foodstuff from the 1973 dystopian film of the same name, marketed as plankton but secretly made of people. After all, what are GPTs if not built from the bodies of the very thing they replace, trained by mining copyrighted language and scraping the internet? And yet they are sold to us not as Soylent Green but as Soylent, the 2013 “science-backed” meal replacement dreamed up by techno-optimists who preferred not to think about their bodies. Now, it seems, they’d prefer us not to think about our minds, either. Or so I joked to friends....

When I write, the process is full of risk, error and painstaking self-correction. It arrives somewhere surprising only when I’ve stayed in uncertainty long enough to find out what I had initially failed to understand. This attention to the world is worth trying to preserve: The act of care that makes meaning — or insight — possible. To do so will require thought and work. We can’t just trust that everything will be fine. L.L.M.s are undoubtedly useful tools. They are getting better at mirroring us, every day, every week. The pressure on unique human expression will only continue to mount. The other day, I asked ChatGPT again to write an Elizabeth Bishop-inspired sestina. This time the result was accurate, and beautiful, in its way. It wrote of “landlocked dreams” and the pressure of living within a “thought-closed window.”

Let’s hope that is not a vision of our future.