As generative AI infiltrates our academic spaces more and more, liberal arts schools face a particularly troubling threat. Other types of institutions may be more focused on career preparation and, consequently, accept the experience of education as a means to an end. At such institutions, generative AI programs may be a welcome addition to the processes behind our academic products, so long as they streamline those processes. But liberal arts schools are aiming at a loftier goal: one of thinking for its own sake, of growing our minds holistically, and of situating our academic pursuits within a wider cultural conversation.
As a student who quite literally signed up to follow this model of education, I found President Carmen Twillie Ambar’s statement about Oberlin’s newly announced Year of AI Exploration deeply worrying. From its first line, which asks us to type a prompt into ChatGPT to discern its greatness for ourselves, it reads like a sales pitch. It frames AI as something omnipotent and inevitable: an emblem of innovation so juicy that we need to overhaul all operations and reallocate funds just to step into its world of boundless potential. Let’s acknowledge the reality of the situation: Kids are no longer learning how to write, the planet is being sucked dry, and our collective value system about the very essence of creativity is buckling beneath the weight of the machine.
Say we all learn to use AI “responsibly.” What would that mean? When our entire job as students is to learn how to think, where would be a good spot to introduce an entity that is designed to think for us? The life cycle of a written product, from its onset as a spark in our minds to its final form as words on a page, is necessarily full of awkward stages. We push and pull at our ideas, wrestling with them through outlines and rough drafts, before they finally settle into a coherent shape. Well-meaning AI optimists see programs like ChatGPT as friendly companions that can smooth over the wrinkles in our path to well-packaged creative realization, without understanding that turbulence is precisely where our ideas and intellects thrive. Creativity is not throwing an idea into a void and watching it pop back out in neat, aesthetic form; it is a slow, embodied, iterative process that needs all of its parts to function. Yet even as AI grows more and more popular and alarm bells sound in our heads, Oberlin students, and progressive young people in academia more generally, remain notably silent.
In the name of efficiency, humanity has already corrupted and devalued the productive processes behind so many of our resources and creative products. We are alienated from the labor that goes into the shirts on our backs, the phones in our hands, everything we see and interact with on a daily basis. It all just seems to appear, neatly assembled, right in front of us, whenever we decide that we need it. What stage of capitalism are we in when our own linguistic communication, arguably the most human thing about us, is next on the chopping block?
Maybe I’m getting too ideological, too lost in abstractions. (Classic Oberlin student, right?) I’m worried by the vague, reverent language used in Ambar’s statement, but I understand that our college wants to be helpful and proactive at a time when no one is quite sure what to do next. I’m even willing to believe that AI has a place in some industries, but we need to draw a bold line, and soon, between the generative AI that responds to user prompts and the broader umbrella of traditional AI that has been drifting around in the ether for years.
Generative AI has garnered unique hype because it appeals directly to the layman consumer, indulging our impulse to be lazy as well as capitalism’s efficiency dogma. Traditional, non-generative AI can analyze large datasets, make inferences, and find patterns. It might be that AI software trained on medical literature could be a revolutionary tool for curing diseases, to the extent that its data-crunching power outstrips ours, but that is not the technology President Ambar’s letter refers to. That is not the technology that Oberlin is set to pour a mighty helping of our already-fractured budget into, while our academic departments and the real people who staff them suffer. Such a gross reallocation of funds contradicts any notion that this initiative is purely “exploratory.” Dangle a trendy, all-expenses-paid luxury technology in front of anybody, and they will grab onto it, especially students bogged down by heavy workloads and those who haven’t yet had the privilege of building up their own ideological opposition. Even if we try to limit our school’s usage to the purely innocuous (if any generative AI-assisted task, however menial, can even be called that), we will start down a very slippery slope.
Near the end of her letter, Ambar states, somewhat eerily, “AI is here. To ignore it would be to do so at our peril.” AI may be here, but so are we: a student body that is strong enough to protect our education against a cutting edge that has spun out of control, sick with reckless greed. Maybe the Silicon Valley prophets are right, and AI is inevitable. Maybe in 10 years, the framework of our cognition will be permanently intertwined with our computer software, and there will have been nothing a tiny liberal arts school could do about it. But we pride ourselves on being fanciful, on nurturing our wild intellects in spite of a world that wants to sterilize them into corporate fodder. So why stop now? Why let Big Tech into our bubble?
