I sat fuming in the meeting as my colleague described using ChatGPT to help write emails and Teams posts. Before I could stop myself, I launched into a rant: "Can't we just write our own emails anymore? Is it really that hard to compose a simple message? What are we losing when we start copying and pasting from a machine?" Even as the words left my mouth, I felt that peculiar shame of the self-righteous – I was being a curmudgeon, holding forth on something I knew nothing about and dismissing the thoughtful ideas of colleagues I respected. That moment stayed with me, prompting me to explore ChatGPT and Claude myself, to understand what these AI assistants could and couldn't do. What I discovered over the summer of 2024 humbled me: for many routine business communications, AI assistants weren't just serviceable – they were remarkably effective and efficient, producing strong drafts in a fraction of the time I'd spend staring at a blank screen. I owed my colleagues an apology (which I made and they accepted). But more than that, I owed myself a deeper understanding of what I'd been so quick to dismiss.
My reaction in that meeting, I realized, came from somewhere deeper than mere technological skepticism. As someone with a PhD in history, years of teaching humanities at every level from middle school to college, and extensive experience in educational administration and curriculum development, I had built my career on deeply human forms of interaction and learning. I was – and am – a humanist to my core. But as I continued exploring AI over the following months, something unexpected happened: rather than threatening my humanist sensibilities, the technology began raising questions that seemed to demand exactly the kind of careful analysis and contextual thinking that humanities training cultivates. How do we understand meaning in human-AI interactions? What happens to creativity when it's augmented by machine intelligence? What does AI reveal about how we think, write, and create? These weren't just technical questions – they were fundamentally humanistic ones, the kind I'd spent my career helping students explore in different contexts. I found myself drawn into an investigation that required both skepticism and openness, critical thinking and curiosity – in short, the very tools of humanities scholarship.
My initial knee-jerk reaction (emphasis on the jerk) blinded me to something fundamental: this was, in many ways, a dream come true for me. While I'd always been interested in technology and computers, unlike many of my friends I never had the patience or inclination for programming. Now here was something revolutionary – the ability to get a computer to do what I wanted simply by asking for it in plain English. As I explored this new way of interacting with machines, the questions that emerged weren't just about utility but about the nature of the interaction itself. How was this even possible? What did it mean that I could have what felt like a real conversation with a machine? These explorations released a cascade of questions that went to the heart of human experience: questions of authenticity and originality, the limits of consciousness, and the psychology, sociology, and anthropology of human-AI interactions.
Dismissal of the new, whatever motivates it, is not useful in this case. Far from being the death of humanistic ideas and ideals, this new technology provides tremendous grist for the humanist mill. It compels us to engage in new ways and from fresh perspectives with questions that have always been the domain of the humanist. What I realized, with growing excitement, was that my training as a humanist – the kinds of questions I and my fellow humanists have always asked – provides exactly the tools we need to help humanity handle this new technology responsibly and for human benefit.
A particular moment crystallized this for me. I was using ChatGPT to help analyze classroom instructional data, the kind of work I'd done manually for years. The AI identified patterns and trends across different classes and teaching approaches in under thirty seconds – impressive, but not surprising. What caught me off guard was when I asked it to explain its reasoning. Its response revealed connections in the data I hadn't considered, forcing me to examine my own assumptions about how we understand effective teaching and learning. This wasn't just about efficient analysis; it was about confronting fundamental questions about data, interpretation, and meaning – questions that humanists are uniquely trained to address. Who decides what patterns matter? How do we balance quantitative insights with qualitative human experience? What assumptions are we building into our AI-assisted analyses? These aren't technical questions waiting for technical answers – they're human questions that demand humanistic thinking. And as AI tools become more deeply embedded in our educational systems, having humanists at the table isn't optional – it's essential for ensuring AI serves human needs and values.
Consider how a historian approaches a document: we examine not just what it says, but its context, its unstated assumptions, its potential impacts on different groups of people. We ask who benefits, who might be harmed, whose voices are heard and whose are silent or silenced. These same analytical tools become vital when examining AI systems and their outputs. When an AI makes a recommendation, suggests a course of action, or produces content, we need to apply the same rigorous ethical and contextual analysis. What assumptions are built into its responses? Whose perspectives might it be privileging or excluding? What are the potential consequences – not just practical, but social and ethical – of implementing its suggestions? These are questions that humanists are trained to ask and explore.
What's at stake if humanists remain on the sidelines? The dystopian scenarios that keep people awake at night – AI systems that disregard human dignity, algorithms that amplify social inequities, language models that homogenize human expression, automated decisions that prioritize efficiency over empathy – become far more likely without humanist involvement. Consider what happens when AI systems are developed and deployed without serious attention to ethics, human experience, and cultural context: automated hiring systems that perpetuate historical biases, content filters that suppress minority viewpoints, or educational tools that standardize learning at the expense of individual growth and creativity. These aren't just hypothetical risks – they're already emerging in systems designed without sufficient humanistic perspective. The technical experts building AI systems are asking "Can we?" Humanists must be there to help answer "Should we?" and more importantly, "How can we do this in ways that enhance rather than diminish human flourishing?"
My journey from dismissive skeptic to engaged observer taught me something crucial: when it comes to AI, humanists can't afford to be either Luddites or cheerleaders. We need to be what we've always been – careful observers, critical thinkers, and above all, defenders of human flourishing. The questions that AI raises – about consciousness, creativity, meaning, and ethics – are precisely the questions humanists have grappled with for centuries. Our training and tools aren't just relevant to the AI revolution; they're essential to ensuring it serves human needs and values.
In the coming weeks and months (years?), we'll explore specific ways humanists can engage with AI, examining topics like how AI changes the nature of classroom discussions, what machine creativity means for human artists and writers, what happens to human relationships when AI enters the conversation, and whether our long philosophical tradition of studying human consciousness might help us recognize – or rule out – the emergence of machine consciousness. I'll also be sharing regular updates on the latest developments in AI from a humanist's perspective – everything from new research on how AI impacts student writing to emerging discussions about AI consciousness. For now, though, I'll leave you with this: If you're a humanist who's been either dismissing or dreading AI, consider this an invitation to join a different kind of conversation – one that needs your voice, your training, and your perspective.