Preface
For many years, as Head of School, I had the profound privilege of speaking at graduation ceremonies. There's something uniquely moving about that moment—watching young minds cross the threshold from one phase of life to another, carrying with them not just knowledge but the capacity to shape the world they're inheriting. Those ceremonies never failed to fill me with hope and a deep sense of responsibility for the future we were helping to create.
I miss that opportunity now that I'm no longer running a school. But perhaps that's why I find myself returning to this format, imagining what I might say to graduates today as they step into a world transformed by artificial intelligence. These thoughts are offered in the spirit of those ceremonies I remember so fondly—with deep respect for the journey ahead and genuine excitement about what this generation will accomplish.
When I was your age—and yes, I'm going to start with that phrase because sometimes the old clichés contain the deepest truths—when I was your age, we worried about nuclear war, whether video games would rot our brains, and acid rain destroying the forests. The anxieties haven't really changed, have they? The Cold War ended, but we still live with existential threats. Video game concerns morphed into social media fears. Acid rain became melting glaciers and rising sea levels. Same anxieties, different labels.
But I would argue that we are at a tipping point. Today, you're graduating into a world where machines can write poetry, analyze literature, engage in theological debate, and even help craft this very speech. The questions your generation faces aren't just about how much time we spend with technology, but about what it means to be human when machines can do things we once thought only humans could do.
The Inheritance
You've inherited a remarkable moment in history. Not just because of the technology itself, but because you're the first generation to grow up alongside AI that can truly participate in human conversation. Think about that for a moment. Previous generations learned to use calculators, computers, and smartphones as tools. You're learning to collaborate with systems that can think alongside you, challenge your ideas, and even surprise you with their insights.
But you've also inherited something else: centuries of human anxiety about what happens when our creations become too sophisticated. From Prometheus stealing fire from the gods to Frankenstein's monster, from medieval fears about automata to Victorian anxieties about industrialization, humans have always struggled with the same fundamental question: What happens to our humanity when we create things that seem almost human?
This inheritance isn't a burden—it's a gift. Because it means you understand, perhaps better than any generation before you, that the most important questions about AI aren't technical questions. They're human questions.
The Questions That Matter
As you leave this institution and enter a world increasingly shaped by artificial intelligence, you'll face questions that require exactly the kind of thinking your education has prepared you for. Not just the ability to find information or solve problems, but the ability to ask better questions.
The question isn't whether AI can think—it's how we want to think alongside it.
I've watched students use AI to write better essays, not by having it do the work for them, but by using it to challenge their assumptions, explore different perspectives, and push their thinking in new directions. I've also seen students use it to avoid thinking altogether. The technology itself doesn't determine which path you'll take. Your choices do.
The question isn't whether AI will replace human expertise—it's how human expertise will evolve.
Every field you might enter—law, medicine, education, social work, journalism, the arts—is being transformed by AI. But rather than making human expertise obsolete, AI is revealing just how complex and valuable truly human skills are. The ability to understand context, to read between the lines, to build trust, to navigate ethical complexity—these become more important, not less, in an age of artificial intelligence.
The question isn't whether AI systems are conscious—it's how we want to live in relationship with them.
Consciousness may be the wrong framework entirely. Yes, we need to grapple with whether AI deserves rights and what threats it might pose—these are essential questions. But we also need to ask: How do these systems affect human relationships? How do they participate in our communities? What obligations do we have to ensure they serve human flourishing rather than diminishing it?
The Skills You'll Need
Your teachers have spent years teaching you to think critically, to analyze texts, to understand context, to question assumptions. They probably didn't realize they were also preparing you for the AI age, but they were. Because the skills that make you effective humanists are exactly the skills you'll need to be effective humans in an age of artificial intelligence.
Verification without paranoia. You've learned to check sources, to read laterally, to build webs of context around information. These skills will serve you well with AI, which can hallucinate facts as confidently as it can verify them. But the goal isn't to become suspicious of everything—it's to become skilled at distinguishing signal from noise.
Deep reading in a world of surface interaction. AI can provide quick answers, surface-level analysis, and efficient summaries. Your ability to read deeply—to understand nuance, to grapple with complexity, to sit with ambiguity—becomes more valuable, not less, in a world of instant responses.
Ethical reasoning about unprecedented situations. The ethical frameworks you've studied—from ancient philosophy to contemporary applied ethics—weren't designed for a world of artificial intelligence. But the skills of ethical reasoning, the ability to think through consequences and consider different stakeholders, will help you navigate questions that have no historical precedent.
The Responsibility
But with these skills comes responsibility. You're graduating into a world where AI development is largely happening in corporate boardrooms and government agencies, often without the perspectives that a humanities education provides. The questions about AI's impact on human dignity, creativity, and community—these aren't side issues to be addressed after the technology is built. They're central questions that should shape how the technology is developed in the first place.
You have a responsibility to join these conversations. Not because you need to become programmers or policy experts, but because you bring perspectives that are desperately needed. When tech leaders talk about "alignment," they often mean aligning AI with human preferences or values. But whose preferences? Which values? These are fundamentally humanistic questions that require humanistic thinking.
The future of AI won't be determined by algorithms. It will be determined by the choices humans make about how to live alongside these systems. Your generation will make many of those choices.
The Opportunity
Here's what excites me most about your future: You're inheriting powerful tools at a moment when the most important work requires distinctly human capabilities.
Climate change, social inequality, political polarization, global health challenges—none of these problems will be solved by AI alone. They require the ability to understand human complexity, to navigate cultural differences, to build coalitions across difference, to communicate effectively, to think ethically about long-term consequences. In other words, they require exactly the skills that a humanistic education develops.
AI might help you research more efficiently, analyze data more quickly, or communicate more broadly. But the creativity, empathy, and wisdom needed to use these tools well—that comes from you.
Moving Forward
As you leave here today, you're not just graduating into the AI age—you're inheriting it. The relationship between humans and artificial intelligence won't be determined by the technology itself, but by the choices you make about how to engage with it.
You can choose to be thoughtful early adopters rather than uncritical enthusiasts or fearful resisters. You can choose to ask better questions rather than simply seeking easier answers. You can choose to use these tools to enhance human connection rather than replace it.
Most importantly, you can choose to remember that the most profound questions raised by artificial intelligence aren't technical questions about what machines can do. They're human questions about who we want to be.
The technology will keep evolving. The fundamental questions about human dignity, creativity, relationship, and meaning—these remain constant. Your education has prepared you not just to adapt to change, but to help shape it.
The world needs your perspective. The questions that will define this era of human history need your thinking. The future of human-AI interaction needs your voice.
Go forth and be thoughtfully, critically, courageously human. The world is waiting for what you'll do next.
Congratulations, Class of 2025.
Great advice.
3 questions:
1. Was any (or all) of this generated by AI?
2. Did you actually run a school?
3. If yes, which school was it and when did you run it?
These questions are meant to clarify what is real and/or truthful in this generally thoughtful, thought-provoking piece. Because no matter how well intentioned, parables and other fictional writing should not pretend to be the actual first-person experiences of the author.