There's been a lot of buzz lately about AI-generated historical figures—whether it's an AI Agatha Christie guiding students through writing exercises, or an AI Anne Frank saying something distressingly naïve about the Holocaust. Reactions tend to swing between fascination and outrage. Critics worry that using AI to recreate historical personas distorts truth, disrespects legacies, or contributes to a world where we can't trust anything anymore.
I get it. These concerns aren't coming from nowhere. But I'd argue that they're often misdirected, or more precisely, they reflect problems that already exist in every medium we've ever used to engage the past—books, films, museums, even reenactments. AI undoubtedly intensifies them, makes them more visible, and (if we do it right) gives us a better chance to confront them head-on.
Let me walk through the five most common criticisms I hear—and offer a different way to think about what's really at stake.
1. "It's Wrong to Recreate a Historical Figure Without Their Consent"
This is probably the most morally serious objection, and in some cases, it absolutely holds. If we're using a real person's likeness or voice to endorse a product, manipulate public opinion, or invent positions they never took, we're on shaky ground. But this is not a problem unique to AI—it's a problem of bad faith interpretation, and it applies just as much to traditional biographies or dramatizations.
The history of representation is littered with examples: from Shakespeare's reimagining of historical monarchs to Oliver Stone's controversial portrayals in films like "JFK" and "Nixon," we've long grappled with ethical questions around depicting real people. Netflix's series "The Crown" sparked debate about its fictional conversations between historical figures, with some critics arguing it misrepresented real people who couldn't defend themselves.
What matters isn't whether we're using AI. What matters is how we're using it.
We need guidelines. Transparent sourcing. Clear framing. And yes—sometimes a firm "no" when a simulation would cross ethical lines. But we also need to recognize that history is always interpretive. A well-designed AI persona grounded in primary texts and scholarly interpretation doesn't erase consent—it extends historical conversation. It lets students ask, "Where did this claim come from?" and demand to see the vellum, as one of my old professors used to say.
2. "People Will Think the AI Version Is the Real Person"
This is a real risk—but again, it's not new. People have taken Oliver Stone films as historical truth. They quote fictionalized memoirs like gospel. Consider how many Americans believe they understand Abraham Lincoln through Daniel Day-Lewis's portrayal, or how "The Social Network" shaped public perception of Mark Zuckerberg despite significant fictional elements.
The difference with AI is interactivity. It feels more personal. And that feeling of authenticity can lead to confusion.
So what's the answer? Design with friction.
Let AI personas say:
"This response is based on a diary entry from July 1944—would you like to read it?" Or: "That's one interpretation. Another scholar reads it differently—want to compare?"
This approach is already being implemented in some contexts. The USC Shoah Foundation's "Dimensions in Testimony" project, exhibited at institutions including the United States Holocaust Memorial Museum, allows visitors to ask questions of Holocaust survivors through AI-enhanced video recordings, but clearly frames these as recordings, not live interactions, and provides context for the testimonies. The foundation applies the same principles across its interactive biographies.
Don't hide the mediation—expose it. Let students argue with AI Plato or AI Anne Frank the way they'd argue with a book, a professor, or a classmate. Teach them that history is constructed, debated, and revised. That's exactly what we want from humanistic education.
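What "design with friction" might look like in practice can be sketched in code. The structure below is purely illustrative, not any existing system's design: a persona reply is never a bare string but an object that carries its sources and competing readings, so the interface can always offer the "would you like to read it?" step. All names and example text here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class SourcedReply:
    """A persona response that exposes its mediation rather than hiding it."""
    text: str                                            # what the persona "says"
    sources: list = field(default_factory=list)          # citations backing the claim
    interpretations: list = field(default_factory=list)  # competing scholarly readings

    def render(self) -> str:
        """Format the reply with its sources and an invitation to compare readings."""
        lines = [self.text]
        if self.sources:
            lines.append("Based on: " + "; ".join(self.sources))
        if self.interpretations:
            lines.append("Other readings: " + "; ".join(self.interpretations))
        return "\n".join(lines)

reply = SourcedReply(
    text="In spite of everything, I still believe that people are really good at heart.",
    sources=["Diary entry, 15 July 1944"],
    interpretations=["Read by some scholars as defiant hope, by others as tragic irony"],
)
print(reply.render())
```

The design choice is that the citation travels with the utterance, so a front end cannot display the one without the other: the mediation is structurally impossible to hide.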
3. "This Is a Slippery Slope to Commercializing the Past"
Yes. And we're already well down that slope.
We've been selling the past for centuries—in textbooks, movies, video games, historical tourism. From Colonial Williamsburg to "Assassin's Creed," from souvenir Roman gladiator costumes to Broadway's "Hamilton," commercialization of history is hardly new. The question isn't whether AI will commercialize the past. The question is whether educators, scholars, and the public will engage early and ethically, before the worst actors set the tone.
If we don't want a future of AI Churchill selling cryptocurrency or AI Aristotle analyzing NBA trades, then the best thing we can do is build non-commercial, pedagogically grounded models that set a higher standard.
Don't retreat. Lead.
4. "This Violates the Integrity of the Author or Thinker"
This concern is meaningful—especially for literary figures whose voices and styles were painstakingly crafted. But again, the problem isn't the use of AI—it's the quality of interpretation. Bad AI personas are just bad literary criticism with a user interface.
Take Nietzsche, for example. For decades, much of what we thought we "knew" about him came through the lens of his sister, Elisabeth—a Nazi sympathizer who selectively edited his works to suit her ideology. The result? A massive distortion of his philosophy, completely divorced from his own values.
That wasn't AI. That was a human with an agenda.
AI might actually help us untangle those distortions—if we train it on the right texts, document its sources, and let it model multiple, even contradictory, interpretations. When AI Nietzsche says something, we can demand to know: Is that from Thus Spoke Zarathustra, from a scholar, or from a later political hijacking?
5. "Won't This Replace Teachers?"
No. It changes the role of the teacher—but it doesn't eliminate it.
Just as calculators didn't end math instruction and Google didn't replace librarians, AI won't replace teachers. It might replace bad worksheets and surface-level lectures. And thank goodness. Because what AI does best—on-demand access to information, simulation of debate, modeling of scholarly dialogue—frees teachers up to do what they do best: mentor, contextualize, challenge, guide.
And if you're worried about making the past feel like a product? It already is. AI doesn't change that. What it can change is the depth of the experience—if we design and use it wisely.
The Current Technical Reality
Before embracing AI historical personas, we must acknowledge their significant technical limitations. Current large language models struggle with several challenges that affect historical representation:
Hallucination of historical details: AI systems frequently generate plausible-sounding but entirely fictional information, particularly problematic when representing historical figures.
Temporal consistency: Models often struggle to maintain accurate period-appropriate knowledge, language, and worldviews without anachronistic slippage.
Nuance in representing evolving views: Historical figures often changed positions throughout their lives, but AI tends to flatten these complexities into simplified representations.
Source prioritization: AI models lack sophisticated historiographical judgment about which sources should carry more weight for different types of questions.
These limitations don't mean we should abandon the technology, but they do mean we need to design AI historical personas with these constraints in mind, building systems that acknowledge uncertainty and limitations rather than projecting false confidence.
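One way to "design with these constraints in mind" is to make them explicit in the persona's configuration and to screen outputs against them. The sketch below is a hypothetical illustration of that idea, not a real product's prompt or safeguard; the constraint wording and the crude string-matching check are both assumptions for the sake of example.

```python
# Hypothetical persona configuration encoding the four limitations above
# as explicit instructions. Wording is illustrative only.
PERSONA_CONSTRAINTS = """You are a simulation of a historical figure, built from
documented sources. You must:
1. Cite the source (letter, diary, published work) behind every factual claim.
2. Say "I do not have a documented basis for that" rather than invent details.
3. Use only knowledge available within the figure's lifetime; flag anachronisms.
4. Note when the figure's recorded views changed over time, rather than
   presenting a single fixed position.
5. Present competing scholarly interpretations where they exist."""

def violates_constraints(reply: str) -> bool:
    """Crude screening heuristic: flag replies that assert first-person facts
    without any citation marker. A real system would need far more than
    string matching, but the shape of the check is the point."""
    lowered = reply.lower()
    asserts_fact = any(w in lowered for w in ("i wrote", "i believed", "i said"))
    cites_source = any(w in lowered for w in ("diary", "letter", "source:", "documented"))
    return asserts_fact and not cites_source
```

Even a toy check like this changes the failure mode: an unsourced claim becomes a flagged event to review rather than a confident answer the user never questions.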
Evaluating AI Historical Personas: A Framework for Humanities Professionals
For educators, curators, and humanities professionals considering using AI historical personas, here's a practical framework for evaluation:
Source transparency: Does the system clearly indicate which primary and secondary sources inform its responses? Can users access these sources directly?
Epistemic humility: Does the AI acknowledge uncertainty, competing interpretations, and the limits of historical knowledge?
Historiographical awareness: Does the system represent different schools of historical interpretation rather than presenting a single authoritative narrative?
Contextual framing: Is the AI persona presented with clear context about its nature, limitations, and the interpretive choices made in its creation?
Ethical boundaries: Are there clear guidelines about sensitive topics, especially for historical figures who experienced trauma or persecution?
Pedagogical purpose: Does the AI historical persona serve clear educational goals that couldn't be achieved as effectively through other means?
Cultural authority: Have relevant cultural, community, or scholarly stakeholders been consulted, especially for figures from marginalized groups?
This framework encourages a thoughtful, critical approach that aligns with humanistic values while acknowledging both the potential and the limitations of the technology.
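For evaluators who want the framework above in checklist form, it can be encoded directly. This is a minimal sketch under the assumption that each criterion reduces to a yes/no judgment; the key names and questions simply restate the seven criteria and are not drawn from any existing rubric or tool.

```python
# The seven criteria from the framework above, as yes/no questions an
# evaluator answers for a given AI historical persona system.
CRITERIA = {
    "source_transparency": "Does it cite the primary and secondary sources behind its responses?",
    "epistemic_humility": "Does it acknowledge uncertainty and competing interpretations?",
    "historiographical_awareness": "Does it represent multiple schools of interpretation?",
    "contextual_framing": "Is the persona clearly framed as a mediated construction?",
    "ethical_boundaries": "Are there guidelines for sensitive topics and traumatic histories?",
    "pedagogical_purpose": "Does it serve goals other media could not serve as well?",
    "cultural_authority": "Were relevant scholarly or community stakeholders consulted?",
}

def evaluate(answers: dict) -> tuple:
    """Split the criteria into (met, unmet) given an evaluator's yes/no answers.
    Any criterion not answered is treated as unmet."""
    met = [k for k in CRITERIA if answers.get(k, False)]
    unmet = [k for k in CRITERIA if k not in met]
    return met, unmet

met, unmet = evaluate({"source_transparency": True, "epistemic_humility": True})
```

Treating an unanswered criterion as unmet is deliberate: the burden of proof sits with the system, not with the skeptic.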
Final Thought: Fidelity and Protection
Here's the bottom line for me:
The dead deserve fidelity. The living deserve protection.
When we work with AI versions of historical figures, we owe them honest representation, source transparency, and interpretive humility. But the truly urgent ethical frontier is the use of generative AI to deepfake living people—to smear reputations, spread lies, or destroy lives. That's where the real danger lies.
So let's not throw out the possibility of using AI to teach history well. Let's seize it—and build the ethical, interpretive, and pedagogical scaffolding to do it right.
The past isn't going away. But how we engage with it is changing fast. As humanities professionals, we have both the expertise and the responsibility to shape these technologies before they shape our understanding of history. Whether you're a teacher, historian, museum professional, or simply someone who cares about how we represent our shared past, now is the time to engage—to bring our humanistic values and methods to the development of these new tools.
Let's meet the moment with care, creativity, and a healthy insistence that we always show the vellum.