In recent years, tech leaders and AI researchers have increasingly raised alarms about artificial intelligence as an existential threat to humanity. We've seen open letters calling for development pauses, congressional testimonies warning of unprecedented risks, and headlines debating whether superintelligent machines might someday destroy civilization. But why does the West view artificial intelligence with such profound anxiety?
As a historian of ideas, I find myself fascinated not just by these fears themselves, but by their deep cultural roots. The terror of machines surpassing their creators, of artificial beings threatening humanity, of technology escaping control - these aren't new anxieties. They're deeply embedded in Western philosophical and religious thought, appearing in forms that would be instantly recognizable to a medieval theologian or Victorian social critic.
This isn't just historical trivia. Understanding these intellectual foundations helps explain why discussions of AI in the West often take on almost theological overtones, with terms like "superintelligence" echoing divine attributes and fears of AI "alignment" mirroring age-old questions about moral agency and free will. It also helps us recognize that our AI anxiety isn't inevitable or universal - it's the product of specific cultural and philosophical traditions.
In this two-part exploration, we'll first trace how Western thought has shaped our relationship with artificial intelligence, from Christian theology's emphasis on human uniqueness to Enlightenment dreams of mechanical progress and Romantic fears of dehumanization. Then, in part two, we'll examine how a different cultural tradition - that of Japan - has produced strikingly different attitudes toward AI and robots, offering fresh perspectives on human-machine relationships.
My hope isn't to dismiss Western AI concerns - many raise vital ethical questions that deserve serious consideration. Rather, by understanding the historical and cultural lenses through which we view AI, we might develop more nuanced frameworks for thinking about these technologies. After all, recognizing our inherited assumptions is often the first step toward moving beyond them.
A note before we begin: The history of Western thought on consciousness, machinery, and human nature is vast and complex, filled with nuanced debates and competing perspectives that have evolved over centuries. What follows is necessarily a simplified narrative, tracing one particular thread through this rich intellectual tapestry. While we'll have to leave many fascinating thinkers and arguments unexplored, I believe this thread illuminates something crucial about our contemporary relationship with artificial intelligence.
The Theological Roots of Human Exceptionalism
The idea that humans occupy a unique and elevated position in creation runs deep in Western theological and philosophical thought. The Biblical concept of Imago Dei—humans made in the image of God (Genesis 1:26–27)—established a fundamental divide between human and non-human that continues to shape how we think about artificial intelligence today. This framework does not merely suggest human superiority; it explicitly ties human uniqueness to reason, consciousness, and moral understanding (Aquinas, Summa Theologica, I, q. 93, a. 4).
Medieval theologians, particularly Augustine of Hippo (354–430) and Thomas Aquinas (1225–1274), interpreted Imago Dei in ways that emphasized rationality and moral agency as defining characteristics of humanity. Augustine argued that human intellect reflected divine wisdom (De Trinitate, XV.12), while Aquinas insisted that reason and free will—not merely physical form—were what made humans God-like (Summa Theologica, I, q. 93, a. 6). These interpretations reinforced the idea that non-human entities, no matter how advanced, could never possess true understanding or moral agency because they lacked a rational soul.
This intellectual inheritance helps explain why modern AI discussions often center on questions of consciousness and "genuine" intelligence versus mere simulation. The long-standing assumption that human cognition is categorically different from all other forms of intelligence informs contemporary AI skepticism. For instance, John Searle's famous "Chinese Room" argument (1980) holds that AI, regardless of its complexity, can only manipulate symbols without true understanding. The argument echoes medieval debates about whether non-human entities could possess reason or moral awareness. Similarly, contemporary AI theorists like Nick Bostrom (2014) and Stuart Russell (2019) continue to contrast human intelligence with artificial processing, often without questioning the assumption of human exceptionalism.
This pattern remains visible in contemporary AI discourse. When ethicists debate whether AI can possess "real" understanding or "genuine" consciousness, they are grappling with questions that would have been familiar to medieval theologians debating the nature of the soul. The key difference is that we have largely replaced theological language with technical terminology, shifting from discussions of soul and divine reason to neural networks and computational cognition—but the underlying assumption of human uniqueness remains intact.
From Medieval Philosophy to Mechanical Reality
The theological framework of human uniqueness evolved during the Scientific Revolution and the Enlightenment, as thinkers shifted their emphasis from divine origins to rational thought as the defining trait of humanity. However, this transformation did not challenge human exceptionalism—rather, it redefined it, grounding it in reason and consciousness rather than in a soul created in God's image.
No figure exemplifies this transition better than René Descartes (1596–1650). His famous dictum, Cogito, ergo sum ("I think, therefore I am"), established consciousness as the foundation of knowledge (Discourse on Method, 1637). Descartes' mechanistic philosophy framed animals and material bodies as purely physical entities operating according to deterministic laws, while human reason was of an entirely different order (Passions of the Soul, 1649). The result was a stark dualistic divide between human consciousness and reason (res cogitans), which he held to be unique, immaterial, and self-aware, and the operations of the material world (res extensa), governed by physical laws and devoid of true understanding.
Descartes was particularly fascinated by automata—the sophisticated mechanical devices of his time, such as clockwork figures and hydraulic-powered statues. While he acknowledged their impressive complexity, he insisted that no machine, no matter how advanced, could truly think or reason. In Discourse on Method (1637), he argued that a machine could never use language flexibly or respond rationally to the full range of situations that humans handle with ease. In the posthumously published Treatise on Man (1664), he went further, analyzing the human body itself as a kind of automaton while reserving thought for the immaterial soul.
This Cartesian framework—consciousness and reason as uniquely human traits, fundamentally different from mechanical processes—continues to shape modern AI debates. When critics argue that AI systems are "just pattern matching" or "merely statistical models," they are echoing Descartes' distinction between genuine human understanding and mechanical imitation (Dreyfus, What Computers Can't Do, 1972; Searle, Minds, Brains, and Programs, 1980). The belief that artificial intelligence can never achieve true thought, only the illusion of it, has deep roots in this Cartesian distinction between mechanical causation and conscious reason, one that remains central to AI skepticism today.
The Machine Age: Progress and Resistance
As mechanical philosophy transitioned from theoretical discourse to industrial reality, abstract debates about human uniqueness took on immediate social and economic significance. The Industrial Revolution (1760–1840) did not just introduce new technologies; it fundamentally reshaped the relationship between humans and machines, turning philosophical questions about human nature into urgent material concerns.
The first wave of industrialization sparked not just resistance to automation, but deep anxieties about how mechanization was altering human society and consciousness. While industrialists celebrated mechanical efficiency, arguing that machines liberated workers from drudgery and increased national wealth, a deeper philosophical anxiety emerged. Adam Smith's The Wealth of Nations (1776) praised the division of labor and specialized, repetitive tasks as keys to maximizing production, but critics saw something more sinister at work: the reduction of human beings to mechanical components in vast industrial systems.
The Luddite movements (1811–1816) embodied this tension. Far from being simple protests against job loss, they reflected a deeper philosophical anxiety that industrialization was reducing humans to mechanical extensions of machines. Thomas Carlyle captured this fear in Signs of the Times (1829), lamenting that society had been overtaken by a "Mechanical Age" in which not just work, but human thought and morality were being mechanized.
This transformation of human labor into mechanical process found its most chilling advocate in Andrew Ure. In his 1835 work The Philosophy of Manufactures, Ure didn't just celebrate industrial efficiency - he explicitly promoted the reshaping of human nature to match mechanical precision. Factory work, in his vision, wasn't merely about production; it was about molding human beings themselves into predictable, machine-like components of the industrial system.
Writers like Samuel Butler saw the darker implications of this mechanical reshaping of humanity. His 1872 novel Erewhon turned industrial optimism on its head, using satire to explore not just whether machines might evolve beyond human control, but whether humans themselves were already devolving into mechanical automatons under industrial capitalism. What began as a simple story about machines taking over became a profound meditation on how industrialization was already taking over human nature itself.
But it was Karl Marx who developed the most sophisticated analysis of this mechanical transformation of human nature. In his Economic and Philosophic Manuscripts of 1844, Marx went beyond simple concerns about job displacement to examine how industrial production was fundamentally alienating humans from their own creative and intellectual capabilities. Workers weren't just operating machines - they were being operationalized, turned into pseudo-machines themselves. By reducing complex human labor to simple, repetitive tasks, industrial capitalism wasn't just producing goods; it was producing a new kind of human being, one shaped by and for mechanical processes.
From Victorian Fears to AI Ethics: The Evolution of Mechanical Anxiety
The Victorian anxieties about the mechanical transformation of human nature did not fade with the industrial age—they evolved, resurfacing with each new wave of technological advancement. By the mid-20th century, as early computers began demonstrating increasingly sophisticated capabilities, the old questions about machines and human identity took on new urgency (Heims, John von Neumann and Norbert Wiener: From Mathematics to the Technologies of Life and Death, 1980).
Norbert Wiener (1894–1964), the founder of cybernetics, found himself wrestling with the same concerns that had troubled Marx and Butler before him. Like his Victorian predecessors, Wiener worried about more than technological displacement—he feared that mechanization might fundamentally reshape how humans perceived intelligence and autonomy itself. In The Human Use of Human Beings (1950), Wiener warned that the real danger wasn't that machines might outperform humans, but that we might come to view human thought and behavior through an increasingly mechanical lens. While his concerns about mechanization changing human behavior parallel certain aspects of Marx's theory of alienation, Wiener approached the issue from an engineering and ethical perspective rather than an explicitly economic or social framework.
As artificial intelligence emerged as a distinct field in the 1950s and '60s, these philosophical concerns gained new urgency and specificity. Joseph Weizenbaum, a pioneering computer scientist, found himself transformed from AI innovator to thoughtful critic through his experience with ELIZA, an early natural language processing program he developed in the 1960s. Though ELIZA merely simulated conversation by mimicking a Rogerian psychotherapist, Weizenbaum was deeply disturbed by how readily people ascribed genuine intelligence and understanding to what he knew was simply pattern-matching (Weizenbaum, 1966). This experience led to his seminal 1976 work Computer Power and Human Reason, where he argued that AI posed not just technical challenges but fundamental philosophical ones about the nature of human thought and creativity.
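To see why Weizenbaum found those attributions so unsettling, it helps to notice how little machinery an ELIZA-style exchange actually requires. The Python sketch below is a loose illustration, not Weizenbaum's original implementation (which used keyword ranking and reassembly rules and was written in MAD-SLIP); the rules and templates here are invented for demonstration.

```python
import random
import re

# A minimal ELIZA-style responder: each rule pairs a regular expression
# with canned reply templates. The program reflects the user's own words
# back without representing their meaning in any way.
RULES = [
    (re.compile(r"\bi feel (.+)", re.IGNORECASE),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bi am (.+)", re.IGNORECASE),
     ["Why do you say you are {0}?"]),
    (re.compile(r"\bmy (.+)", re.IGNORECASE),
     ["Tell me more about your {0}."]),
]

def respond(utterance: str) -> str:
    """Return a Rogerian-style reply by pure pattern substitution."""
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            fragment = match.group(1).rstrip(".!?")
            return random.choice(templates).format(fragment)
    return "Please go on."  # default when nothing matches

print(respond("I feel anxious about intelligent machines"))
# -> e.g. "Why do you feel anxious about intelligent machines?"
```

Nothing in such a program models the speaker's situation at all; it simply echoes fragments of input back inside open-ended templates, which is precisely why Weizenbaum was alarmed that users confided in ELIZA as though it understood them.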
These historical patterns of anxiety persist in contemporary AI debates, though often without recognition of their deep intellectual roots. William Blake's poetry reflected early anxieties about mechanization and dehumanization, famously critiquing the "dark Satanic Mills" of England's industrial landscape in the preface to Milton (the stanzas now best known as the hymn "Jerusalem"). While Blake's concerns were primarily spiritual and political rather than technical, his fears of dehumanization resonate with later critiques of industrial labor and, by extension, modern anxieties about AI replacing human creativity and agency. Similarly, when scholars like Kate Crawford and Frank Pasquale warn about algorithmic decision-making replacing human judgment, they're revisiting the same fears that Carlyle and Marx expressed about mechanical processes eroding human agency and moral discretion (Crawford, Atlas of AI, 2021; Pasquale, The Black Box Society, 2015).
From Wiener's cybernetics to Weizenbaum's critique of AI, these anxieties reflect a persistent concern that technology doesn't just change what we do, but who we fundamentally are.
Beyond Western Anxiety: Toward Different Perspectives
What makes these persistent Western anxieties about artificial intelligence particularly striking is that they're not universal. While Silicon Valley tech leaders warn of existential risks and Western philosophers debate whether AI can have "real" consciousness, other cultures approach these questions from remarkably different perspectives. Perhaps nowhere is this contrast more evident than in Japan, where the relationship between humans and artificial beings has evolved along very different philosophical and cultural lines.
This isn't just about Japan's famous embrace of robots in popular culture or its early adoption of automation. The difference runs deeper, to fundamental questions about consciousness, intelligence, and the boundaries between human and non-human. Where Western thought has traditionally insisted on rigid distinctions between human and machine, Japanese cultural and religious traditions have long been more comfortable with fluid boundaries and degrees of consciousness.
These differing cultural frameworks help explain why discussions of AI look so different in Japanese contexts. When Western tech leaders warn about AI systems becoming too human-like, they're expressing anxieties rooted in centuries of Western philosophical and religious thought about human uniqueness. But what happens when we examine artificial intelligence through a cultural lens that never insisted on such rigid human/machine distinctions in the first place?
In Part 2, we'll explore how Japanese philosophical and religious traditions have shaped a different relationship with artificial beings, and what this might teach us about our own inherited assumptions regarding AI. Perhaps by understanding how another culture approaches these questions, we might gain fresh perspective on our own AI anxieties.
The Victorians, living through the age of steam engines and mechanical looms, worried about humans becoming "cogs in the machine." Today, living in the age of computers, we worry about becoming "information processors" or "pattern-matching machines." In both cases, we take our era's dominant technology and fear that understanding ourselves through that lens will somehow make us less human. I wonder what metaphors we'll use for the next technological revolution - and what they'll reveal about our deepest anxieties about human nature.