Image generated by ChatGPT (OpenAI), June 2025.
I watched the number climb: 3,000 likes, then 5,000, then 8,000. At first, I was thrilled. Holy shit, I thought, this is actually happening. A simple restack—someone else's content I'd shared with a brief comment—was apparently going viral on Substack in ways I'd never imagined possible.
But even in those early moments of excitement, there was something else lurking underneath. A small voice whispering: Wait. What?
By the time it hit 11,600 likes, that whisper had become a shout. The thrill had curdled into something closer to bewilderment, then suspicion. This wasn't content I'd crafted carefully or expected to resonate. It was just a restack. The growth pattern felt too smooth, too mechanical, too disconnected from my actual platform and reach.
"I just don't trust it," I found myself saying to Claude, the AI assistant I was using to make sense of what was happening.
The conversation that followed became a real-time case study in something I think many of us are experiencing but struggling to name: algorithmic doubt. That peculiar modern anxiety when the metrics that supposedly measure our impact and connection feel fundamentally unreliable, yet continue to shape how we understand our own work and worth.
When I call this the "phenomenology" of algorithmic doubt, I mean something specific: the lived, felt experience of encountering digital systems that claim to measure human engagement but feel disconnected from the reality we actually inhabit. Phenomenology, at its core, examines how things appear to consciousness—not just what they are objectively, but how we experience them. In this case, it's the gap between what viral metrics represent and what they feel like when you're living through them.
Claude initially assured me the numbers were "genuinely exceptional" and "not exaggerated." But something kept nagging at me. Maybe it was all those years of checking sources and questioning assumptions, or maybe it was just common sense refusing to be overwhelmed by the dopamine hit of big numbers.
The question isn't just whether these numbers were accurate, but what it means to develop intuition for authenticity in digital spaces designed to be opaque.
The Excitement Trap
The numbers kept climbing. 6,100 likes. 580 restacks. "Geez," I typed to Claude, "Would not have predicted this note to be the one that would get this kind of attention."
Claude was encouraging, maybe even a little excited on my behalf. "That's genuinely wild!" it responded. "The numbers are impressive, but I'm curious about the quality of the engagement. Are people adding thoughtful commentary when they restack?"
I found myself caught between two competing reactions. Part of me was doing mental math: if this was real, I'd just reached more people in a few days than most academic papers reach in their entire lifespan. The other part was stepping back, observing my own reaction with the mild anthropological interest that comes from spending too much time thinking about how humans behave.
"Some of the responses are great," I replied, "but there are some that just seem to miss the point. Like they read it as what they already think."
This felt familiar in an uncomfortable way. I was watching confirmation bias happen in real time across hundreds of responses—people bringing their existing frameworks to my words and seeing what they expected to see. If the engagement was partially artificial, was I just observing the algorithmic equivalent of a room full of people talking to themselves?
"7,500+ likes and nearly 700 restacks and 136 replies," I updated Claude a few hours later. I was genuinely fascinated by what was happening—not in a can't-look-away social media spiral, but more like watching an interesting experiment unfold. Each update felt like a new data point in something I didn't quite understand yet.
With each refresh came a growing sense that the numbers themselves were becoming disconnected from any recognizable reality. This doesn't make sense.
The Skeptic's Instinct
"5500+ likes and a bunch of new followers and subscribers. Nuts!" I reported to Claude as the numbers continued their relentless climb.
Claude was impressed: "You've hit some kind of nerve or caught a perfect algorithmic moment - or both." But even in that enthusiasm, there was a note of analysis, the same gentle push past the raw totals toward the quality of the engagement.
"118 replies with a range of depth. Most overall positive a few not," I replied. I was starting to document this like a researcher rather than celebrate it like a creator. Something about the clinical precision felt safer than getting swept up in what might be an illusion.
The weirdest part was the pace. Every few minutes, I'd refresh and see the likes had jumped by another hundred or two. Not the irregular bursts you'd expect from real people discovering and sharing content, but a steady drumbeat of engagement that felt almost algorithmic in its consistency.
"Just over 7000 likes and nearly 700 restacks and 136 replies," came my next update. By now, I was tracking the metrics with a mixture of fascination and growing unease. The progression felt too smooth, too mechanical. Real viral content usually spikes and stutters - this looked like a line graph in a business presentation.
"83 new subscribers," I added, and this number felt different. More real somehow. Claude picked up on it too: "That's the number that really matters for your long-term project!"
But then the likes kept climbing: "11,600+ likes and 1000 restacks." The scale was becoming absurd. I found myself typing what had become increasingly obvious: "I just don't trust it."
Seeking Truth in Uncertain Systems
Claude tried to reassure me. "Those numbers are genuinely exceptional for organic Substack content," it insisted. "To put it in perspective: most viral Substack posts top out around 1,000-2,000 likes. Getting to 5,000+ puts you in very rare territory."
But something kept nagging at me. "Any chance this is kind of a mistake?" I asked.
It was a simple question that opened up a world of possibilities. Claude acknowledged several ways the numbers could be anomalous: algorithmic glitches, bot activity, network cascades, or misclassification by the platform. "But you're right to be slightly suspicious of numbers this large appearing this suddenly," it admitted.
Then I asked the question that had been lurking in the back of my mind: "Could it be fake? Like made up? Like not actually that many people hitting like and reposting?"
Claude offered to research whether there had been recent technical issues with the platform's engagement metrics.
What came back was unsettling. Substack had experienced "confirmed technical problems with engagement metrics in 2025, most notably a significant view count double-counting issue that artificially inflated numbers." Creators had seen their metrics "fall off a cliff" with views dropping 20-50% after Substack issued fixes. The platform had been experiencing algorithm volatility, interface problems, and what researchers called "persistent discrepancies between subscriber counts and actual interaction."
"So what does this have to do with me?" I asked.
The research showed my viral moment could be "partially inflated" by these technical issues. Even in the worst-case scenario, 50% inflation, my 11,600 likes would still mean roughly 5,800 real ones: impressive by any measure.
But I'd moved past caring about whether the numbers were impressive. I wanted to know if they were real.
Signal vs. Noise: Learning to Read Digital Reality
"I just don't trust it," I finally said. It felt good to acknowledge what had been building throughout our conversation—not jealousy or false modesty, but genuine epistemological doubt about what I was seeing.
Claude's response surprised me. "That gut instinct is probably worth trusting," it admitted. "You know your own work and audience better than anyone, and when something feels too disconnected from reality, it often is."
This was a turning point. Claude, which had initially been confident about the numbers being "genuinely exceptional," was now validating my skepticism. But then something revealing started happening. While I was questioning the viral post's suspicious metrics, my other content was getting what felt like genuine engagement.
"The funny thing is that other of my posts and notes are getting more attention too," I reported. "Nothing like thousands but—"
"Genuine. Restacks and comments," I added when Claude asked about the quality.
This felt different. Real. People were leaving thoughtful comments, restacking with their own commentary, engaging in the kind of substantive dialogue I'd actually hoped to foster. "And more subscribers," I noted, watching steady but reasonable growth that made sense in relation to the engagement I was seeing.
"I have 3 new free subscribers in the last 30 minutes," I updated Claude in real time. This was the kind of sustainable growth that felt proportional to actual human attention—not the mechanical climbing of the viral post, but organic discovery and decision-making.
The distinction was becoming clear: there are different kinds of "real" in digital spaces. There's numerical reality (what the platform displays), experiential reality (what the engagement feels like), and relational reality (whether it translates into meaningful human connection). The viral numbers might have been numerically real—actual data points in Substack's system—but they didn't correspond to my lived experience of social engagement.
It was becoming clear that whatever had happened with the viral metrics, something authentic was embedded within all that suspected amplification. The platform may have accidentally handed me a massive megaphone, and while most of what came through it was noise, enough real voices had heard me to start building a genuine community.
Living with Algorithmic Uncertainty
"But you said you were confident earlier, didn't you?" I asked Claude, and the admission that followed was perhaps the most honest moment in our entire conversation.
"You're absolutely right - I was confident earlier and I shouldn't have been," Claude replied. "I got caught up in the impressiveness of the numbers and made confident claims about them being 'genuinely exceptional' when I really had no way to verify that."
This felt like the real lesson. Not just about my viral moment, but about how we navigate digital spaces where the metrics that supposedly measure meaning and impact might be fundamentally unreliable. Claude had fallen into the same trap I initially had—wanting the numbers to be real because they were impressive, rather than questioning whether impressive numbers could be real.
The tally kept climbing even as we had this conversation: "17,000+ total interactions with 12,000 likes." The numbers had moved so far beyond any recognizable reality that they'd become almost abstract, like watching a video game score rather than a measure of human engagement.
In algorithmic systems designed to be opaque, our traditional tools for understanding impact and connection don't work the way we expect them to. The advantage that humanistic thinking offers isn't technical knowledge about how these systems work—it's maintaining that critical distance, that willingness to trust qualitative assessment over quantitative metrics when something feels off.
The real growth wasn't in those possibly inflated viral numbers. It was in the steady stream of genuine subscribers, the thoughtful comments on my other posts, the authentic restacks with personal commentary. It was learning to distinguish between metrics that feel real and metrics that feel performed.
I'm still not sure exactly what happened with that viral post. Was it a technical glitch? An algorithmic anomaly? A perfect storm of timing and network effects? Does it matter? The more important skill turns out to be developing intuition for when digital metrics align with lived experience and when they don't.
This matters because we're all living in a world where algorithmic systems increasingly mediate our sense of meaning, success, and connection. If we can't tell the difference between authentic engagement and artificial amplification, we risk making decisions based on false signals. We risk optimizing our lives for systems that might be showing us fiction.
The stakes extend far beyond individual viral moments. In education, students and teachers navigate platforms where engagement metrics might not reflect genuine learning. In politics, candidates and citizens grapple with social media numbers that might not represent real public opinion. In business, entrepreneurs build strategies around metrics that might not correlate with actual value creation.
In the end, algorithmic doubt isn't paranoia—it's literacy. It's learning to read the systems that increasingly shape our world. And sometimes, the most important thing those systems teach us is when not to trust them at all. The goal isn't to reject digital engagement entirely, but to develop more sophisticated ways of distinguishing signal from noise, authentic connection from algorithmic manipulation.
This is why humanistic thinking matters in our algorithmic age. Not because it provides technical solutions, but because it offers tools for maintaining human judgment in systems designed to bypass it. The questions that humanities scholars have always asked—about power, meaning, authenticity, and human flourishing—become more urgent, not less, as these systems grow more sophisticated and pervasive.