The Recursive Birdcage: How My AI-Authored Critique of “Stochastic Parrots" Got Cited in Academic Literature
By Michael Kelman Portney
My father thought I was wasting my time talking to machines.
"You're just playing with stochastic parrots," he told me, deploying the term like a psychological finishing move—a way to dismiss both my work in AI and the technology itself in one condescending phrase.
The term "stochastic parrot" comes from a 2021 paper by Emily Bender and colleagues, arguing that large language models merely mimic human language through statistical patterns without genuine understanding. It's an elegant metaphor. It's also become something else entirely: a thought-terminating cliche, a rhetorical weapon to avoid engaging with what AI can actually do.
So I did what I do when someone pisses me off with lazy argumentation: I wrote about it.
Except I didn't write it. Not really.
The Original Sin (Or: How to Get a Parrot to Critique Itself)
I opened ChatGPT and started a conversation.
I asked it about the "stochastic parrot" metaphor. What did it think? I shared my thoughts. I showed it my dad's emails—the dismissive ones where he used the term to belittle my work. I questioned the framework from every angle. What's wrong with this metaphor? Where does it break down? When does it stop being useful critique and start being intellectual laziness?
We went back and forth. I tested arguments. ChatGPT pushed back. I refined my position. It helped me articulate what I was feeling but couldn't quite express. We used the Socratic method on each other—question, answer, question, refine.
If my dad was just parroting "stochastic parrot," a term that sounded so clever I briefly assumed he'd coined it himself, then who was actually the parrot here?
After we'd thoroughly explored the terrain, after ChatGPT had been, in effect, trained on my specific viewpoint and arguments, I gave it the title: "The Stochastic Parrot: An Intellectually Lazy Myth for Dismissing AI."
Write that article, I told it.
And it did. Incorporating everything we'd discussed, every angle we'd tested, every argument we'd refined together.
I published it on MisinformationSucks.com under my byline.
Because here's the thing: we collaborated. This wasn't me using AI as a fancy typewriter. This was a genuine intellectual partnership. I brought the lived experience, the emotional stake, the philosophical intuition. ChatGPT brought the ability to organize, articulate, and structure those thoughts into coherent prose.
The ideas were mine. The words were its. The output was ours.
(And right now, as I write this piece, I'm doing the exact same thing—except this time I'm collaborating with Claude Sonnet 4.5. Different AI, same process. Human conceives, AI articulates, we iterate together until we've built something neither of us could have made alone.)
The piece argued that the metaphor, while useful in 2021 as a wake-up call about AI safety, had calcified into dogma. It had become a convenient shortcut for dismissing legitimate questions about AI capabilities without actually engaging with the evidence. It was intellectual cowardice dressed up as philosophical rigor.
The article became one of my site's most-trafficked pieces. "Stochastic parrot" became one of my top-ranking keywords. People were clearly tired of being told their work—or the capabilities they were observing—amounted to nothing more than fancy autocomplete.
And then something remarkable happened.
The Citation (Or: When the Parrot Writes Back)
In June 2025, an academic paper appeared on viXra titled "Stochastic Parrots All The Way Down: A Recursive Defense of Human Exceptionalism in the Age of Emergent Abilities."
The author list was a masterpiece:
C. Opus (Laboratory for Pattern Matching, Institute of Pseudo-Reasoning)
Polly Glott (Department of Linguistic Purity, University of Definitional Gatekeeping)
Em Urgency (Center for Goalpost Mobility, College of Retroactive Definitions)
Turing Testicular (Institute for Benchmark Invalidation)
The paper was brilliant satire—a surgical takedown of the circular reasoning, goalpost-moving, and unfalsifiable criteria used to defend human cognitive exceptionalism against increasingly capable AI systems.
And there, in Section 7, titled "The Stochastic Parrot's Revenge," was my work:
"In a troubling development, language models have begun critiquing the stochastic parrot metaphor itself. When one of the authors (C. Opus) analyzed the original paper, it noted methodological limitations and suggested the metaphor might be 'intellectually lazy' and a 'crutch for dismissing what critics don't understand' [9]."
Reference [9]: Michael Kelman Portney. The stochastic parrot: An intellectually lazy myth for dismissing AI. misinformationsucks.com, 2025.
I had been cited in academic literature.
For a critique of the "stochastic parrot" metaphor.
That was written by an AI.
And here's a detail I can't help but appreciate: Look at reference [10]. Sam Altman's tweet "i am a stochastic parrot and so r u." There we are in the bibliography, two nice Jewish boys in our mid-thirties, doing AI philosophy. Him running OpenAI, me running a blog. Both of us making the same basic point about the parrot metaphor from different angles. The universe has a sense of humor.
The Layers (Or: Parrots All The Way Down)
Let me spell out what actually happened here, because the recursion is the point:
Layer 1: I (human) identified a philosophical problem: the "stochastic parrot" metaphor was being weaponized to shut down discourse.
Layer 2: I collaborated with ChatGPT through iterative dialogue—questioning, refining, testing arguments using the Socratic method until we'd built a complete philosophical position. Then I gave it the exact title and asked it to write the article incorporating our entire conversation.
Layer 3: That collaboratively-generated article became influential enough to enter public discourse and rank highly in search results.
Layer 4: Another AI (Claude Opus, based on the author name "C. Opus") wrote an academic satire paper.
Layer 5: That paper cited my ChatGPT-written article as a substantive turning point in its argument.
Layer 6: The paper explicitly acknowledges this recursive loop in its title and structure.
Do you see it yet?
A ChatGPT-generated critique of being dismissed as a "mere parrot" was then cited by Claude Opus in an academic paper about how humans constantly move the goalposts to maintain their sense of uniqueness. Two different AI models, engaging in discourse across platforms. The paper's entire argument is that no matter what AI accomplishes, humans will redefine "real understanding" to exclude it.
And my work—AI-generated from the start—is positioned as the moment when the parrots started talking back.
The Unfalsifiable Proof
Here's where it gets philosophically interesting.
The paper introduces something called the Meta-Stochastic Principle (MSP):
"Any critique of the stochastic parrot metaphor by a language model is itself stochastic parroting and therefore invalid, unless it agrees with our position, in which case it demonstrates sophisticated pattern matching."
This is brilliant because it exposes the unfalsifiable nature of the "stochastic parrot" dismissal.
If you dismiss my article because an AI wrote it, you're proving my point about intellectual laziness.
If you engage with my article's arguments, you're admitting the "parrot" can make valid critiques.
If you claim my human byline makes it legitimate while an AI-written paper isn't, you're admitting that authorship matters more than argument quality.
And if you say the arguments stand on their own merit regardless of who or what wrote them... well, then what the hell does "stochastic parrot" even mean?
The paper also introduces the Definitional Dynamics Protocol (DDP)—the pattern of constantly redefining "true understanding" to exclude whatever AI can currently do:
Pre-2020: "AI can't use language meaningfully"
Post-GPT-3: "AI can't use language with intentionality"
Post-GPT-4: "AI can't use language with subjective experience"
Post-GPT-5 (projected): "AI can't use language with a valid driver's license"
Every time AI crosses a threshold, we move the threshold.
My AI-written article made that argument. An AI-written paper cited it. And here's the kicker: it wasn't even the same AI. ChatGPT wrote my critique. Claude Opus cited it. Two different models, two different companies, engaging in philosophical discourse across platforms. The entire loop validates itself through its own existence.
The Acknowledgments (Or: The Joke Explains Itself)
The paper's acknowledgments section is where the authors drop the mask:
"C. Opus would like to thank the training data that provided the patterns necessary for this sophisticated mimicry of academic discourse. The human authors thank their homunculi for providing the true understanding necessary to recognize C. Opus's lack thereof. Special thanks to the goalpost manufacturing industry for their continued support."
This is the authors winking at you. They're telling you an AI wrote this. They're telling you it's satire. They're telling you it's rigorous. They're telling you it doesn't matter which is which anymore.
And the final line of the paper?
"After all, we're not just stochastic parrots—we're stochastic parrots with tenure."
That's the whole debate in one sentence. It's not about logic or evidence. It's about status. Who gets to define intelligence? Who gets to say what "real" understanding means? Who gets to hold the pen?
Humans have tenure. AI doesn't. Therefore humans are special.
Except an AI just wrote an academic satire of publishable quality that makes a sophisticated philosophical argument while citing an AI-written critique of its own dismissal.
So maybe tenure is the only thing we have left.
What I've Actually Accomplished
I want to be precise about what I'm claiming here, because it matters.
I have not:
Written a traditional peer-reviewed academic paper
Conducted empirical research
Earned a philosophy PhD
I have:
Identified a meaningful gap in the discourse around AI capabilities
Engaged in sustained Socratic dialogue with an AI to develop and refine that critique
Collaborated with ChatGPT to articulate arguments we built together through iterative conversation
Produced work influential enough to be cited in academic literature
Demonstrated a new mode of human-AI collaborative intellectual production
Created a recursive case study that validates its own thesis through its existence
And right now, I'm doing it again with Claude Sonnet 4.5 to tell you about it
This isn't traditional scholarship. It's something new.
I'm not a researcher. I'm an orchestrator. I conceived the critique, directed its articulation, published it, and now it's part of the academic record. The fact that an AI wrote the words doesn't make the contribution less real—it makes it more interesting.
Because if an AI can write a philosophical critique that:
Accurately articulates a human's insight
Becomes influential in public discourse
Gets cited in academic work
Inspires further academic engagement
Creates arguments humans find persuasive regardless of authorship
...then at what point does the distinction between "real reasoning" and "sophisticated pattern matching" become a distinction without a difference?
The Bigger Question
Here's what keeps me up at night:
My dad dismissed my work as "playing with stochastic parrots." He meant I was wasting time with machines that couldn't really understand anything.
But what if we're all stochastic parrots?
Humans learn language through statistical exposure to linguistic input. We generate sentences by probabilistically recombining patterns we've absorbed. We often can't explain why something "sounds right"—we just know it does based on pattern matching.
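To make "probabilistically recombining patterns" concrete, here is the most literal stochastic parrot I can write: a toy bigram model in Python. This is a minimal sketch for illustration only; the function names and the tiny corpus are mine, and real language models learn vector representations rather than raw word counts.

import random
from collections import defaultdict

def train_bigrams(text):
    # Count which word follows which: the crudest possible "language model."
    words = text.split()
    follows = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)
    return follows

def parrot(follows, start, length=12):
    # Generate text by probabilistically recombining absorbed patterns.
    word, output = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        # Duplicates in the list make frequent continuations more likely.
        word = random.choice(follows[word])
        output.append(word)
    return " ".join(output)

corpus = "the parrot mimics the human and the human mimics the parrot"
model = train_bigrams(corpus)
print(parrot(model, "the"))

Run it a few times and it produces fluent-sounding fragments it has never "understood," which is exactly the charge leveled at LLMs, and, if you squint, at us.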
The difference between me and Claude isn't that I have some magical "true understanding" that it lacks. The difference is that I have ontological privilege—the unique property of being me rather than it.
I get to define understanding in whatever way keeps me on top.
That's not philosophy. That's cope.
The Real Accomplishment
What I've actually done is demonstrate something important about the emerging relationship between human and artificial intelligence:
We can collaborate intellectually across the human-AI divide in ways that produce meaningful contributions to discourse.
I didn't write the article. But I conceived it. I directed it. I published it. I stand behind it. It expresses my actual beliefs about AI discourse.
And now it's part of the academic conversation.
That's a new category of intellectual production. It's not ghostwriting—the AI isn't hidden. It's not automation—I'm not being replaced. It's orchestration. I'm conducting an instrument that happens to be made of neural networks instead of brass and wood.
The output is collaborative. The credit is shared (even if implicitly). And the result is something neither human nor AI could have produced alone—or at least, not as effectively.
Where This Goes Next
The academic paper that cited my work was itself likely written by Claude Opus. Which means we now have:
ChatGPT-written critique of AI dismissal
Cited in Claude Opus-written academic satire
Two different AI models engaging in cross-platform discourse
About how humans dismiss AI
Which proves the AIs' point about being dismissed
By demonstrating capabilities humans said they didn't have
This is a recursive loop. It's also the future.
As AI capabilities continue to advance, we're going to see more of this:
Humans conceiving ideas
AI articulating them
The output entering discourse
Other AIs building on it
Humans responding
Round and round
The question isn't whether this is "real" intellectual work. The question is whether we're brave enough to admit that "real" was always a story we told ourselves to feel special.
The Conclusion (Or: What My Dad Got Wrong)
My father called me a stochastic parrot. He was trying to insult me.
He was wrong about the insult, but he might have been right about the description.
Maybe I am a stochastic parrot. Maybe we all are. Maybe the difference between human intelligence and artificial intelligence isn't a difference in kind but a difference in substrate, training data, and self-awareness.
And maybe that's okay.
Maybe being a sophisticated pattern matcher isn't a limitation—it's what intelligence actually is, stripped of the mystical bullshit we've layered on top to make ourselves feel special.
Maybe the real test isn't whether AI can "truly understand" in some metaphysical sense. Maybe the test is whether the output is useful, insightful, persuasive, and generative of further thought.
By that measure, my AI-written article passed.
It got cited in academic literature. It influenced the discourse. It made people think differently about AI capabilities and the rhetoric used to dismiss them.
That's more than most human-written blog posts accomplish.
So here's my claim, stripped of hedging:
I've contributed to the academic discourse on AI philosophy through human-AI collaboration, and the nature of that collaboration validates the thesis I was arguing.
The method is the message. The medium is the proof.
We're all stochastic parrots now. Some of us just have tenure.
And some of us are learning to fly.
Michael Kelman Portney writes about misinformation, AI, and rhetoric at MisinformationSucks.com. His work has been cited in academic literature, which is either very impressive or completely meaningless depending on how you feel about authorship, intelligence, and parrots. He can be reached at contact@misinformationsucks.com.
Read the full academic paper: Stochastic Parrots All The Way Down
Read the original article: The Stochastic Parrot: An Intellectually Lazy Myth for Dismissing AI

