Everyone around me took such texts very seriously. “But... they’re not saying anything!” I would bleat helplessly. No one ever agreed with me, so I kept wondering if I was missing something. Only later did an alternate explanation occur to me: that perhaps the people who agreed with me had all left these sorts of classes for less squishy shores.
That’s what I eventually did too (although you could argue that becoming a statistics major was overcompensating). But it still bothered me that I never found a way to prove that texts like the one above are nonsense. After all, I have to admit that just because I can’t understand a text doesn’t mean it’s meaningless; someone could always insist that I simply haven’t studied that field enough, or that my mental faculties are lacking. And it rankles me to have to essentially leave things at I think you’re wrong, you think I’m wrong; we’ll have to agree to disagree. Because, dammit, I’m right!
I think Richard Dawkins articulated the problem nicely: “No doubt there exist thoughts so profound that most of us will not understand the language in which they are expressed. And no doubt there is also language designed to be unintelligible in order to conceal an absence of honest thought. But how are we to tell the difference?”
Well, I can think of a few approaches. One kind of “test” could operate on the logic that if a field’s experts cannot distinguish between genuine texts in that field and indisputable nonsense, then we must conclude that those texts are equivalent to nonsense. (It’s a little like an inversion of the Turing test, in which the premise is that if we can’t distinguish between a conversation with a computer program and one with a human, then we must conclude that the computer’s intelligence is equivalent to human intelligence.)
That was the logic behind the famous Alan Sokal hoax. Sokal, a physicist at NYU, intentionally wrote a paper consisting of gibberish that mimicked the style and jargon of postmodernism and succeeded in getting it published in a well-regarded pomo journal. However, as much as the hoax delighted me, I’m unwilling to say that what Sokal wrote is indisputably nonsense. He intended it to be nonsense, sure, but someone could always claim that he inadvertently inserted sense into his paper. After all, it’s fairly difficult to write text that has the superficial appearance of meaning without any actual meaning. To be safe, we might need to assume that, as long as a passage is consciously generated, it’s vulnerable to accusations of meaningfulness!
So maybe our faux-pomo writing needs to be unconsciously generated if we’re going to be able to maintain that it’s meaningless. I’m not the first one to come up with this idea: this wonderful “Postmodernism Generator” creates a new essay every time you reload the page, generating text randomly within some parameters of sentence construction and common pomo vocabulary. To my untrained eye, the resulting essays read impressively like the real thing. But still, I’m afraid they might not fool an expert — the non-sequiturs are just a little too glaring.
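For the curious, that kind of generator can be sketched in a few lines: pick a sentence template at random, then fill its slots from pools of jargon. (The vocabulary, templates, and function names below are my own toy inventions, not the actual rules of the Postmodernism Generator, which uses a much richer recursive grammar.)

```python
import random

# Toy pools of pomo-flavored vocabulary (invented examples).
ADJECTIVES = ["hegemonic", "discursive", "performative", "liminal"]
NOUNS = ["signifier", "narrative", "subtext", "paradigm"]
VERBS = ["destabilizes", "interrogates", "recontextualizes", "subverts"]

# Sentence skeletons with named slots.
TEMPLATES = [
    "The {adj1} {noun1} {verb} the {adj2} {noun2}.",
    "In this sense, the {noun1} {verb} its own {adj1} {noun2}.",
]

def generate_sentence(rng=random):
    """Fill a randomly chosen template with randomly chosen vocabulary."""
    return rng.choice(TEMPLATES).format(
        adj1=rng.choice(ADJECTIVES),
        adj2=rng.choice(ADJECTIVES),
        noun1=rng.choice(NOUNS),
        noun2=rng.choice(NOUNS),
        verb=rng.choice(VERBS),
    )

print(generate_sentence())
```

Because the grammar guarantees every sentence is locally well-formed, the output looks fluent word by word; the non-sequiturs only show up between sentences, which is exactly where an expert would catch it.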
If it’s too difficult to generate convincing fake texts by starting with meaninglessness, what if we instead started with a real text and then removed some of its meaning? Here’s one way this could work: take a genuine passage of postmodern writing, replace some of its key words with their negations or antonyms, and then ask experts to compare the original and the altered version and identify which one is the real text.
I like this approach a lot, but it too is not without its problems. How many words would you need to replace? How could you be sure that your negations weren’t canceling each other out? And if your test subjects are pomo experts, how could you be sure they wouldn’t recognize the passage and be able to tell which was the “right” one just from memory?
None of these tests may be airtight, but I really like the theory behind this approach, and I haven’t given up yet! Let me know of any ideas you have for cleanly separating sense from nonsense, and help me retroactively wipe that smug smirk off of Mr. Black Turtleneck’s face.
NEXT: An entirely different approach to pomo-debunking that I’ve been working on, using information theory. Stay tuned...