Weimar notes

Militants of the Communist Spartacist Uprising in Berlin, January 1919

When I finished Frank McDonough’s book The Weimar Years last month I had hoped to write a full review, but time, sickness, and a host of other complications meant I had to be content with a long paragraph in my non-fiction reading year-in-review. Unfortunately, the single-paragraph summation I hold myself to for those posts meant I could only raise a few issues that I had a lot of thoughts about, gesture toward them, and move on.

Here are two—both more or less about terminology or rhetoric—that I’ve been mulling anew under the influence of McDonough’s book and a few unrelated factors.

Right and left

After finishing The Weimar Years and celebrating Christmas, I caught up on some club episodes of The Rest is History. The one I was most excited for was a live show Holland and Sandbrook presented at the Royal Albert Hall last spring. The subject: Wagner.

Holland and Sandbrook begin their discussion by asking the audience to reconsider what they think they know about Wagner—a smart move. But this problem of terminology popped up immediately. You have heard it said, they essentially say, that Wagner is a right-wing figure, but I say unto you… he’s more of a left-winger. In their actual words, “a hippie.” But Wagner had “some right-wing opinions,” namely anti-Semitism. So: a leftie who is right-wing à la carte?

There’s an argument to be made for Wagner’s place on the left, given his role in the 1848 revolutions and his support for the overthrow of the Saxon monarchy as well as his generally bohemian lifestyle. But what precisely makes his anti-Semitism right-wing? And, from a certain kind of chest-thumping American view, supporting the overthrow of monarchies and seeking to create an all-encompassing national artform out of national myth could be spun as right-wing.

This was a great episode—and I especially appreciated Holland’s argument that Wagner’s music is not in itself anti-Semitic—but that left-right business neatly encapsulated much of my problem with this political frame.

Back to McDonough’s book. McDonough uses the language of left and right throughout but also, importantly, makes it clear what each of the dozen or so major German political parties of the 1920s wanted and stood for. Focusing on 1) goals, 2) methods, and 3) how these changed depending on circumstances explains much more, especially when it comes to the elephant in the room: the National Socialists—and yes, the Socialist part absolutely matters—who were themselves starkly divided along several political axes throughout the Weimar period. This is also considerably more helpful than a simple left-right spectrum when one reads of instances in which supposed opposites like the Nazis and the German Communist Party collaborated against the national government.

The terminology of right and left is rooted in a specific historical moment and the specific problems parties in that place and time argued and fought over. Unmoored from those specifics, I find it unserious. It’s a time-honored way to argue about vibes. I avoid it as much as possible when I teach modern history, invoking it only to give the point of view of people within the narrative I’m telling but not as neutral description.

“Democracy” vs this democracy

I didn’t get into the left-right thing in my paragraph on The Weimar Years but I did raise this question. Here’s what I wrote there:

But the epilogue, in which McDonough specifically blames Paul von Hindenburg for the death of “Weimar democracy,” is a bit of a fumble, as it is abundantly clear from McDonough’s own narrative—and even the earlier parts of the epilogue—that the Weimar Constitution had built-in weaknesses that were bound to undermine it. McDonough essentially faults Hindenburg for not believing in democracy hard enough. But if “democracy” in the abstract gave Germany this democracy in concrete, stubborn reality, it deserved to go.

Since finishing McDonough’s book I’ve browsed two new histories of Weimar from German historians: Vertigo: The Rise and Fall of Weimar Germany, by Harald Jähner, and Fateful Hours: The Collapse of the Weimar Republic, by Volker Ullrich. (In the course of writing this I’ve also learned of a history of Weimar by Katja Hoyer, due out this summer.)

All of these (minus Hoyer’s book, for obvious reasons) do a bit of a two-step around the concept of democracy: when the authors write about “democracy,” they sometimes mean the specific constitutional arrangements of the Weimar Republic—who voted, how, under what circumstances, how the Reichstag was constituted, who became chancellor and what authority they had, etc.—and sometimes the concept of Democracy, in the abstract. The defeat and destruction of Weimar democracy is a tragedy for them because it means a defeat of Democracy.

But to paraphrase Burke, abstract democracy is not to be found; it inheres in sensible objects. It is useless to talk about Democracy without talking about the specifics of a given democracy, and a given democracy is only as valuable as its institutions and—one ought to add—the people who are using it. And as I wrote above, Weimar democracy was flawed from its inception because of the specifics of how it was designed and functioned and what options it made possible. Unstable, ineffective, hamstrung both by the outcome of a war it wasn’t responsible for and by diplomatic agreements to which it consented, and—in the hands of feckless and corrupt politicians of all parties including the supposedly egalitarian socialists—unable to represent the people, it was a failure as a democracy long before Hitler seized power.

As I finished reading McDonough’s account and looked through those two other books—one more obviously leftist-oriented but both moaning and lamenting for Democracy throughout—I had a strange realization. The effect of switching from the collapse of actual Weimar democracy to a lament or apologia for Democracy in the abstract is suspiciously similar to “Real socialism has never been tried.” Democracy attracts the same mulish defensiveness as socialism. Both are the object of unwarranted faith. Neither can be blamed when they fail.

A confession

I started The Weimar Years shortly after the Charlie Kirk murder. Political violence openly celebrated by one side of the culture seemed like a good reason to familiarize myself with the broader narrative of Weimar.

That was a mistake, as looking for a usable, “relevant” past almost always is. Unfortunately, historians of Weimar are just as prone to it. In both Jähner’s and Ullrich’s books I did a quick search for “Trump, Donald” in the index and guess what I found? Dumb parallels to the present, mentions of a specific political bugbear that will date their books as badly as a book I have on Mussolini that keeps bringing up George W Bush.

But McDonough doesn’t make this mistake, which is one of the great values of The Weimar Years. Throughout he emphasizes contingency and particularity: that things could have turned out other than how they did, something he makes clear through his detailed political narrative, and that Weimar Germany was a unique time and place offering no easy comparisons to our own. Pretending that Weimar tells us something or gives us insight into our enemies because there was political violence and politicians said mean things about their opponents is glib and misleading—for both the past and the present. The specifics matter.

So I confess to beginning McDonough’s book for the wrong reasons, but I am glad I read it, and glad for the sensibility of his approach, which brought me back to my senses. The closer I looked at Weimar, the less I saw of us, now. Which is as it should be. Not that we can’t learn anything from it, but we won’t until we understand it on its own terms.

More to come

I’m still trying to strengthen my grasp of Weimar. Of the two other books I’ve looked at, I may read Ullrich’s on the basis of his two-volume Hitler biography. He intones the ritual laments for Democracy in the portions I’ve read but his treatment of some of the specific topics and people I looked up struck me as more balanced than Jähner’s, which celebrates the hedonism and decay of the time. I may end up holding out for Hoyer this summer, as her book on the German Empire, Blood and Iron, was exceptionally good.

Spurious, horrible, the worst kind

Earlier today I started reading Payment Deferred, a 1926 crime novel from the early career of CS Forester of Horatio Hornblower fame. A curious passage of description from the first chapter, when a long-lost relative arrives at an uncle’s London home and looks around:

For a moment the conversation flagged, and the boy, still a little shy, had leisure to look about him. These were the only relatives he had on earth, and he would like to make the most of them, although, he confessed to himself, he was not greatly attracted at first sight. The room was frankly hideous. The flowered wallpaper was covered with photographs and with the worst kind of engravings. The spurious marble mantelpiece was littered with horrible vases. Of the two armchairs one was covered with plush, the other with a chintz that blended unhappily with the wallpaper. The other chairs were plain bentwood ones. On a table in the window were dusty aspidistras in vast green china pots. In the armchair opposite him sat his uncle, in a shabby blue suit flagrantly spotted here and there. He was a small man, with sparse reddish hair and a bristling moustache of the same colour.

It continues from there at some length, but two things struck me about this passage:

The first is just how vague it is. The room is “hideous.” In what way? When Forester elaborates, we learn that the decor includes “the worst kind of engravings” and “horrible vases” set on a “spurious” mantel. The latter I take to mean that the marble is fake, but why are the vases “horrible”? Are they cheap? Broken? Out of fashion? Badly made? And just what are “the worst kind of engravings”? The 1920s equivalent of Thomas Kinkade? Cuttings from Victorian newspapers? Bookplates from Fanny Hill?

Forester clearly wants to impart the nephew’s impression of cheap, run-down living, but we get a better sense of his emotional response to the room than of what it actually looks like. Hideous, horrible, the worst kind—these could mean almost anything.

And yet—the second thing that struck me—it works. This should be bad writing, but isn’t. I think this is down to two things:

First, the description strengthens as the paragraph goes on, and it does so by becoming more particular and concrete. Compare the “horrible vases” with the “plain bentwood” chairs, the “dusty aspidistras in vast green china pots,” and the uncle himself. Shabbiness, inelegance, and neglect create a powerful but subtle sense not only of the place but of the character of the people who live there. This is much better.

Second, even in the vague early parts of the description the verbs are strong. In fact, I think they do most of the work in the first several sentences, which is asking a lot of the repeatedly used verb to be, which I’ve written about before. But even in passive voice, “was covered with” and especially “was littered with” convey strong visual information about clutter, disorganization, and, again, neglect, as well as further cues about the uncle and his family.

Every writer has his strengths and weaknesses. I’ve read only one other Forester novel, the excellent The Good Shepherd. This was published almost thirty years after Payment Deferred, but the two books share a strong interiority, not so much bringing us into as forcing us, claustrophobically, into the minds of the characters from page one. I remember no defects whatsoever in The Good Shepherd, so my suspicion is that passages like the above are the mark of his early career. He was only 27 when Payment Deferred was published, and it would be another eleven years before the first Hornblower book appeared.

At any rate, I’m already enjoying it, and seeing evidence of future greatness in early imperfection is always instructive.

Made of words

A strange kerfuffle I recently witnessed on Substack (I still don’t know how Substack chooses what to show me and I suspect I never will):

A Catholic philosopher whom I’ll call Magus recently published a book exploring, as far as I can tell, ways to counter the disenchantment and rationalistic, reductivist worldview of scientific materialism afflicting the modern world. All well and good. This book only came to my attention, though, when Magus published a detailed defense of his work rebutting a review by someone I’ll call Simplicio, a former occultist turned wannabe Chesterton Catholic turned bearded Orthodox firebrand.

Simplicio took issue with one of the book’s later chapters, in which Magus gestures toward the esoteric tradition of hermeticism as a possible model for Christians trying to approach the world through its non-material, eternal valence. In the course of his arguments, Magus used the word magic.

These debates spanned several point-counterpoint essays on Substack and magic was the pole around which all the rest of the furor rotated. Specific points of evidence aside—and this post is not a comment on Magus’s book or Simplicio’s laundry list of nitpicks and criticisms thereof—Simplicio would not let go of the word magic, which he equated with Satanism and devil-worship. Christians are forbidden that and Magus is, therefore, a heretic, a serious word Simplicio was very free with.

Magus countered that this was a straw-man argument and that magic is not a univocal word. It can mean, does mean, and always has meant many more things than Satanism. He invoked specifically the “deep magic” of Aslan, which is, in the same book, placed in opposition to the White Witch’s magic. Simplicio called this evasive—we all know what magic means.

And round and round we went, with Simplicio insisting on a single, narrow, unambiguous meaning of this word and Magus countering hopelessly that not only is Satanism not what he meant, it should have been clear in context that he used magic as a metaphor anyway.

As it happens, Simplicio is the only one of these people I had heard of. I’ve read his previous books and essays with some enjoyment but, the more I’ve read of him, the more I’ve begun to suspect he isn’t very bright. Hence the pseudonym. But I don’t follow or subscribe to either Magus or Simplicio (again, Substack), so discovering this back-and-forth left me rubbernecking at the proverbial car crash.

But the moment that stood out to me in all the sound and fury was a joke Simplicio made at Magus’s expense. When Magus, insisting on clarifying definitions of this notoriously vague word, wrote that “it depends on what one means by magic,” Simplicio called this a “Petersonian rejoinder.” As in, the once sharp but increasingly confused and confusing Jordan Peterson.

Peterson has always, as a Jungian, been prone to wandering into what Mark Twain called the “luminous intellectual fog” of German thinkers. Sadly, this has only become more the case as he’s made interpreting religion more and more of his brand, a task for which Jung has badly equipped him. His equivocation and hair-splitting in answer to questions as simple as “Do you believe in God?” reached the point of self-parody a while ago.

But the problem there is not Peterson’s ever more convoluted and recursive search for fine distinctions. The problem, probably, is somewhere within Peterson himself. What made him so powerful and refreshing a decade ago was his insistence that definitions matter, that words matter, that precision is a crucial guide toward the truth. All of that is still true regardless of where he ended up.

What came to mind when I read Simplicio’s little dig was a scene in A Man for All Seasons. When Sir Thomas More, who has resigned as Lord Chancellor of England, learns that Henry VIII plans to require an oath of loyalty with regard to his marriage to Anne Boleyn, we have this exchange:

More: But what is the wording?

Meg [More’s daughter]: What do the words matter? We know what it will mean.

More: Tell me what the words say. An oath is made of words. It may be possible to take it.

A Man for All Seasons is full of argument of various kinds and qualities, with More’s opponents constantly working to entrap him, catch him in contradictions, or simply embarrass him. Here’s a great sample. The movie is very much about words, and as long as More insists that words tell the truth, precisely and accurately, he is unbeatable.

But he is also exceptional, as the movie makes clear and as reality continues to reflect.

Listening is not reading

Last week on Substack the perennial argument over audiobooks flared up again: does listening to an audiobook count as reading, and is having listened to a book the same as having read it?

I mentioned the pedant in me in my recent post about The Last of the Mohicans. He is never far from the surface but must be kept in check with regard to colonial New York bridge architecture and whatnot. But on this topic I’m happy to let him off the chain.

No, listening is not the same as reading, and if you’ve listened to an audiobook you haven’t read the book.

This opinion probably provoked a kneejerk reaction in at least some of y’all. These arguments get passionate quickly. But here’s my pedantic take on the whole thing: they shouldn’t. Such passion is misplaced for two interrelated reasons.

The first is the basic semantic fact that listening and reading are different words describing different things. Saying “I read War and Peace last month” when I listened to it in my car is simply untrue. This seems pedantic but it’s an important distinction; we have different verbs for these things for a reason.

The second reason has to do with the reality of reading and listening in and of themselves. These are not the same activity. You are doing different things and different things are happening to you. You can get scientific and neurological about it—as my wife, who has a degree in literacy, can and does, having recently led a professional development session based on Proust and the Squid at her school—but common sense proves this, too. I both assign readings to my students and lecture. If there were no difference, I could assign only readings or give only lectures.

Again, this is both a semantic distinction and an immovable truth, the most important fact in the debate. Everything else is epiphenomenal. And yet if you point out that reading and listening are not the same thing, fans of audiobooks will infer from that distinction a snobbish judgment of inferiority or outright condemnation. But that inference—not to mention the defensiveness that arises from it—does not follow.

So why does this debate keep coming up? I think two factors are at play:

First, the valorization of reading. This is the “Fight evil, read books” school of reading, in which reading is treated as virtuous in itself. What used to be the specialist skill of clerks and chroniclers is now a badge of honor and mark of moral rectitude. This is pure self-congratulatory sentimentalism and should be dismissed as such. Reading is important—you’ll find no dispute on that point on this blog—but it does not make anyone good and, in a society of democratized mass education, it doesn’t even make you special.

Second—and I think the real culprit behind the rage—is the Dominion of Content. Our culture is in the grip of the erroneous assumption that all stories, media, and information are undifferentiated and interchangeable. Note how often the word consume comes up in these arguments. This is a giveaway. Failing to differentiate between reading a story yourself and having it read to you reduces writers’ work to free-floating, gnostic content that can be delivered any old way so long as it gives you some kind of picture in your head. In this view, writers don’t write books, they “produce” “content” at one end of a supply chain and at the other the “content” is simply “consumed.”

Combine content culture with a culture that makes proud little warriors out of people who happen to know how to read and you get a popular incentive to consume books without distinguishing how one has consumed them.

Conversely, put reading in its right place as an important but value-neutral skill (so that readers won’t lord it over audiobook listeners) and stop treating art as mere content to be consumed (so that audiobook listeners distinguish what they’re doing from reading) and the difference between reading and listening ceases to be pointlessly inflammatory.

Which is what I’d hope for. There’s nothing wrong with audiobooks. There’s no reason to be defensive about listening to a book and no reason to bridle at what should be a boring factual distinction. I prefer and always will prefer reading—and from a physical book, not a screen—but I have trained myself to follow and enjoy audiobooks, too. I listen to books that are hard to find and to books I’ve read before but want to enjoy in a new way. I have relatives who listen to books to pass the time on morning walks or while working a long nighttime shift in a patrol car. These are all legitimate and enjoyable—but they’re not reading.

To end on a positive note, everyone litigating this on Substack over the last several days made exactly one point I agree with wholeheartedly: listening to a book is better than just about any other activity you could be filling your time with at present. That’s why I’m always thrilled to recommend audiobooks to those relatives and friends I mentioned, why I’m glad Audible exists, and why I’m mad that AI is trying to conquer audiobooks, too.

Circumlocution-using people

Two relevant entries in Dr Johnson’s dictionary from the Internet Archive here

On my commute this morning I listened to a short podcast interview with a historian who has recently published a biography of one of the less appreciated Founding Fathers. I’m being cagey about the details because she came across as a good scholar doing the hard work of revising historical oversights and misrepresentations, and I don’t want this post to be about her. But read the following, her response to a question about this Founder’s views on slavery, and see if you notice what I did:

I would say that [he] is the only one of the leading Founders who actually took that phrase in the Declaration of Independence seriously, that all people are created equal. He understood that line much more as we do today, as opposed to how his contemporaries saw it. So, yes, he was an enslaver, and he inherited the enslaved people he had from his father, and he started to have—really, he never liked it, but he started to have very serious qualms about it in the early 1770s, and then at his soonest opportunity after the passage of the Declaration of Independence he returned to his plantation . . . and he began the process of freeing the people he enslaved. So he first wrote a manumission deed in the spring of 1777 and it conditionally manumitted all of his—the, the people he enslaved. And then in 1781 he freed a few unconditionally and then in 1786 he freed the remainder unconditionally. And then he really became an abolitionist.

There’s the emphatic but tediously predictable revision of the phrase “all men are created equal,” but that’s a post for another time. No, my concern is the now omnipresent phrase enslaved people and several related words and derivatives.

I’m not sure when I first noticed the prevalence of this phrase but I’m certain it originated in academia and became widespread through legacy media. An article I read in Smithsonian a few years ago was riddled with it, and it is now ubiquitous in books and online articles written by the bien pensants. It’s even turning up in my students’ writing, proof of a successful Newspeak campaign.

I’ll speculate below about how and why this originated, but I have two primary complaints about the phrase enslaved people. The first is that using it results in awkward, contorted English. That Smithsonian article got my attention because, in the effort to use enslaved people exclusively in reference to chattel labor in the Carolina low country, the author bent and twisted to accommodate two words where one, which works as both noun and adjective, would have done.

And that’s my second complaint: the phrase enslaved people is unnecessary. English already has a word that means “enslaved person.” That word is slave.

I have seen no mandate or overt push for the use of enslaved person or enslaved people but it is of a piece with other present-day circumlocutions—like “people experiencing homelessness”—meant to emphasize the humanity of certain groups, downplay stereotypes, and not let certain states or behaviors define them.

This is sentimentalism, especially in the case of slavery. Slave is an ugly, unpleasant word. That’s entirely appropriate because slavery was an ugly, unpleasant thing, and it totally defined the existence of slaves. Which raises another potential reason some might use enslaved person—the supposed dehumanizing effect of the word slave. I’d argue the opposite. Slaves are, by definition, human. You cannot enslave animals; that’s what makes treating a person like an animal horrible. That is and always has been the key to the horror of slavery both in reality and as a metaphor. Awkwardly working in people just so we’re clear we mean humans when we talk about slaves is unnecessary. And are we sure we want a gentler way of talking about slavery?

On top of those problems, the word enslaved is also inaccurate. As I’ve kicked this rant around in my head I’ve wanted to argue about connotations: that enslave, as a verb, suggests going from a state of freedom to a state of servitude; it implies a change of status. But arguing about connotations isn’t necessary, because a change of status is not merely implied by the verb enslave; it is what the verb means. Here’s Dr Johnson defining enslave:

To reduce to servitude; to deprive of liberty.

One of the worst aspects of American slavery specifically was its hereditary character—the children of slaves being slaves themselves, automatically. A person who was born into slavery has not been enslaved; he is a slave and always has been. A person who has been enslaved, definitionally, used to be free. This was not true of most American slaves, which makes their condition worse.

The fiction deepens when we refer to a slaveowner as an “enslaver” or talk about “the people he enslaved.” Again, with rare exceptions this is untrue. The Founder who was the subject of the interview above did not capture and force anyone into slavery—he inherited people who were already slaves.

I think that this is where some ideological ulterior motives begin to show. What enslaver implies is that a slaveowner—a word fastidiously avoided, as are all possessive pronouns (notice that the historian in that interview actually stopped herself when she was about to say “his slaves”)—carried out a continuous act of enslaving on people who should have been free, a Derrida-level word game meant to make the slaveowner sound worse and to muddy the waters.

How has this come about? Some of it, the majority of it, is probably just standard tone policing. This is how all right-thinking people recognize each other. But even for those upon whom the philosophical word games are lost, this is part of the postmodern tendency described by Sir Roger Scruton as attempting to use language “not to describe the world as it is, but to cast spells.” Academics would prefer slavery not exist—understandably!—and so the facts of the past must be rewritten, redefined in light of a metaphysic of equality. And so slaveowners didn’t actually own slaves, and slaves weren’t property. We have to jettison those realities—the things that, through all of history, made slavery an object of horror and slaves the object of compassion—and suggest instead that slavery was an elaborate socially constructed fiction enforced by the great modern boogeyman, Power.

I’ve written and rewritten this rant over and over in my head for years. That podcast interview finally gave me a useful point to build on. But I’ve gone on longer than I intended or wanted. Enslaved people is an unnecessary circumlocution, the language not of reality but of the faculty lounge. Avoid it. The truth is simpler, blunter, and more powerful.

Hiss boom bah

Several weeks ago I wrote about the dangers of mismatching verbs with the action they’re meant to describe, like the needle of a syringe “digging” into an arm or a rocket-propelled grenade “poking” through the door of a Humvee. This danger is especially pronounced with dialogue tags.

Yesterday I started reading a new novel about a British tank crew in Normandy during World War II. It’s already very good—I hope to have more to say about it here at the end of the year—but this morning I read the following, the response of the tank commander to his crew’s nervous chatter as they prepare to attack a German position:

“Pipe down,” James hissed. “Driver, advance.”

It’s not too pedantic to point out that the phrase “Pipe down,” with its plosives and open-mouthed vowel sounds, is physically impossible to “hiss.” 

What the author is trying to capture here is a tone: the terse, tense order of a commander in a dangerous situation. James is just as nervy as his men. But the strongly onomatopoeic hiss suggests a sound other than what we, in our minds, have already heard him say. Hiss might have worked for “Shut up” or “Hush” or “Shhh!” but not this.

The author might have considered a verb that would have more closely matched the dialogue while still conveying the tone he wanted. Bark is the classic example—as in “barking orders”—but is also too close to a cliché to recommend itself. It also suggests shouting, which James is manifestly not doing. It hasn’t reached that point yet.

Elmore Leonard offers the simplest way out of this conundrum. Among the items in his personal decalogue of writing advice is:

3. Never use a verb other than “said” to carry dialogue.

I agree with this rule probably 98% of the time, because it works. Leonard always preferred to convey tone through what was said rather than describing, secondhand, how it was said. When a writer does this deliberately, it can help make his dialogue better. Relying solely on said removes a potential crutch that can lead to bad writing and gradually renders the dialogue tags invisible, concentrating the reader’s attention on the dialogue itself.

Some writers choose to drop dialogue tags entirely. I admire that kind of artistic constraint but think that’s going too far. Removing the tags means relying on description and stage directions to indicate the speaker in any conversation involving more than two people. Even a writer who is good at this, like Craig Johnson, who uses no dialogue tags in his Longmire mysteries, eventually strains for ways to indicate the speaker. He said is simple and almost invisible, and doesn’t break up the rhythm of the talk itself.

The irony is that said would have worked perfectly well in the above example. “Pipe down,” in the context in which it’s said and coming from the character who says it, conveys the right tone all by itself.

***

Looking forward to more of this novel. I’m getting new tires this afternoon, so I should have plenty of time with it. In the meantime, I’ve decided I should resurrect my old series of scholastic commentaries on Leonard’s rules. The last post I wrote concerned regional dialect. I think the next should concern dialogue tags—and adverbs, the subject of rule #4. For the complete list of Leonard’s rules, see this post from the early days of the blog, in which I compare his with similar rules from Orwell and CS Lewis.

What’s missing from modern sports

My late granddad—who knew how to have fun—and his brother Summie at the Alabama Polytechnic Institute, now Auburn, September 1945. From the Auburn University Libraries.

One of my favorite discoveries since dipping my toe into Substack is Ted Gioia, a jazz critic who writes frequently and with great insight on a number of cultural topics I care about. I had planned to write about his “worst writing advice in the world,” and may yet, but over the weekend he shared a post that surprisingly helped give form to an intuition I’ve felt for a long time.

I say “surprisingly” because, as far as I can recall, over the last six years and 600+ posts on this blog, I’ve never written about sports. Here goes.

In a post called “I Say Forbidden Things About Sports,” Gioia tackles a host of problems with the culture of sports generally and college sports specifically, among them corrupt recruiting practices, the wildly out-of-whack priorities of coach pay, the physical devastation meted out to ever-growing numbers of young athletes, the sociopathic lust to win, and the creep of the corporate profit-maximizing motive into the world of the university—a phenomenon not limited to the gridiron. All of these are perversions of what sports are supposed to be about.

No argument there.

The one place where I think Gioia misses something is the single place in the essay where he is most dogmatic. After cataloging some of the failures of college sports, Gioia presents a list he calls the “Six Intrinsic Benefits of Sports.”

Again, no objections. This is an excellent list. I have three kids who just started youth basketball and it’s already been a fantastic opportunity to teach them all six of these things, just as my parents taught me through baseball.

But when Gioia writes “End of story—there are no others,” I have to point out one thing missing from the list, the one that I think provides the basis for all the others: fun.

Remember when sports were fun? The language does. The words play and game, the language of the schoolyard, linger vestigially, suggesting the former place of sports in our culture. Sports were not always so serious. Even the word sport and its derivatives are suggestive, not only as a noun (sportsman, he’s a sport) but as an adjective (a sporting chance, sport fishing) and verb (sporting a new haircut, children sporting in a field of flowers). Take a look at the history of the word sometime, and at the many, many ways it’s used now. If you wanted to get high-falutin’ about playing games, you ventured, of necessity, into Greek—athlete, athletics, athleticism.

Gioia is absolutely right about those six benefits, but I’m not sure those benefits are why people play sports—or not why they used to, anyway. Kids don’t play basketball to bond, they play basketball and bond. Kids don’t play baseball to learn restraint and how to follow rules, they learn restraint and how to follow rules in order to play. They don’t lose at tennis to learn to accept defeat gracefully, they accept defeat gracefully so that they can play again and, just maybe, win. And they want to play because it’s fun.

Ed Poss (1927-2017)

The gratuitousness of sports and games, their for-its-own-sake quality, is important, I think. And once that goes, the other knock-on benefits—teamwork, sportsmanship, hard work, grace in defeat—will not last.

Look around. What most strikes me about modern sports is how not-fun it all is. The vitriolic demands to fire coaches, the wrath at defeat (sharply parodied here), the punishing training, the increasingly obscene smacktalk, the psych-ward obsessiveness of sports commentary, the gleeful Schadenfreude when a rival loses, perhaps above all the gambling—none of it is fun. I watch the ways in which people I know participate in sports, either as fans or players, and wonder, Are you enjoying this?

And the not-fun of college and professional sports is oozing downward all the time. Read this essay by Tim Carney for an eye-opening look at this trend through his experience with his son’s travel baseball team. The task for those who want to enjoy sports despite modern sports culture and who want our kids to enjoy sports too is simply to keep it fun. That’s my goal, at least.

There’s a lot more to unpack here, I’m sure. Visit Gioia’s Substack, subscribe, and read the whole post. It’s worthwhile food for thought. When I first read it Saturday I went to the comments to see if anyone had suggested fun as a possible seventh—and most important—intrinsic benefit for Gioia’s list. There was a brisk back-and-forth going on there, but not a mention of fun. A telling omission. I’m glad to say I checked again this afternoon and at least three other people have raised the point I’m making here.

Glad to know I’m not alone. Maybe we could start a team.

For whom?

Inklings James Dundas-Grant, Colin Hardie, Dr Robert Havard, CS Lewis, and Peter Havard on a walking tour, c. 1955

The dangers posed by adverbs in writing fiction—awkwardness, overreliance—are well known. A less obvious problem with adverbs in non-fiction arises when they offer accidental one-word commentary where the author is aiming for dispassionate, nuanced, unbiased narrative. Two examples from very, very good books I’ve read recently:

First, from a book about Lewis, Tolkien, and the Inklings:

Both men enjoyed clubs, but Tolkien especially relished being a part of male-only circles with clever names. It should be pointed out that the view held by Tolkien (and by the vast majority of British culture at this time) was that true friendship was only possible between members of the same gender. For Tolkien and Lewis, this was partially shaped by their generation’s intimate experience with other men in the trenches of war. There were women writers who the Inklings much admired, like Dorothy Sayers and Ruth Pitter, who would very much have been at home with the Inklings. Sadly, women were never part of their official meetings.

Second, from a case study in a book by a religious historian about the theological importance of studying the past:

It is also important to understand the historically complex relationship between various churches and slavery in the late eighteenth- and early nineteenth-century Atlantic world. For instance, Mary Prince had joined the Moravian Church in Antigua. The Moravian missions in Antigua (and elsewhere) did keep slaves, but, paradoxically, the Moravians also ministered to slaves, including to Mary Prince.

To which one might ask: Sadly for whom? Paradoxically for whom?

Not to the Inklings. Not to the Moravians. Sadly here means “sadly to a modern person who expects groups of friends to look like the stock photos on college recruiting pamphlets.” Paradoxically here means “paradoxically to a modern person who has not really thought about how complicated and tangled up the relationships and affections of a world suffused with slavery could be, and were.” Or perhaps they just haven’t read Philemon.

The first passage invites us to imagine some hypothetical world in which the Inklings’ meetings would have been improved by being coed. The second passage actually undermines what it has already said about the complexity of religious groups’ approaches to Caribbean slavery, and suggests as well that those who owned slaves cannot, would not, or should not have ministered to them—which is obviously untrue.

It’s interesting and revealing to me that, in both examples, the adverbs are interjected or parenthetical. They are intrusions of the authors’ own time and—possibly but not necessarily—personal perspectives into a past that they have otherwise done an excellent job of describing charitably, with good attention to context and the cultural differences between now and then. The one begins, for example, by pointing out common cultural assumptions and shared historical experiences among the Inklings; the other nests the story of Mary Prince among others equally complex—of mixed-race abolitionist slaveowners, for example.

Perhaps sadly and paradoxically should be read as a hesitation or lack of confidence. After all, both authors are broaching potentially contentious topics in these passages. The Inklings example especially reads, to me, like something an editor might have insisted the author address. But the result, for the reader paying attention to such things, reads like a slip or a stumble.

Again, both of these come from excellent books, which is why I haven’t identified their titles or authors. But they also offer good examples of why—beyond the usual Strunk & White reasons—you should guard your adverbs closely. Maybe stop and ask For whom? of them more often.

The lightning-bug and the lightning

A recent episode of 372 Pages in which Mike and Conor continue their read through the interminable Tek Kill, the eighth book in a sci-fi detective noir series by William Shatner and ghostwriter Ron Goulart, spotlighted this odd passage:

A tiny needle came jabbing out. It dug into his flesh and delivered a shot of mood-altering drug into his system.

One could point out a number of awkward things in these two sentences, but one of the hosts—I think it was Mike—noted what I did when I heard this: hypodermic needles don’t really dig, do they? At least, one really hopes not.

There’s something off about this description. The verb doesn’t align with what the reader is invited to imagine. Which brought to mind Black Hawk Down.

I last read Mark Bowden’s Black Hawk Down in high school, before the movie came out. I’ve been meaning to reread it for decades now. It’s a brilliant piece of journalism and vividly written, so I don’t want the following to be construed as criticism, but read these short excerpts and see if you notice something that bothered me even as a high school senior when I read it in 2001:

Two of the three men blown out the back were severely injured. One, Delta Master Sergeant Tim “Griz” Martin, had absorbed the brunt of the blast. The [rocket propelled] grenade had poked a football-sized hole right through the skin of the Humvee, blew on through the sandbags, through Martin, and penetrated the ammo can. (p. 115)

Specialist Spalding was still behind the passenger door in the first truck with his rifle out the window, turned in the seat so he could line up his shots, when he was startled by a flash of light down by his legs. It looked like a laser beam shot through the door and up into his right leg. A bullet had pierced the steel of the door and the window, which was rolled down, and had poked itself and fragments of glass and steel straight up his leg from just above his knee all the way up to his hip. He had been stabbed by the shaft of light that poked through the door. He squealed. (p. 125)

Yurek ran across the road to the car to link up with DiTomasso. He passed the alley and saw the downed helicopter to his right. Just as he arrived, the Volkswagen began rocking from the impact of heavy rounds, thunk thunk thunk thunk. Whatever this weapon was, its bullets were poking right through the car. Yurek and the others all hit the ground. He couldn't tell where the shooting was coming from. (p. 168)

The verb poke doesn’t belong in any of these descriptions.

First, poke is just a funny word. You don’t have to subscribe to the whole cellar door theory of sound to realize that. In these intense descriptions of combat, maiming, and death, poke jars on the ear.

Further, poke suggests a small, relatively gentle action. It doesn’t fit what Bowden describes here. An RPG powerful enough to punch “a football-sized hole” through a Humvee shouldn’t be described as poking, nor should bullet fragments and shrapnel poke themselves—an odd reflexive construction—into a soldier’s body. The misalignment between word and meaning is especially clear in the final example, in which a heavy automatic weapon, loud enough to be heard distinctly over the rest of the fighting, is firing through a vehicle at soldiers taking cover behind it.

Finally, the use of poke sticks out—pokes out?—because the rest of the writing is so good. Notice the other verbs Bowden uses to suggest the violence and danger of combat: blow, penetrate, pierce, stab, rock, etc. These are active, vivid verbs suited to the gravity of the story. Compare the first example above, which describes the effects of an RPG hitting a Humvee and the men inside, with his initial description of what happened a page before:

The grenade had cut straight through the steel skin of the vehicle in front of the gas cap and gone off inside, blowing the three men in back right out to the street. (p. 114)

Cut is simple, direct, precise, and appropriately violent. Poke is not.

I’ve always figured this was just a case of the writer seeking variety in the thesaurus. One can only describe projectiles destroying targets so many ways. Whatever the case, it was a miscalculation—and a pervasive one. A word search in the Amazon text returned 27 uses of poke in the book. There’s a reason I remember it over twenty years after reading it.

If good writing happens in the verbs, precisely choosing the verbs is paramount, even—or perhaps especially—for good writers. Per Mark Twain:

[T]he difference between the almost right word and the right word is really a large matter—’tis the difference between the lightning-bug and the lightning.

Ready to spew

Trigger warning: This post contains untranslated French words and phrases. Appropriately, as you may be able to infer.

After some internationally public tableaux generated predictable—and, I think, entirely intentional—online outrage, I saw some equally predictable condemnations of the outraged for doing the thing all the kindhearted internet bien pensants love to condemn: “spewing hate.”

If a cliché is a “dead metaphor,” spewing hate must be the deadest of them all. But where most clichés are merely overused word pictures or verbal shortcuts, this one is also dangerous. J’accuse!

Spew is a very old word, almost unchanged in pronunciation from Old English spíwan, and it has retained both literal and figurative senses for its entire history. But what’s striking to me about spew is that as vomit or throw up or even puke have become far more commonly used for its literal meaning, its metaphorical use has been whittled down to almost the single expression spew hate. It’s rare now to see spew without hate tagging along behind it.

This is a relatively recent development. Here’s Google’s Ngram viewer for various versions of the phrase:

This particular combination of words originated in the 20th century but has taken off since 2000, especially in its most common form, spewing hate.
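
For anyone who wants to poke at the numbers themselves, here is a minimal sketch of how one might pull the same data the Ngram Viewer charts. It assumes the viewer’s unofficial JSON endpoint at books.google.com/ngrams/json and its query parameters (content, year_start, year_end, corpus, smoothing), which are undocumented and could change; the phrases and year range are simply the ones discussed above.

```python
# A rough sketch, not a supported API: fetch relative frequencies for a few
# variants of "spew hate" from what appears to be the Ngram Viewer's
# unofficial JSON endpoint. Endpoint, parameters, and response shape are
# assumptions based on the public viewer's URL format and may change.
import requests

PHRASES = ["spew hate", "spews hate", "spewing hate", "spewed hate"]

resp = requests.get(
    "https://books.google.com/ngrams/json",
    params={
        "content": ",".join(PHRASES),  # comma-separated queries
        "year_start": 1900,
        "year_end": 2019,
        "corpus": "en-2019",           # assumed corpus identifier
        "smoothing": 3,
    },
    timeout=30,
)
resp.raise_for_status()

# Each result (assumed) carries an "ngram" label and a "timeseries" of yearly
# relative frequencies running from year_start to year_end.
for entry in resp.json():
    series = entry["timeseries"]
    peak_index = max(range(len(series)), key=series.__getitem__)
    print(f'{entry["ngram"]}: peak around {1900 + peak_index}, '
          f'max frequency {max(series):.3g}')
```

If the endpoint has moved or changed shape, the same comparison can be run by hand in the viewer itself; the sketch is only a convenience.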

This jibes with my observations. I first noticed this phrase during college, when it became the de rigueur description of Mel Gibson’s drunken rant following his 2006 DUI arrest. (The unbroken climb in frequency for spewing hate in the chart above begins in 2005.) Given Gibson’s state of intoxication and what he had to say during his arrest, this was an almost accurate description.

But then I noticed that the phrase wouldn’t go away. To my increasing annoyance, within a few years the advocate of every bad opinion and every person caught saying something mildly rude on camera would inevitably be described as “spewing hate”—regardless of whether they could be described as “spewing” or whether what they had said was hateful. As Orwell and CS Lewis observed, words that get stuck within easy reach of popular use soon become yet more synonyms for something one either does or doesn’t like. They become clichés.

And this cliché isn’t just lazy, unimaginative, or gauche. Given the political and cultural valence it usually has, spewing hate also functions as a thought killer. This is where the metaphorical image does its nastiest work. Someone spewing hate is not communicating; they’re just vomiting, and what they have to say is vomit. It needs no consideration or engagement, just a mop and a man to hustle the sick person out the door.

This makes spewing hate a handy phrase for shutting down debate and preventing argument. And a cliché being a cliché, it is, of course, overused.

Its overuse makes it especially dangerous, for two reasons. First, it prevents legitimate argument. With regard to the events that prompted this post, lots of people have legitimate concerns and complaints, and describing them simply as “spewing hate” is an imperious culture war dismissal. Leave us, hateful paysan. Second—and more insidious—any open-minded person who sees through this cliché, who investigates someone accused of “spewing hate” and finds them a reasonable person offering measured argument over legitimate concerns, will be more open to people who actually are in the hate business. It’s not only annoying and thought-killing, it’s self-defeating.

As always with clichés, avoid this one. Don’t use it. Don’t share material that does. Make yourself think about your words. And, in this case, just maybe, you’ll be able to consider someone else’s opinion, too.