Two notes on craft from Poe

Or, perhaps, one note on convincing storytelling or believability from two different but overlapping angles.

Having read last year about Poe and science and a few weeks ago about Poe and American cities, right now I’m reading a pretty straightforward short biography called Edgar Allan Poe: The Fever Called Living, by Paul Collins. Despite its brisk pace and short length (107 pages), the book takes care to track Poe’s development as a craftsman—of poetry first, then fiction and journalism. Two early passages that caught my eye:

First, from a passage on Poe’s famously savage book reviews:

Poe could also lavish praise; indeed, his appreciations feature some of his most careful thinking about craft. In a generally positive review of Robert Bird’s satirical identity-shifting novel Sheppard Lee, Poe explained that a fantastical narrator must speak “as if the author were firmly impressed with the truth, yet astonished with the immensity of the wonders he relates, and for which, professedly, he neither claims nor anticipates credence.” The author must commit to his conceit, in other words—and yet must also perform a sleight of hand, and not overexplain or make the reader conscious of when the story has shifted into the improbable. Poe was, in fact, airing a central tenet of his own fiction: “The attention of the author, who does not depend upon explaining away his incredibilities, is directed to giving them the character and the luminousness of truth, and thus are brought about, unwittingly, some of the most vivid creations of human intellect. The reader, too, readily perceives and falls in with the writer’s humor, and suffers himself to be borne on thereby.”

Second, on one of the short stories that marks Poe’s maturity as a writer:

“Ligeia” returns to two of Poe’s signature themes—liminal states of life and death, and the fluidity of identity—and continues a brilliant use of gothic settings that were curiously old-fashioned even by 1838. Yet Poe does not jest with or even acknowledge these as fictional conventions . . . Instead, “Ligeia” was Poe’s first story to absolutely sustain the voice of the narrator and a belief in the conceit. He never breaks character—not to slip in an ostentatious scholarly joke, not for a sly nudge to the reader, not for grotesque description for its own sake. This disciplined internal logic would become a hallmark of Poe’s craft, and the defining characteristic of the stories that we still read today.

This latter is in contrast to some of Poe’s early stories, which were stylistically accomplished but inconsistent, narrated by nonentities or full of sly asides, wink-wink-nudge-nudge allusions, or showoffy jokes. They do not, in Collins’s words, commit fully to their conceits, and their narrators do not sustain the fevered, convincing voice Poe describes in the first passage because they step back from the dream they’ve created in the mind of the reader to gesture, comment on, or joke about it. The result is inconsistency and a lack of believability.

Consider the intensity of the Poe narrator par excellence, the anonymous narrator of “The Tell-Tale Heart,” or even a more sane, ordinary character like Arthur Gordon Pym. Both describe outlandish, shocking events and horrible violence with a matter-of-factness that makes them instantly convincing, and Poe, master of tone and pacing, does not pull away or relax his narrators’ hold on the reader. Now compare these to any of the recent Marvel movies—an extreme and probably unfair comparison, but I’m sticking with it. Jokey, unserious, pandering, self-aware and self-deprecating, their drama and emotion diluted by a steady drip of flippancy, their stories are weak as a result.

In sum: in writing a story, commit totally to selling what’s happening as true, and don’t blink or flinch—even once.

To paraphrase Chesterton, who was himself well familiar with Poe, fiction is a game of chicken which no man of honor should decline.

Looking for the big W

Jonathan Winters as Lennie Pike in It’s a Mad, Mad, Mad, Mad World (1963)

Over the weekend I finished watching It’s a Mad, Mad, Mad, Mad World with my kids. First time for them, zillionth time for me. The movie has grown up and aged with me the way a lot of other comedies haven’t, and part of the reason has to be the density of its jokes—slapstick, sight gags, visual puns (Jimmy Durante literally kicking the bucket), comedy of manners, observational humor, wordplay, banter, innuendo, shock, celebrity cameos, pop culture allusions, over-the-top situational comedy… every trick in the book. My kids were delighted by all the slapstick, especially Mickey Rooney and Buddy Hackett’s airplane antics and every time Ethel Merman got knocked upside down.

But the older I get the more I appreciate the film’s generous leavening of pure irony. The film is shot through with it from the start, the most fundamental irony being the situation itself—a small group of people witness an accident and, by doing the right thing and stopping to help, become privy to a secret (there’s $350,000 buried under “a big W” in Santa Rosita State Park) that sets them all at each other’s throats and precipitates the entire manic, frantic, madcap race to find stolen money. It’s as if the Good Samaritan stopped to help the victim of bandits and ended up taking off to Jericho to find Achan’s buried treasure.

This irony is neatly bookended by one character: Dorothy Provine’s Emmaline Finch. Emmaline firmly opposes going after the money from the beginning and spends most of the movie fed up with her feckless, incompetent husband (Milton Berle), her domineering mother (Ethel Merman), her idiot brother (Dick Shawn), and everyone else she meets and talks to along the way, including the most seemingly decent of the original group, trucker Lennie Pike (Jonathan Winters). She oozes disgust toward all of them and their low, vulgar, unethical, and illegal quest.

Until, that is, the entire bunch arrives at Santa Rosita State Park. At first she refuses even to get out of the (stolen) truck they arrived in, but eventually leaves to find a water fountain to freshen up—and spots “the big W.”

Within minutes, she has told a complete stranger (Spencer Tracy’s Police Captain Culpeper, undercover) what is buried there and has hatched a plan to split the money with him and run away by herself. Emmaline’s standoffishness, it turns out, has always been more about maintaining a certain moral posture against everyone else than about actually doing the right thing. Everyone is a crook when the opportunity presents itself. There is none righteous.

Of course, Emmaline’s plotting is short-lived. Almost immediately, Pike finds one Otto Meyer (Phil Silvers), a schemer who had abandoned him in the desert to go after the money himself, and chases him, intending to settle the score. It’s in the middle of this sub-sub-sub-pursuit that Pike runs through the palm trees that form the big W, and he has his big epiphany.

That’s the irony that got me thinking about all of this: the two people who find the big W—Emmaline, stewing in her own self-righteousness, and Pike, furiously chasing his betrayer—are the ones who aren’t actively looking for it at the time. Sometimes, when you’re looking for something, you have to give up on it to find it.

Some kind of deep spiritual truth? I don’t know. It is first and foremost finely crafted irony. But like all good humor, it resonates with life and existence—that is, it rings true—as does the movie’s larger, climactic irony: all the men who wind up on a fire escape ten stories up, wrestling each other for the suitcase containing the stolen cash, lose what they’re striving so desperately to keep.

On the term "Anglo-Saxon"

Last week, when I took exception to the great Tom Shippey’s arguments for the continued use of the term “Dark Ages” to describe post-Roman or early medieval Europe, I had in mind a counterexample for a follow-up post: “Anglo-Saxon,” a term that tends not to suggest much to the ordinary person and to which very few preconceived notions are attached.

Unless you’re a particular kind of academic.

Briefly, in a technical sense the term Anglo-Saxon is most commonly used three ways:

  • Describing a period, it applies to England from roughly the mid-5th century to the Norman Conquest in the late 11th century.

  • Describing people, it applies to the Germanic peoples who invaded Britain during the “migration period” c. AD 450 and who originated in modern-day Germany, Denmark, and Frisia.

  • As a noun, it is synonymous with Old English, the language spoken in many regional dialects by the people described above.

Other uses, such as for the material culture found at sites like Sutton Hoo or the literature produced by these people, are elaborations on these three basic uses. But Anglo-Saxon as a term for a period in a particular place and the people typical of that period and place has been in common usage for a very long time, right up until today. Just looking at the shelves visible from my desk, I can see the great medieval historian Frank Stenton’s volume for the Oxford History of England, Anglo-Saxon England (1943), Hilda Ellis Davidson’s great study The Sword in Anglo-Saxon England (1962), Kevin Crossley-Holland’s literary anthology The Anglo-Saxon World (1982), John Blair’s Very Short Introductions volume The Anglo-Saxon Age (1984), and Marc Morris’s excellent The Anglo-Saxons (2021). Even the Nature study regarding Anglo-Saxon genetics that I linked to above uses the term to describe the migration, the period, and the cemeteries excavated as part of the study. This is a respectable term with a long history.

There has, recently,* however, been a move to stop using the term “Anglo-Saxon” within the study of the Middle Ages because of some of the ways the term has been used outside the field. I almost said “popularly used” but, again, I’ve found that very few people have any firm associations with the term. A vague, historical sense of Englishness attaches to it sometimes, and a very few might think of the term WASP, about which more below, but that’s about it. Nevertheless, because the term was sometimes used to designate certain subsets of “Nordic” or northern European racial types by 19th century scientific racists or casually used for people of a certain ethnic background (like the much, much, much vaguer and more insulting “white people” today), it is now “problematic.”

You can find all the kinds of arguments for this view that you’d expect in this piece from Smithsonian last year, which is where I first learned that there was any controversy about it. A few points raised in the essay:

  • The Anglo-Saxons didn’t use the term Anglo-Saxon “much.” The authors try to have this both ways, pointing out that they did use it, but mostly in Latin documents like charters (or the Welsh chronicler Asser’s Life of King Alfred, which uses it in the very first sentence) and hoping you don’t realize that if someone uses a specific term of themselves in a second language they are still describing themselves using that term.

  • The Anglo-Saxons more commonly called themselves Englisc or Angelcynn. True, but historians refer to historical peoples using terms they didn’t themselves use all the time. Witness the Egyptians or Greeks. There are even whole civilizations for whom we have had to make up names, like the Minoans. (It’s also worth noting that the cynn in Angelcynn is our word kin, as in kinship, raising the dread specter of blood-relationship that these authors clearly abhor. Naturally they don’t dwell on this.)

  • The “Saxon” part of Anglo-Saxon is inaccurate because it “was not widely used and only for the Saxon groups,” not all the related Germanic peoples who invaded Britain in the 5th century. Flatly false, as any Welsh or Scottish person (or binge-viewer of “Outlander”) could tell you. The Welsh refer to their Angle enemies as “Saxons” in the 7th-century poem Y Gododdin and, to this day, the Welsh and Scots Gaelic words for “foreigner” or “English” are Saesneg and Sassenach. Who’s being ethnocentric now?

  • The term obscures or erases ethnic minorities living in Britain at the time. There are whole libraries’ worth of controversy about the specific example the authors cite, of the presence of some sub-Saharan Africans in Britain during the period in question, but any argument along these lines is specious. Marginal cases cannot define the whole, and the presence of outsiders among a people group doesn’t make terms describing the predominant people or culture inaccurate. This is akin to some arguments I’ve seen that the term “Norse” is inaccurate because Scandinavians occasionally intermarried with the Sami.

  • There are “more accurate” terms available. There are not. All the terms on offer in the essay are actually less precise and more awkward than Anglo-Saxon. And I’m astonished that one proposed alternative is “early medieval English,” since although “Anglo-Saxon” was never a problem when I was in grad school (see note below) I was specifically cautioned away from the term “English” for this period because of its anachronistic connotations.

  • Racists used it. This is what the authors really want to argue—the kind of guilt-by-association cooties talk that somehow gets respect today—and most of their Smithsonian essay is taken up with examples of Bad People using the term. They even use the phrase “dog whistle,” and you know what I think of that. But the authors’ problem with many of the examples they offer is, tellingly, not really with the use of the term itself but with the motives of the people using it. The authors are practicing Bulverism.

Well, I didn’t intend to get into that much detail here, but that essay annoyed me so much when friends sent it my way last summer that it was hard not to.** I could go on, but I’ll conclude with its crowning stupidity, the opening sentence of what the authors clearly believe to be a trumpet blast of a final paragraph: “Historically speaking,” they write, “the name ‘Anglo-Saxon’ has more connection to white hoods than boar-crested helmets.”***

Let us now turn to intelligent people, and the reason I’m returning to contested terminology a week after I mulled over the Dark Ages.

This week on The Rest is History Tom Holland and Dominic Sandbrook presented a wonderful two-part series on Alfred the Great, and among the many topics they touched on was the term Anglo-Saxon. What began as an aside early in the episode, when Holland pointed out that the term could not have been invented as a racist codeword because it was in use in Alfred’s lifetime, turned into a more pointed discussion later on (at approximately 39:45 if you listen here) regarding why there would be any controversy about the term in the first place:

Sandbrook: So, you mentioned earlier on—some people might have found that a bit weird if you don’t follow academic disputes on Twitter—which I advise you not to do—is you mentioned the phrase “Anglo-Saxon” which has now become… incendiary in America. In American academia. People don’t want to call them, they don’t even want to call them the Anglo-Saxons, do they?

Holland: Yeah, so, the word “Anglo-Saxon” has different significations in different countries. So, here it means the Anglo-Saxons. It’s the period—

Sandbrook: Yeah.

Holland: It’s shorthand for the period between the Roman withdrawal from Britain and 1066. It’s been that for a long time. And in France or Germany or the Continent Anglo-Saxon basically means the English-speaking world—

Sandbrook: Well in France it means Margaret Thatcher and McDonalds, doesn’t it? [laughs]

Holland: Exactly. Kind of liberal free-market economics. But there is the use of Anglo-Saxon as, you know, Britain, America, or Australia, New Zealand, and so on, Canada—“the Anglosphere” might be another way of putting it. In America, the word WASPs, White Anglo-Saxon Protestants, um, there’s a sense there that it is used to connote a kind of 19th-century, well, white, Anglo-Saxon, Protestant hegemony. And because that is now seen as something oppressive, therefore there’s a desire to get rid of the very word. It’s seen as providing succor to racists in America. But because America is an imperial country and preponderant, there is an absolute assumption among, I think, too many American academics that their use of a word should have global resonance, and they don’t acknowledge the fact that, firstly, in England “Anglo-Saxon” has the connotation that it does. It does not connote racist supremacy.

Sandbrook: No no no.

Holland: We have the English Defence League, we don’t have the Anglo-Saxon Defence League. And they want to call it “early English.” English is a much more problematic word in the context of Early Medieval History. But the other problem with banning the word Anglo-Saxon is that it ignores the fact that, as we said, that Alfred is using Anglo-Saxon in his charters, and it’s a word that underpins his entire sponsorship of the entire idea of the Angelcynn, the idea of Angles and Saxons being part of a unitary kingdom, a unitary people, that in the long run will give birth to England. And this is looking forward to the future, but it’s also rooted in the past because it’s drawing on Bede’s great work, you know, and he’s writing in Northumbria, the Anglian kingdom of Northumbria, a long time before Alfred. So the word Anglo-Saxon seems to me to be by far the best description of this very complicated period and it seems insane to try to get rid of it. Anyway, that’s my rant.

Sandbrook: No, no, Tom, I couldn’t agree with you more. You’ve never had a better rant on this podcast in this series. As so often, why get rid of—it’s bonkers to get rid of the term that is natural to most people.

Holland: I think there’s a certain, a kind of cultural cringe on the part of too many academics in Britain to truckle to American hegemony. They are—in a way, they need to decolonize themselves, to coin a phrase. They need to stop behaving like colonial subjects, and assuming that what happens in America should automatically determine what happens here.

Sandbrook: I couldn’t agree with you more, Tom.

Me neither.

Anglo-Saxon poses a problem nearly the opposite of Dark Ages—it’s a term not commonly used by ordinary people, allowing it to retain most of its technical precision, but objected to by academics on grounds that only bother academics. These are not good reasons, and the continued American export of American neuroses to other countries and, worse, to the past should not extend to the Anglo-Saxons.

My favorite passage of Mark Twain comes from A Tramp Abroad and is a footnote to the phrase “pretty much”: “‘Pretty much’ may not be elegant English, but it is high time it was. There is no elegant word or phrase which means just what it means.” Anglo-Saxon may not be the preferred term of the elegant in American Humanities departments but it means something specific in a way no other term quite does, and most especially to people outside the university.

Let me conclude by heartily recommending any of the books I mentioned at the top of this post, and by commending to you Part I and Part II of The Rest is History’s Alfred the Great series. It’ll be well worth your time.

Footnotes:

*How recently, I wonder. While I’m sure you could trace objections to Anglo-Saxon further back than the last few years, when I wrote and defended my MA thesis in 2010 neither the two medievalists nor the military historian on my committee ever raised even a question about the term, which I not only used throughout but included in the subtitle to indicate the time, place, and culture I was researching.

**I’ve also been horribly sick all this week, so caveat lector throughout.

***Let me here urge the formulation of a corollary to Godwin’s Law for stupid invocations of the Klan.

The Daughter of Time

The Daughter of Time, the final novel by Josephine Tey (1896-1952), concerns Scotland Yard detective inspector Alan Grant. Having fallen into an open manhole while pursuing a suspect, Grant lies recovering from his injuries in a hospital bed, morosely memorizing the cracks in the ceiling above him, nursing jocular grievances against his two nurses, and longing for something good to read rather than the drivel that friends have provided him.

For lack of anything better to do, he goes through a stack of portraits of historical figures. Grant prides himself on his ability to judge character by “physiognomy,” a gut instinct based on a lifetime of looking at faces, but he is brought up short by the portrait of a man in late medieval clothing, with a sensitive face full of suffering.

A judge? A soldier? A prince? Someone used to great responsibility, and responsible in his authority. Someone too conscientious. A worrier; perhaps a perfectionist. A man at ease in a large design, but anxious over details. A candidate for gastric ulcer. Someone, too, who had suffered ill-health as a child. He had that incommunicable, that indescribable look that childhood suffering leaves behind it, less positive than the look on a cripple’s face, but as inescapable. This the artist had both understood and translated into terms of paint. The slight fullness of the lower eyelid, like a child that has slept too heavily; the texture of the skin; the old-man look in a young face.

The man in the portrait turns out to be King Richard III.

This gives Grant pause. All he knows of Richard III is Shakespeare’s murderous, usurping hunchback, the murderer of the Princes in the Tower, a tyrant risen up against by his own outraged people and justly struck down at Bosworth Field. How could Grant have erred this badly in his instincts and judgment?

The question nags at him. He asks everyone who comes to visit—friends, nurses, the occasional doctor—what they know about Richard III. He gets the same responses: Hunchback, wasn’t he? Stole the throne? And, over and over, Didn’t he kill his nephews, those poor boys in the Tower?

The cold-blooded murder of the Princes is the sticking point for Grant. He seeks evidence for the story in the written record. The one history book available to him in the hospital is an old elementary school textbook kept by one of his nurses, a well-intentioned but half-educated bore. The book contains nothing about Richard beyond what everyone seems to know already. Grant’s sense that something is off deepens. He becomes suspicious. How does everyone know the same rote story about this man? How is everyone so sure of it?

Grant has friends browse London bookshops for biographies and big fat historical surveys and orders specialist titles. He traces Shakespeare’s version of Richard III back to a posthumous book by St Thomas More and digs back further still. More was a child when Richard fell at Bosworth Field; where did he get his information? Marta, the actress friend who first suggested going through historical portraits to pass the time, puts him in touch with Brent Carradine, an unemployed student who does the shoe-leather work in Grant’s investigation—visiting archives, digging through contemporary records, comparing secondary sources with what can be known from the primary sources.

Still supine in his hospital bed, Grant assesses each new item of evidence critically, as a detective, establishing a timeline of events, looking for motive, trying to look beyond hearsay. What was Richard’s relationship with his elder brother, the father of the Princes, like? When were the Princes last seen alive? Where? By whom? What did people say at the time? And if Richard wasn’t responsible for the disappearance of the Princes, who was?

I’m not giving too much away to say that Grant concludes that Richard III was not guilty of the crime. Tey, through Grant, makes a compelling case for his innocence. Who Grant determines is the actual culprit, and why and when he had the Princes killed, is a bit more tenuous, but I’ll leave that to you to decide.

After all, the joy of The Daughter of Time is not the conclusion but the detective work—that is, Grant’s historical research into virtually every assumption behind the popular story of Richard III and every detail of what actually happened. The obsessive quality of the work, of sensing that you’re on the right track, that you’re this close to finding something forgotten or hidden, of getting to know a small set of sources so well that you can mentally play them by feel like the strings on a harp, is vividly conveyed in Grant’s hospital bed investigation. Ideas and theories nag at him until he does something to find out the truth. He can’t sleep. He talks of nothing else. He is so consumed with his investigation that a continuous, driving source of the novel’s suspense is Grant’s helpless, fevered waiting for the arrival of new sources. And when, after following a trail of evidence, he discovers something, makes a connection between two seemingly disparate facts…

I have read no other book that captures so well not just the work but the thrill of really studying the past.

All of which makes The Daughter of Time not just a remarkably exciting mystery—again, about an injured cop who can’t get out of bed—but a model for how historical research works. Like Grant, you may start with a story that interests or entertains you. Like Grant, you should certainly want to know the truth behind it. And, like Grant’s, this desire will lead you further back into the past, through generations of secondary sources—many of them endlessly quoting each other and repeating versions of the same stories—to the primary sources, the raw material. Hopefully, to the truth.

However—

This novel is also a case study in the dangers inherent in trying, definitively, to solve thorny historical questions. Grant demands too much of his primary sources, wanting greater consistency and clearer explanatory power than any primary sources can hope to provide. His critical eye and skepticism toward potentially biased sources turn into outright contempt for those that contradict his thesis and toward past historians who have weighed the same evidence and reached different conclusions. And, in the end, he has far more certainty in his theories than is warranted. What Grant is in danger of becoming—like many an historian before him, both professional and amateur—is a crank.

Lightly paced, deftly plotted, well-written, witty, and continuously engaging from beginning to end, The Daughter of Time is a delight. I don’t want to undersell this aspect of the story; it is one of the best, most enjoyable novels I’ve read this year. That it is also a brilliantly designed introduction to how to study the past more deeply and truthfully and, seemingly by accident, a study of the tensions inherent in investigating and correcting historical myths is a wonderful bonus.

There are locked-room mysteries and closed-circle-of-suspects or “country house” mysteries. Here is a mystery that takes place in a single bed and across four and a half centuries, where the country house is all of England, past and present, and the locked room the historical record. I highly recommend it. This is no ordinary mystery and, fortunately for us, and for Richard III, Grant is no ordinary detective.

Leaders unworthy of their people

Tsar Peter the Great awaits the condemned at the gallows in The Morning of the Streltsy Execution, by Vasily Surikov (1881)

From AN Wilson’s Tolstoy: A Biography, on the spiritual tensions inherent in “being Russian,” as quoted by Alan Jacobs at his newly renamed blog here:

How can it be that the country chosen by God, or by the destiny which moves nations, or by the unseen inevitability of dialectical materialism, should have produced, in each succeeding generation, a political system which made life hell for the majority of inhabitants and which, every so often, threw up tyrants of truly horrifying stature?

Is there any group of leaders anytime, anywhere, as unworthy of the sacrifices of their people as the leaders of Russia?

This realization was a long time coming for me. In the histories of most of the countries I routinely study—Germany, Finland, or, at a greater distance, Britain—the Russians appear as rivals, enemies, invaders, or some combination of all three. But there, in history after history of World War II or Finland’s heroic resistance to Stalin or even the accident at Chernobyl, below the world-historical dimensions of wars and global strategy and ideology, like neglected grains waiting to be gleaned by inference, lie the ordinary Russian people: used, starved, deprived, lied to, placed over and over again in harm’s way, and treated as raw material for daft projects of overnight modernization from Peter the Great with his beard clippers to Stalin with his labor camps and hand-dug canals and collectivized farms. Time and again, the feckless, corrupt, incompetent, and just as often criminal leadership of Russia creates a crisis and, time and again, is saved only through the efforts of anonymous soldiers and civilians.

Putin’s wicked war against Ukraine—and the Russian government’s bungling and mismanagement of everything from logistics and equipment maintenance to a conscription program so dreaded that young men are willing to maim themselves to avoid it—only throws this unworthiness into sharper relief. Here is a regime squandering and perverting the virtues of its people in pursuit of victory in an unjust war.

And yet, somehow, when the time comes and the cause is worthy, these people are willing and able to embrace suffering and sacrifice themselves and do so in appalling numbers.

All of this is an oversimplification, of course (as are the very terms Russia and Russian the way I’m using them), but what I’m describing should be instantly recognizable. And that dogged ability and deep reserve of willingness have to stem from something transcendent, something that renders the relationship of Russians to Russia not cognitive dissonance or Stockholm Syndrome but something so close to the soul as to be beyond human leaders, beyond words, something I might not even be able to grasp. As Wilson puts it later in this passage:

Today [i.e. 1988], we read precisely similar tensions in the utterances and writings of Soviet dissidents, and in particular Alexander Solzhenitsyn, whose hatred of his country’s Government seems almost equally balanced by a fervent patriotism, a tragic knowledge that a Russian can only be himself when he is on his native soil.

That, for what it’s worth, describes any real patriotism worth fighting for. Food for thought.

On the term "Dark Ages"

Tom Shippey, in his recent book Beowulf and the North Before the Vikings, which I’m currently reading and enjoying mightily but have not finished yet:

Modern historians do not like the term “Dark Ages” for the post-Roman centuries. Oxford University Press has even banned its authors from using the phrase, presumably because it seems disrespectful. There are two good reasons for keeping it, however. One is that it’s dark to us. We know very little about the post-Roman period in western Europe: one of the first casualties of the failure of empire was widespread literacy.

The other is that it must have felt pretty dark for many people, as the result of—to quote Professor Ward-Perkins of Oxford’s book The Fall of Rome—“a startling decline in western standards of living during the fifth to seventh centuries.” Many voices will be raised immediately, pointing for instance to the discoveries at Sutton Hoo, and saying, “how can you say such a thing? Look at all that lovely jewelry!” Ward-Perkins’s point, however, is that civilization does not depend on an ability to produce aristocratic luxury items, but on low-cost, high-utility items like pots, tiles, nails, and, of course, coins—all of them familiar in the Roman world but scarce, poor-quality, or non-existent in places like Britain for centuries after.

There is much to both admire and quibble with here, but mark me down at the outset as one of those modern historians who hates the term “Dark Ages.” An old friend once told me about a professor of his at Western Carolina who threatened to dock any student a letter grade for using the term. As far as I’m concerned, that’s a good start.

Nevertheless, Shippey is indisputably correct about these two technical points: there is a clear economic and material downturn during the later centuries of the Roman Empire and the centuries following the Western Empire’s demise; and the period’s dearth of sources, or the simple incompleteness or inherent limitations of our surviving sources (e.g. Gildas, who tantalizes as much as he informs), makes this period dark to us. The latter of these is the stronger argument for using the term.

But again, these are technical points in favor of the term. I think it should also be indisputable that this is only rarely how ordinary people use or understand it. That’s because, in both its origins and its continued common usage, “Dark Ages” is straightforwardly and intentionally pejorative. It is a slur, a fact given away every time the “Dark Ages” are invoked as a byword for everything bad. How often, when a political candidate promises us that his benighted opponent’s policies will “send us back to the Dark Ages,” does that candidate mean “We will return to a period covered by few or no primary sources”? When the devoutly religious are accused of “living in the Dark Ages,” do their attackers mean “You do not produce enough tiles or nails and you use debased or badly minted coins”?

Oxford UP is right—it doesn’t just seem disrespectful, it is.

I admire Shippey for being brazen enough to argue for the continued use of the term (he goes to bat for it at least once in Laughing Shall I Die, one of the best books I read last year), but this is a case where any value the term has for technical precision is cast into impenetrable shadow by its popular usage.

In the meantime, Beowulf and the North Before the Vikings is, so far, an excellent study of Beowulf as a much-neglected historical source. I hope to review it here once I’ve finished it.

The Thing on City of Man Podcast

This year’s Christian Humanist Radio Network Halloween crossover ends today! This year the topic is the filmography of John Carpenter, and I volunteered for an episode of The City of Man covering the only John Carpenter movie I’ve seen—The Thing.

My friend David Grubbs hosts, and he leads Carter Smith-Stepper and me through not only the movie itself but also its source material, the 1938 novella Who Goes There? by John W Campbell Jr, and we discuss how the film adapts and improves upon the original. We also dig further back into the history of Antarctica as a scary place and threat to the existence of humanity, with Edgar Allan Poe’s Narrative of Arthur Gordon Pym of Nantucket and HP Lovecraft’s At the Mountains of Madness, and spend a good while pondering just what it is that makes The Thing so special and so intense: acting, special effects, music, cinematography, and, above all, atmosphere.

The Thing combines a lot of things I love in a genre I don’t usually like, and so I was excited to talk about this movie for this event. Other movies covered on other shows in the CHRN include Halloween, Prince of Darkness, They Live, and The Fog. Definitely check those out—Michial and Danny’s discussion of Halloween actually convinced me to give that movie a try sometime soon.

You can listen to The City of Man on iTunes or any other fine podcasting platform. You can listen to this episode at its very short show notes page at the CHRN website here or—next-day update!—you can listen on the Castos podcast player embedded in this post.

Happy Halloween!

The Man of the Crowd: Edgar Allan Poe and the City

The life of Edgar Allan Poe seems made to be picked apart. Poe tried and failed at so much, crossed paths (and swords) with so many people, told so many different stories about himself and had so many different stories told about him, and wrote so much in so many genres that topical examination not only suggests itself as an approach but can prove unusually fruitful.

Last year I read John Tresch’s new book The Reason for the Darkness of the Night, which reexamines Poe’s life and work in the light of his deep interest in science and his connections to both the scientific establishment and popular perceptions of science in his day. It was a great read, one of my favorites of the year. This year I stumbled across a book I missed when it came out in 2020, The Man of the Crowd: Edgar Allan Poe and the City, by Scott Peeples. I read it with great interest.

The Man of the Crowd is both a short biography of Poe and an analysis of the urban contexts in which he lived almost his entire life and produced all of his work. Peeples divides the book into five chapters, the first four of which each detail one of the cities in which Poe lived longest: Richmond, where he was fostered by the Allan family after the deaths of his parents; Baltimore, where he had family connections and got his first halting start in the publishing business; Philadelphia, where he came into his own, wrote a great deal of his fiction and poetry, and made a name for himself as a critic willing to start literary spats; and New York, his last stable long-term abode, where his wife died and his work and projects began to collapse around him.

The fifth and final chapter, “In Transit,” follows Poe’s last year and a half, a period spent almost entirely on the road between these cities—still writing, still publishing, still unsuccessfully striving to start his own monthly journal and now unsuccessfully courting a series of new brides.

In each chapter, Peeples captures not only the phases of Poe’s life—a complicated enough task, given its wild ups and downs—but the story of each city. Poe lived at a time of runaway urban expansion, of mass immigration and rapid industrialization, and Peeples succinctly charts how these cities had changed by the time Poe arrived and how they were changing while he lived there. Philadelphia, for example, had grown away from the Delaware River as it industrialized, shifting the city’s cultural and political center of gravity inland and outward, to the suburbs. Poe lived in both parts of the city at various times.

The Man of the Crowd balances this kind of sociological history with Poe’s personal and literary lives remarkably well. Peeples never allows his examinations of each city to overwhelm Poe and his family’s story, nor does he lose sight of the landscape in following Poe. This is the best kind of topical or analytical history, in that the big picture and small picture complement each other perfectly.

So, for example, when looking at how often Poe or the Poes moved (over thirty times in his short forty years), we see the interaction of artistic, commercial, and economic considerations with purely personal ones. Poe often moved his family from neighborhood to neighborhood to save on rent, or because they could not pay the rent, or to be nearer the offices of publishers or journals, but he also moved away from city centers to provide Virginia, his consumptive wife, a healthier environment.

It is the effect of the city on Poe’s personal and family life that proves most poignant. Peeples notes that during the years the Poes lived in Philadelphia, when Poe, despite the quality of his work, struggled to hold down a job because of his alcoholic binges, “there were over nine hundred taverns” in the city, “including one [only] a block away” from the Poes’ house. For Poe, crime and disease were not the only hazards of walking across town. Unsurprisingly, he stayed sober longer when living on the outer edges of a city.

Peeples is also alive to the tragic symmetries of Poe’s urban life. Of Poe’s final business trip in 1849, a journey from which he never returned, Peeples writes:

The year before, Poe had tried to die in the city where he was born [Boston]; instead, he died in the city where he had found a career and family. But, in light of his peripatetic life, the location of his death seems less significant than the fact that he died “on the road.” Appropriately, the journey he had begun should have taken him to each of the four cities that shaped his career and where he lived most of that life: leaving Richmond, bound for New York by way of Baltimore and Philadelphia. Like so much of his life, though, this trip didn’t go as planned.

One of the charms of this relatively short study (180 pages not including notes, bibliography, and the like) is the wealth of telling detail Peeples includes. The familiar outlines of Poe’s life story are rounded out and given finer shading by the reminiscences of neighbors, friends, and would-be fiancées. The story of a young boy from one of Poe’s Philadelphia neighborhoods rowing him out to a quiet spot on the Schuylkill to shoot waterfowl was both unexpected and touching, as were details of Poe’s family life as observed by visitors. And, of course, the numerous little things that gave and give each city its unique tone and attitude are well integrated with Poe’s story. By the end you feel you know not only Poe, with all his good qualities as well as his tendency toward pride and self-sabotage, but four major cities as well.

I’ve barely even mentioned Poe’s work or any of The Man of the Crowd’s literary criticism, but that is not the book’s main focus. Peeples mostly avoids deep literary interpretation or speculation about the specific ways a given city or event may have influenced Poe’s work. Mostly. Where he does, he largely cites other scholars, almost as a formality. Was Poe’s later fiction so violent because he grew up in a city with slave auctions? Or is “The Murders in the Rue Morgue” somehow racially coded because Philadelphia, where he wrote and published it, had a large population of free black barbers? These theories seem obviously silly, and while Peeples doesn’t say so, he is refreshingly non-dogmatic and even openly skeptical about this kind of interpretation. What is most interesting is to note what Poe was working on, and where and when, and how the disparate pieces of his work fit together in time and place.

Like The Reason for the Darkness of the Night, this study of Poe is a study of his context, and works as a striking dual character sketch: of urban America in the first half of the 19th century—striving, rumbustious, commercial, confidently opinionated, prone to both grandeur and petty strife, and not a little dingy even in its better quarters—and of Poe himself, with all of the same adjectives applying.

The Man of the Crowd is an absorbing and well-written study of a great writer from an unexpected and informative new angle. If you have any interest in Poe or in the history of the United States during Poe’s lifetime, I heartily recommend it.

Shatner, Dante, and the overview effect

In his recently released memoir, William Shatner recounts the unexpected emotional experience of going to space and seeing Earth:

I thought that going into space would be the ultimate catharsis of that connection I had been looking for between all living things—that being up there would be the next beautiful step to understanding the harmony of the universe. In the film “Contact,” when Jodie Foster’s character goes to space and looks out into the heavens, she lets out an astonished whisper, “They should’ve sent a poet.” I had a different experience, because I discovered that the beauty isn’t out there, it’s down here, with all of us. Leaving that behind made my connection to our tiny planet even more profound.

It was among the strongest feelings of grief I have ever encountered. . . . It filled me with dread. My trip to space was supposed to be a celebration; instead, it felt like a funeral.

Later, he writes, he learned that this profound feeling was his experience of the “overview effect,” something commonly felt by astronauts. As summarized by NPR: “The overview effect is a cognitive and emotional shift in a person's awareness, their consciousness and their identity when they see the Earth from space.” Smallness, delicacy, beauty—the overview effect, per its name, gives perspective to a place too big to comprehend in ordinary life.

As is my wont, I immediately thought of Dante, who describes precisely this effect in Canto 22 of Paradiso. As he flies through the highest reaches of the heavens with Beatrice, she tells him to look down.

My eyes returned through all the seven spheres
and saw this globe in such a way that I
smiled at its scrawny image: . . .
I saw Latona’s daughter radiant,
without the shadow that had made me once
believe that she contained both rare and dense.
And there, Hyperion, I could sustain
the vision of your son, and saw Dione
and Maia as they circled nearby him.
The temperate Jupiter appeared to me
between his father and his son; and I
saw clearly how they vary their positions.
And all the seven heavens showed to me
their magnitudes, their speeds, the distances
of each from each. The little threshing floor
that so incites our savagery was all—
from hills to river mouths—revealed to me
while I wheeled with eternal Gemini.

This is not only Earth but the entire solar system, from moon (“Latona’s daughter”) to Saturn (Jupiter’s father), and Dante—working purely from imagination six hundred years before the advent of space travel—correctly predicts the shrinking and sharpening perspective that a sight of Earth as a tiny blue orb between his feet would impart. All “our savagery” plays out in nothing but a “little threshing floor.”

“Everyone's overview effect is unique to them,” according to NPR, and Shatner’s, sadly, is a formulaic mélange of environmental admonitions and therapeutic bromides:

The contrast between the vicious coldness of space and the warm nurturing of Earth below filled me with overwhelming sadness. Every day, we are confronted with the knowledge of further destruction of Earth at our hands: the extinction of animal species, of flora and fauna . . . things that took five billion years to evolve, and suddenly we will never see them again because of the interference of mankind. . . .

[The overview effect] can change the way we look at the planet but also other things like countries, ethnicities, religions; it can prompt an instant reevaluation of our shared harmony and a shift in focus to all the wonderful things we have in common instead of what makes us different. It reinforced tenfold my own view on the power of our beautiful, mysterious collective human entanglement, and eventually, it returned a feeling of hope to my heart. In this insignificance we share, we have one gift that other species perhaps do not: we are aware—not only of our insignificance, but the grandeur around us that makes us insignificant. That allows us perhaps a chance to rededicate ourselves to our planet, to each other, to life and love all around us. If we seize that chance.

Dante, by contrast, has no call to action, no language of collectives or harmony or nurturing or “human entanglement” or false humility about “our planet.” He offers pure, unflinching perspective. Confronted with the Earth in all its smallness, Dante

smiled at its scrawny image: I approve
that judgment as the best, which holds this earth
to be the least; and he whose thoughts are set
elsewhere, can truly be called virtuous.

Real hope begins with a properly oriented overview effect, one that takes in not only physical, planetary scale but eternal perspective, so that even the things Shatner both laments and praises will be seen in their true smallness.

You can read a longer excerpt from Shatner’s Boldly Go at Variety here. NPR talked to him and got more disappointing soundbites, with outside commentary by the man who coined the term “overview effect,” all of which you can read here. The translation of Paradiso XXII is that of Allen Mandelbaum; you can read the whole thing at Columbia’s Digital Dante.

Athens and Sparta... Georgia

The Temple of Hephaestus and the Athenian acropolis c. 1870

Maybe it’s my background in British history, or just growing up in northeast Georgia, but I love placenames and the layers and layers of history you can discern as you dig through them.

The Georgia connection is important. Long ago, I noticed that not only did my home state have an Athens, the city where I was born and where my family has deep roots, but a Sparta, too. And a Rome. And a Smyrna. And a Cairo.

When I began teaching US History almost ten years ago and regularly explaining the Founding generation’s love, admiration, and emulation of the classical world to students, I remembered these observations and connected them to things I had learned about other states since then—that Cincinnati, Ohio is named after a heroic dictator from the early days of the Roman Republic (and, implicitly, George Washington), that New York has even more Greek and Roman placenames, and so forth. And I developed a pet theory I would occasionally expound to students.

Give someone a lot of spare time and grant money, I thought, and the ability to map the locations and dates of founding of American cities with classical placenames, and I bet they’d cluster noticeably along the frontier of the Early Republic, roughly from the Washington to the Jackson administrations.

And, lo and behold, this week I came across a piece from Antigone, an online classical journal, entitled “Classical Place-names and the American Frontier.” This essay concerns upstate New York specifically, where the author notes 130 classical placenames in use by 1860:

An upstate New York itinerary could take you on a drive from Troy to Ithaca via Utica and Syracuse, with stop-offs in Camillus, Manlius, Cicero, and Pompey. One could be buried under four feet of snow in Rome. The founder of Mormonism, Joseph Smith, lived in a log cabin in Palmyra. You can read the works of Homer or study the military tactics of Marcellus in places that bear their names.

And the author confirms precisely the guess I made about Georgia’s classical cities: “Classical place names were given to frontier settlements there in the years immediately following the War of Independence. As the frontier moved west, so too did the practice.” He goes on to explain the shady buyout of the Iroquois Confederacy’s land in the upstate and the influx of settlers coming northward and inland from the coast.

Looking at Georgia’s considerably fewer such names, you can still note the same pattern: an early city like Sylvania, founded in 1790, lies in well-established territory between Savannah and Augusta, itself a classical name by way of the Princess Augusta, King George III’s mother. Sparta, founded in 1795, is farther north and west. Athens, founded in 1806 as a college town with a name intentionally meant to evoke Plato’s Academy, is yet farther north and west of that.

The displacement of Indians plays a role here, too, albeit a generation later than in New York. Following the Indian Removal Act in 1830 you get Smyrna (1832) and Rome (1834) in former Cherokee territory in the northwestern corner of the state, beyond the Chattahoochee, and Cairo (1835) in the far southwest.

Look at these cities on a map and mark them in the order they were founded and you see a clear march upcountry from General Oglethorpe’s original enclave on the coast and the Savannah River.

Even Atlanta (1847), with its complicated history, fits this pattern, given its cod classical name (part feminine tweak of Atlantic, which is itself derived from Atlas, and part nod, probably coincidentally or indirectly, to Atalanta). Before taking the name Atlanta, the city was Marthasville (1843), and before that it was Terminus (1837). As the New Georgia Encyclopedia notes, Terminus “literally means ‘end of the line,’” an appropriate name since Terminus was established as mile marker zero on a new railroad built to connect the western interior of the state to the coast (there’s that westward, inland movement again). But it only means that because Terminus was originally a Roman deity who protected boundaries and property lines, a god of ends.

I’ve already started recommending this essay to students, not only because it gratifyingly confirms a pet theory but because it makes abundantly clear the pride of place the classical world had in the imagination of the Early Republic. And not only for obviously learned showoffs like Jefferson and Adams.

“It was part of a wider cultural movement to align the new Republic with Classical ideals,” the author notes, “but it was neither as organized nor as calculated as one might think.” Such naming conventions were not part of a top-down agenda but grassroots:

What is interesting about the Classical place names of upstate New York—and what previous historians who have addressed the subject have overlooked—is that many of them were chosen by the pioneers themselves. Except for the town names of the Military Tract, there was no government initiative or evident persuasion that lay behind their selection. The pioneers in their rough-hewn settlements—far from the centres of education in the coastal cities—were choosing to align themselves with the Classical past.

Even the hardbitten types moving to the edge of civilization were well-versed in the classical past and its republican ideals and made those cultural priorities clear in the names they gave their settlements.

And their children. Georgia has both a Homer (1859) and a Homerville (1869). These were founded later than the other examples I’ve given and were named for prominent local men, and so only indirectly for the great blind bard, but consider when these men were born.

Of course, me being me, I couldn’t help but reflect on the change since then—given the option of naming things, Western civilization has gone from Utica and Troy and Ithaca and Rome to Boaty McBoatface and friends in two centuries.

I’ve marked a few cities on a Google Map and embedded it above. If you click through to the full map you can see the dates of each city’s establishment arranged in chronological order. Mouse over the list and the pins will light up in exactly the pattern described. I don’t have the time to do that with all the New York and Ohio placenames mentioned in the Antigone piece, but I hope someone will someday. An animated map would be a stellar classroom resource.

In the meantime, definitely read the entire essay. It’s a concise and insightful look at the relation ordinary Americans from an earlier era had to the classical past and should give us cause to reflect on our own relation to them.

Inadequacy of response

This week one James Harris published an incisive short essay entitled “Criticising the critics” at The Critic. Harris notes the hyper-ideological quality of most artistic criticism today: its obsession with politics and social justice, its “excessive critical emphasis on who is making an artwork,” its resulting attempts to game the system of quality and popularity in the interest of favored art and artists, and its jargon-laden, blinkered, grad school-educated, essentially elitist hivemind.

In 2022, it sometimes feels like all art reviewing has become the World Socialist Web Site—only in that case at least it had the honesty to make its politics explicit in its name. . . .

The whole thing is like encountering a slightly tortured undergraduate who, whilst having some interesting ideas, hasn’t quite worked out how to organize them, and is pretty much useless as a review to anyone who does not think it is the primary role of a Game of Thrones prequel to advance anti-colonialist discourse. It’s a show about dragons.

I recently read Gilgamesh: The Life of a Poem, an interesting biography of the great Sumerian epic by Michael Schmidt. In charting modern responses to this 4,000-year-old story, Schmidt includes these from a British feminist poet: “I didn’t like [Gilgamesh], on the whole. I hate male Hero stories: the big axes, the (implied) big penises and the big egos: a big turn-off.” And: “I’d only read it again if a woman poet translated it, and, in doing so, radically ‘critiqued’ it.”

Because that’s what translators are supposed to do, of course.

More recently, I finally read Poe’s only novel, The Narrative of Arthur Gordon Pym of Nantucket, in which a young man stows away aboard a whaling ship, survives mutiny, shipwreck, starvation, and cannibalism, and finally penetrates beyond the Antarctic icefields and the terrifyingly alien natives of the South Pole almost to the Pole itself. It concludes with a genuinely nightmarish and haunting cliffhanger. In reading more about it, I dug up this clip from the otherwise excellent PBS documentary Edgar Allan Poe: Buried Alive, in which the narrator and interviewees summarize the novel as “a dark maritime adventure that ends in a violent battle between blacks and whites in the South Seas.”

Well, that is kind of what happens, though the “blacks” in the story are not Africans, as the narration implies.

The talking heads elaborate: “I think one thing that was very much on Poe’s mind was the explosiveness of the slavery debate that was going on in the United States at that time.” And: “That’s probably the thorniest text from Poe on the issues of race and slavery. The story can be read as a kind of racist allegory or as an allegory that is a cautionary tale against the imperialist mentality.”

What struck me most about these responses, beyond the dismissive cod Freudianism of the one and the knowing faculty lounge political deconstruction of the other, was their sheer inadequacy. They do not account for these stories in any substantial way. In both Gilgamesh and Arthur Gordon Pym we read stories told on a mythic scale, full of primal violence, prayers for deliverance, monsters in strange lands, and confrontations with the greatest of all mysteries, Death with a capital D, and… this is what you got out of it?

Years ago on a special Halloween podcast we talked about the “Twilight Zone” episode “Nightmare at 20,000 Feet,” in which William Shatner, having just recovered from a mental breakdown, tries to explain away the otherworldly things he’s seeing on his cross-country flight using the therapeutic psychological jargon of the sanitarium he has just checked out of. This unsuccessful attempt reminded me of one of the stories in Shelby Foote’s Shiloh, in which a young Union army private tries to explain why he ran away during the battle with a repeated refrain: he is not a coward, no, just “demoralized through loss of confidence.”

These are attempts to tame the mystery each has encountered—in both of these cases, fear. In Gilgamesh and Arthur Gordon Pym, the mysteries are much deeper and more complex, though fear plays a prominent part. Adventure, danger, the unknown, God or the gods, and the fear of death are what these stories are “about,” not “issues,” isms, critiques, or genitals.

I am all for interpretation and deep examination of good stories (and a good story can withstand good study indefinitely), but I think it has to begin with a proper response to the story the author is trying to tell. And that requires a kind of openness—a willingness to be overwhelmed by the force of a story running wild in its natural habitat, the imagination, rather than the taming, sorting, and caging instinct of ideological interpretation—that is in unfortunately short supply.

Particularity redux

A few weeks ago, in asking what it is that novels are supposed to do, I brought up the particularity of storytelling. Particularity—specifics, details, “proofs” that the story “is actually happening”—is one of the non-negotiable necessities of good storytelling. Even minimalist fables or didactic stories like Jesus’s parables begin with “a certain man.”

Yesterday I came across this episode of “What’s the Difference?” a YouTube series comparing books to their film adaptations. It’s a relatively new one covering Dr No, the sixth of Ian Fleming’s James Bond novels but the first in the film series. The first half of the video impressed me, doing an excellent job of explaining Bond’s physical, psychological, and—one might suggest—spiritual degradation by that point in Fleming’s novels as well as the reserve of endurance that keeps Bond going.

But then the video’s two narrators, whom I call A and B below, introduce Honey Rider (Ryder in the film) this way: Upon arriving on Crab Key in the film,

A: Bond discovers Honey Ryder collecting seashells on the beach in what would become one of the most famous bikinis of the twentieth century.

B: In the book, Honey Rider is completely nude, save for the knife belt at her waist, and sports a badly broken nose. It’s a real sticking point with her character that she’s ashamed of the nose, and just really wants to be pretty? She also shares with Bond her ambition to move to the US and be a prostitute until she’s rich enough to move back to Jamaica and get married, so… it’s just real in line with what a dude writing a sexy spy novel in the fifties thought of women.

A: Right. But the movie in the sixties wasn’t much better.

There’s a lot going on here—not least the dismissive reference to Fleming as “a dude,” which has become a noticeable leftwing verbal tic—but I want to focus on the idea of Honey Rider as what Fleming “thought of women.” Women, categorically.

I’m not here to defend Fleming’s beliefs or attitudes about sex or the sexes—though I probably have a completely different set of objections to his morals than the people who made this video—but I have to point out one major problem with this facile take on the character: Honey Rider is not women. She is a specific, particular woman.

Honeychile Rider has an entire personal history that she gets to relate, herself, in the course of the novel, and her own independent set of motivations, goals, and needs, and these are specifically her motivations, goals, and needs. She’s smart, tough, and capable even if ignorant of much of the rest of the world, but that’s only because she was orphaned at a young age, left essentially homeless to be raised by an old nanny, and finally sexually assaulted by a violent drunk—which is how she got her broken nose and why she’s so self-conscious about it. She has few options, but she’s doing what she can to get by. She is one of the most well-realized, compelling, and tragic of Fleming’s characters, and that is all down to the specifics of who she uniquely is.

But the video’s creators ignore all this. It’s funnier to pass over this well-rounded, compassionately-presented, and interesting character as just another bimbo dreamed up by an old-timey misogynist. It also fits an acceptable narrative and a particular style of online posturing.

A few months ago I ran across a line from Malcolm Muggeridge in which he presciently criticized “thinking in categories, rather than thinking.” This kind of thinking, especially about storytelling, elides the specifics that are “the life blood of fiction” and collapses the particular into the general, so that you end up the kind of person who sees Honey Rider and thinks only “woman” before moving on to condemn Fleming and Bond. Or perhaps “white,” and then condemning all three.

Talking about specific characters as avatars of entire classes of people is lazy, incurious, unfair to both art and artist, and—perhaps worst of all—destructive of the imagination. If you find yourself talking this way, especially to make a flippant joke, stop.

I wrote a longish Goodreads review of Dr No when I last listened to it about two years ago. You can read that for more on one of Fleming’s most suspenseful, action-packed novels here.