Two great reads on history, memory, and fun

History, by Frederick Dielman, in the Library of Congress Thomas Jefferson Room

I don’t know about y’all, but I occasionally need a reminder of why I do what I do. This is especially the case at the end of a long and busy semester. Here are two pieces on the purpose and value of history that gave me precisely the reminders I needed, and which I want to endorse and recommend:

“The Claims of Memory,” by Wilfred M. McClay

Wilfred McClay’s is the longer and more philosophical and meditative of these two pieces, and was originally delivered as First Things magazine’s 34th Erasmus Lecture this October.

History, for McClay, is not simply a list of dates to memorize (the high school football coach method) or a jumbled rush of unrelated discrete events (“one damned thing after another”), but a form of memory that transcends and enlarges individual human memory—an attitude dating back at least to Cicero, who wrote that “Not to know what happened before you were born is to be a child forever.” (I have this line on my office door.) McClay agrees, and develops the metaphor further, arguing that not only is memory essential to maturation, but loss of memory is fatally debilitating not only to the individual but to society. He invokes Alzheimer’s as an example of how the loss of one person’s memory can affect multitudes. Recall the concept of transactive memory I wrote about earlier this year.

But memory is also tricky, subject not only to aging and degradation but to vagueness and distortion, and simply amassing more and more empirically determined data may only make that worse. Drawing on another example of memory gone wrong, a Russian psychiatric patient who could recall literally everything he had ever seen but could not organize those things into coherent, generalized understanding (what would probably happen to a real Will Hunting), McClay also makes room for the necessity of forgetting. “What makes for intelligent and insightful memory,” he argues, “is not the capacity for massive retention, but a balance in the economy of remembering and forgetting.”

But, McClay notes, “there are crucial differences” between individual memory, even in senility, and history as a profession and a cultural tradition:

No one can be blamed for contracting Alzheimer’s disease, an organic condition whose causes we do not fully understand. But the American people can be blamed if we abandon the requirement to know our own past, and if we fail to pass on that knowledge to the rising generations. We will be responsible for our own decline. And our society has come dangerously close to this very state. Small wonder so many young Americans now arrive at adulthood without a sense of membership in a society whose story is one of the greatest enterprises in human history. That this should be so is a tragedy. It is also a crime, the squandering of a rightful inheritance.

This squandering “goes far beyond bad schooling and an unhealthy popular culture” to a censorious, “imperious” and “ever-grinding machine of destruction and reconstruction” that “makes it difficult to commemorate anything that is more than a few years old.”

The whole proposition of memorializing past events and persons, particularly those whose lives and deeds are entwined with the nation-state, has been called into question by the prevailing ethos, which cares nothing for the authority of the past and frowns on anything that smacks of hero worship or filial piety.

Invoking pietas is really speaking my language. McClay is here describing what Roger Scruton called a “culture of repudiation,” and McClay offers incisive critiques of the way academic faithlessness toward the duty to preserve memory translates into popular indifference or, for the “woke,” outright hostility toward the past.

[W]e are rendering ourselves unable to enjoy such things, unless some moral criterion is first met. That inability arises, I fear, from guilt-haunted hearts that are unable to forgive themselves for the sin of being human and cannot bear their guilt except by projecting it onto others. The rest of us should firmly refuse that projection and recognize this post-Christian tyranny for what it is.

This is an excellent, wide-ranging, and thoughtful examination of a problem I care a lot about, and I hope you’ll read it. You can find the whole piece here. A recording of the lecture is also available on Vimeo, though I haven’t been able to play it on my machine. I hope y’all will have better luck.

McClay wrote Land of Hope, a one-volume narrative history of the United States that was one of my favorite books of 2020—a year when our need for appreciative but not uncritical memory became especially apparent. I quoted a longish excerpt in which McClay makes the case for narrative history here.

“Make History Great Again!” by Dominic Sandbrook

British historian Dominic Sandbrook’s piece is the shorter and punchier of the two, and begins with a question near to my heart: “Why don’t today’s children know more about history?”

I’ve cared about this topic for a long time—first as I figured out how I came to love history as a kid, then as I figured out how to get my students to love history, and now, even more pressingly, as I figure out how to pass on my love of history and even some of my own history to my own children. It’s an important question for all the reasons McClay lays out in his piece.

Sandbrook suggests that the problem is that history, roped off and quarantined lest anyone catch cooties from old ideas we don’t approve of, has been made uninteresting to children—not only in terms of content, presenting “issues” and “forces” rather than events and personalities, but because of the tone with which this history is presented:

In recent years, the culture around our history has been almost entirely negative. Statues are toppled, museums ‘decolonised’, heroes ‘re-contextualised’, entire generations of writers and readers dismissed as reactionaries. When Britain’s past appears in the national conversation, it’s almost always in the context of controversy, apology and blame. . . .

Against this background, who’d choose to study history? For that matter, who’d be a history teacher? Even selecting a topic for your Year 4 children seems full of danger, with monomaniacal zealots poised to denounce you for reactionary deviation. And all the time you’re bombarded with ‘advice’, often in the most strident and intolerant terms.

This presentation is “at once priggish, hand-wringing and hectoring, forever painting our history as a subject of shame.” It also oversimplifies, and the oversimplified is always deadly dull. Complexity excites, especially once a student is immersed enough in a particular time and place to get the thrill of piecing seemingly disparate parts of a narrative together. But that requires imagination, which rebels at dullness.

Part of this dull simplicity is the prevalence of one permissible narrative, a vision or set of emphases to which it is morally imperative that all others be subordinated. Sandbrook invokes the manner in which the UK’s National Trust suggests educators use England’s stately old country houses to cudgel unsuspecting students:

The National Trust’s much-criticised dossier about its country houses’ colonial connections opens by talking of the ‘sometimes uncomfortable role that Britain, and Britons, have played in global history’, and piously warns the reader that our history is ‘difficult to read and to consider’. The Trust’s Colonial Countryside Project encourages creative writing about ‘the trauma that underlies’ many country houses. In other words, drag the kids around an old property and make them feel miserable. Maybe I’m wrong, but I doubt that’ll make historians of them.

He’s not wrong. Admiring the house and its long-lasting beauty and imagining yourself living there—the natural impulses of a healthy child in an excitingly and concretely alien place—would seem to invite punishment.

Sandbrook examines as well the way American-centric concerns have taken over even the British imagination, all in the name of giving children something actionable and, well—a word I’ve inveighed against here before: “Behind this lurks the spectre of ‘relevance’, a word history teachers ought to treat with undiluted contempt.” Hear, hear!

History isn’t about you; that’s what makes it history. It’s about somebody else, living in an entirely different moral and intellectual world. It’s a drama in which you’re not present, reminding you of your own tiny, humble place in the cosmic order. It’s not relevant. That’s why it’s so important.

As much as all of the above had me pounding my desk in approval, it is all preparatory to Sandbrook’s positive recommendations on how to make children interested in history again: story, setting, and people—the narrative elements we are wired to respond to, to build our lives around and to emulate, all of which begins in the molding of the affections and the imagination. And that begins in childhood:

So how should we write history for children? The answer strikes me as blindingly obvious. As a youngster I was riveted by stories of knights and castles, gods and pirates. What got me turning the pages wasn’t the promise of an ‘uncomfortable’ conversation. It was the prospect of a good story. Alexander the Great crossing the Afghan mountains, Anne Boleyn pleading for her life on the way to the scaffold, Britain’s boys on the beaches of Dunkirk, Archduke Franz Ferdinand taking the wrong turn at the worst possible moment... that’s more like it, surely?

Add to all of this “an attitude,” specifically that of an open-minded traveler visiting alien lands—about which more below—with the first and most obvious benefit of travel as his goal:

Exploring that vast, impossibly rich country ought to be one of the most exciting intellectual adventures in any boy or girl’s lifetime—not an exercise in self-righteous mortification. Put simply, it should be fun.

Three cheers.

I’ve quoted extensively from this piece because it’s so good, but there’s more, and Sandbrook’s recommendations at the end are excellently put. Read it for those at least. You can read the whole piece at the Spectator here.

Sandbrook has published several volumes of history both for adults and children. I’m sorry to say I haven’t read them, though I’m awaiting the arrival of his children’s Adventures in Time volume on World War II. He is also—with the great Tom Holland, whose Dominion was my other favorite historical work of 2020—a host of the podcast The Rest is History.

Conclusion

Something that struck me in these pieces is that, at one point in each, both invoke LP Hartley’s celebrated line that “The past is a foreign country.” I use that line, as well as the one on memory and maturity by Cicero above, to open every course I teach, every semester. I find it gets my approach across pretty well and primes the students for our study of the past to amount to more than names and dates.

Nevertheless, CS Lewis wrote in The Abolition of Man, a book especially concerned with the hollowing out of the purportedly educated, that “The task of the modern educator is not to cut down jungles but to irrigate deserts.” It’s true. But the desert is so dry and the digging so relentless that I end most semesters not just weary but exhausted. Spent. Nearly despairing. This semester was no exception.

So I’m grateful to McClay and Sandbrook for breathing some life back into me and reminding me not only of what’s at stake, but how much fun real, good history can be—and should be.

Chesterton on the danger of historical films

Over the weekend I made an unexpected 36-hour trip to Texas and back. On my way home I listened to the latest episode of Bill Simmons’s Rewatchables podcast, a two-hour discussion of Oliver Stone’s JFK. The two hours was more than welcome in the pre-dawn flatlands of northern Louisiana where I listened to it, and fully the first hour turned out to be a thought-provoking discussion of a topic that has been on my mind for weeks and that I’ve been generally concerned about for years: falsehood in historical films.

Simmons and his guests spent a lot of time discussing and comparing the streamlining and condensation inevitable in a historical film with the outright fabrication—especially of major characters—that Stone does throughout, but what really caught my attention and got me thinking was a description very early in the episode of JFK as “provocative, if not wildly irresponsible.” How much responsibility does a filmmaker have, whether to the facts, his audience, or both?

All of which brought to mind the following passage, from “On the Films,” a newspaper essay collected in As I Was Saying in 1936, the year of Chesterton’s death:

The second fact to remember is a certain privilege almost analogous to monopoly, which belongs of necessity to things like the theatre and the cinema. In a sense more than the metaphorical, they fill the stage; they dominate the scene; they create the landscape. That is why one need not be Puritanical to insist on a somewhat stricter responsibility in all sorts of play-acting than in the looser and less graphic matter of literature. If a man is repelled by one book, he can shut it and open another; but he cannot shut up a theatre in which he finds a show repulsive, nor instantly order one of a thousand other theatres to suit his taste. There are a limited number of theatres; and even to cinemas there is some limit. Hence there is a real danger of historical falsehood being popularized through the film, because there is not the normal chance of one film being corrected by another film. When a book appears displaying a doubtful portrait of Queen Elizabeth, it will generally be found that about six other historical students are moved to publish about six other versions of Queen Elizabeth at the same moment. We can buy Mr. Belloc’s book on Cromwell, and then Mr. Buchan’s book on Cromwell; and pay our money and take our choice. But few of us are in a position to pay the money required to stage a complete and elaborately presented alternative film-version of Disraeli. The fiction on the film, the partisan version in the movie-play, will go uncontradicted and even uncriticized, in a way in which few provocative books can really go uncontradicted and uncriticized. There will be no opportunity of meeting it on its own large battlefield of expansive scenario and multitudinous repetition. And most of those who are affected by it will know or care very little about its being brought to book by other critics and critical methods.

I find Chesterton’s cautions here compelling. Movies, being visually stimulating and, of necessity, simplified, go down easy. People believe them. Furthermore, movies borrow liberally from each other, meaning that a successful but inaccurate movie’s falsehoods will be reproduced indefinitely. (Think, for example, of the trope of medieval longbowmen firing unaimed volleys into the air as indirect fire, an absurdity that started with Laurence Olivier’s Henry V and continues right down to the present.)

And that’s also assuming a good faith effort on the part of filmmakers to tell what they think is a true story. But filmmakers both then and now often feel no obligation to do so. One odd trend that I’ve noticed in recent years is taking a real historical figure and giving them wholly fabricated homosexual love lives, as with baseball player and renaissance man Moe Berg in The Catcher Was a Spy, Queen Anne in The Favourite, and paleontologist Mary Anning in Ammonite. In the latter case, the director made it explicit that he had appropriated a real person’s life story as revenge for “queer” stories that had been “straightened.” There’s not much an artist with such a sense of grievance won’t do to score points against them, whoever “they” are.

But real people are not just counters in a game artists play to make a point, or elements in a composition that can be rearranged to suit the artist’s taste. They’re real people. And real things are intractable. Toy with them too much, bend and twist and reshape them to fit a prefabricated plot arc or accepted genre conventions, and they may end up unrecognizable—and fatally cliched. (Here’s one notable case.)

Furthermore, a “doubtful portrait,” as Chesterton puts it, of a real person isn’t just inaccurate; it can damage real reputations. Four cases I happen to know about:

  • Boxer Max Baer, a kind-hearted man bothered by the deaths of two former opponents due to head injuries, was depicted in Ron Howard’s Cinderella Man as a pompous thug proud of killing two men in the ring and who makes sexual advances toward James Braddock’s wife. Audiences, oblivious to the character assassination, “whooped and hollered” when Braddock took Baer down at the end. Baer’s son, Max Baer Jr. (of “The Beverly Hillbillies” fame), responded to the movie with “If Howard and [Russell] Crowe were sitting here, I’d hit them.”

  • In the 1964 film Zulu, Private Henry Hook, who earned a Victoria Cross for his actions at the Battle of Rorke’s Drift, is depicted as a drunk, thief, and shirker who makes good in a moment of desperation—a screenwriter’s invention purely for dramatic purposes. The real Henry Hook was a Methodist lay preacher and teetotaler with a spotless disciplinary record. His elderly daughters walked out of the film’s premiere.

  • American Gangster, a 2007 movie directed by Ridley Scott (whose presence should always sound warning sirens for historical accuracy), softened drug lord Frank Lucas to make him more palatable and invented adulterous affairs and a bitter child custody battle for detective Richie Roberts, who in real life did not have kids—and was still alive when the film came out.

  • William McMaster Murdoch, First Officer of the Titanic, is depicted by James Cameron (more warning bells) in the 1997 film Titanic as shooting passengers during a stampede for the lifeboats before turning his gun on himself in remorse. The evidence this is based on is sketchy, and, like Henry Hook above, Murdoch had living relatives who took exception, not to mention a hometown with an educational fund in Murdoch’s memory. Cameron and his studio never formally apologized but threw £5000 to the memorial fund. (Titanic made $2.2 billion worldwide.)

Public ignorance and mistaken or outright careless filmmakers are threats to the truth, but I think Chesterton is right in pointing out that it is film’s monopolistic effect that is the gravest danger. The kinds of films audiences flock to and, more importantly, remember are too complicated and expensive to make competition—correcting the record—viable. And so a Zulu comes along and the handful that really know and care about the memory of Henry Hook spend the next sixty years trying to get the real story out.

Of course, anyone who enjoyed the movie can always be directed to a more detailed, comprehensive, and accurate book on the subject. I’ve done this a thousand times if I’ve done it once. But how many people actually take those recommendations? I’m guessing one in a thousand is optimistic. How many people are going to read Andrew Roberts’s 700-page biography of George III when they can yuk at him in Hamilton instead?

Per Chesterton, immediately following the passage quoted above:

A false film might be refuted in a hundred books, without much affecting the million dupes who had never read the books but only seen the film.

This is something I think about a lot, but I’m not sure I have any answers or solutions to the problem beyond a renewed commitment to truth and a sense of responsibility among filmmakers. Because telling a true story well is not impossible, and those films that successfully fit a true story—inevitably streamlined and simplified but in such a way as to hint at the real story’s complexity—to the medium of film are my beau idéal. (Here’s one I’ve written about before.)

As for the guys on the Rewatchables podcast, they concluded their deep, thoughtful discussion of Oliver Stone’s paranoid, grievance-driven tissue of distortions and fabrications by agreeing—emphatically—that LBJ and the CIA were behind Kennedy’s assassination. So much for that.

Chesterton vs Brooks

Two quotations on progress, presented without comment.

From David Brooks’s essay “What Happened to American Conservatism?” (AKA “Conservatism is Dead”) in The Atlantic:

If [the Democratic Party’s] progressive wing sometimes seems to have learned nothing from the failures of government and to promote cultural stances that divide Americans, at least the party as a whole knows what year it is.

Which brought to mind this line from GK Chesterton in an interview with the New York Times, 1923:

My attitude toward progress has passed from antagonism to boredom. I have long ceased to argue with people who prefer Thursday to Wednesday because it is Thursday.

Pickled monarch

I’m currently a little over a hundred pages into Andrew Roberts’s 700+ page The Last King of America: The Misunderstood Reign of George III. It’s excellent so far, and full of surprises. Here’s one.

This is the twenty-year-old George, Prince of Wales, writing to his tutor and “dearest friend,” John Stuart, 3rd Earl of Bute, in 1758:

What a pretty pickle I should be in a future day if I had not your sagacious counsels.

To “be in a pickle” is not an expression I would have imagined being current in the 1750s, so it surprised me both 1) that the expression is so old and 2) that it’s survived to the present. Most such casual idioms go stale pretty quickly—and the more specific they are the more so. I don’t imagine much of anything written nowadays will be intelligible by the end of the century.

But notice that what I at first took to be an oddity of grammatical structure actually suggests the idiom has changed a bit: George says “what a pretty pickle I should be,” not “I should be in.” I can’t be certain that this isn’t just a twenty-year-old expressing himself imprecisely (though the volume, style, and erudition of George’s letters suggest otherwise), but if not, the idiom as used by George makes himself the pickle. And in this period, the pickle could be either something soaked in brine or something served up covered in a particular salty sauce. Here’s the Online Etymology Dictionary:

The meaning “cucumber preserved in pickle” first recorded 1707, via use of the word for the salty liquid in which meat, etc. was preserved (c. 1500). Colloquial figurative sense of “a sorry plight, a state or condition of difficulty or disorder” is recorded by 1560s, from the time when the word still meant a sauce served on meat about to be eaten.

Served up garnished and ready to eat—an altogether more precarious image than even that suggested by our continued use of the phrase, and one appropriate to the situation George found himself in following the death of his father but before the death of his hostile and mean-spirited grandfather. At any rate, this figurative sense is much older than the more specific meaning of a pickled cucumber, a usage that was still pretty recent in George’s youth. Presumably, to be “in a pickle,” in which you are the hapless victim of a bad situation, is a mutation based on the baseball/tag game.

Another odd example—from the infinite and labyrinthine cellars and archival storage closets of English—of what makes even seemingly unexceptional aspects of the language fun and diverting.

I wrote about Roberts’s approach to history about a year ago. You can read that here.

Addendum: Speaking of historical oddities and surprises, here’s an offhand observation and ironic note from the very first chapter:

The Prince and Princess of Wales’s Court was a close-knit group that made its own amusements. In 1748, Lady Hervey noted that the young Prince George and the other royal children were playing ‘at baseball, a play all who are or have been schoolboys are well acquainted with.’ She added that ‘the ladies as well as the gentlemen join in this amusement.’ It was a form of rounders that later became popular in America—a game that, ironically, George III played but George Washington did not.

Poe, seashells, and measures of success

I get monthly e-mails from Clemson’s dissertation and thesis service reporting new downloads of my master’s thesis. There are a few dozen every month, which I’ve always found kind of interesting. I’ve even, out of curiosity, turned up blog reviews of that paper online. It’s gratifying to know that all that research is of interest to someone and that people are learning from it. I know I did.

But what I realized at some point in the last year is that all those downloads make my master’s thesis probably—excluding a handful of blog posts that have found their way pretty high into Google’s algorithm—the most widely read thing I’ve ever written. Downloads of that paper outstrip sales of my best-selling novel by a factor of ten.

This doesn’t bother me, by any means—I just find it curious, and even amusing. I certainly don’t think about that thesis as often as I think about my novels and plans for future ones. But it has gotten me thinking about how you measure your own success.

So I was interested to learn from The Reason for the Darkness of the Night, the Edgar Allan Poe biography I referred to last month and hope to review when I get some time, that Poe’s bestselling book in his lifetime, and his only book to get a second edition in his lifetime, was… The Conchologist’s First Book, an introductory textbook on molluscs for which he received no royalties.

If this annoyed or discouraged Poe, there’s no evidence of it. Which points to one of his strengths, even when doing hack work as a writer-for-hire—simply plugging away at the work, moving on to the next project.

While The Conchologist’s First Book has an interesting genesis (read The Reason for the Darkness of the Night for an account of how Poe came to translate/compile/write this book) and Poe actually ended up making serious contributions to the emerging field of conchology, the book is largely forgotten today. When I looked it up on Project Gutenberg I was only the 28th person to download it in the last thirty days. For comparison’s sake, the second volume of the Works of Edgar Allan Poe got nearly 8400 downloads in the same timeframe.

You might not be immediately popular for the work you care most about, but you might end up remembered for the work you do. Food for thought.

“The Tell-Tale Heart” on City of Man Podcast

I’m excited to have hosted City of Man for the Christian Humanist Radio Network’s annual Halloween crossover again. This year the shows are all covering selected stories by Edgar Allan Poe. On this episode of City of Man, David Grubbs, Mathew Block and I discuss the story that began my own lifelong love of Poe’s work, “The Tell-Tale Heart.”

We discuss Poe’s life and work overall, the tragedy that dogged him for forty years, the story itself, the immense craft put into it, how it compares to a similar story from Poe’s corpus, and whether “The Tell-Tale Heart” really deserves its lofty spot in the canon of American short fiction.

N.b.: early in the episode, when I say Poe picked a fight with Nathaniel Hawthorne, I mean Henry Wadsworth Longfellow. Poe would have my hide for that.

This was a great discussion of a great story and I’ve had a good time listening to it again. I hope y’all enjoy, and that you’ll check out some of the resources we recommend at the end.

You can listen to this episode of City of Man on iTunes, Stitcher, or other fine podcasting platforms. Give City of Man a visit at the show’s Facebook page or the Christian Humanist Radio Network’s main site. Be sure to subscribe to the show—and the other shows on the CHRN—so you can catch up on previous episodes and won’t miss future episodes. Coyle’s a hardworking guy and has made City of Man one of the best shows in my podcast feed, and I’m grateful that he asked me to host the crossover again this year.

Thanks for listening!

Dune

Paul Atreides (Timothée Chalamet) and Lady Jessica (Rebecca Ferguson) encounter a sandworm by night in Denis Villeneuve’s Dune

Here’s a movie review I didn’t expect to write. I read Frank Herbert’s novel Dune a few years ago, and while I enjoyed it and it impressed me with its mass of involving detail, I honestly didn’t see what all the fuss was about. When news of Denis Villeneuve’s film adaptation arrived, I was interested—I’ve liked all of Villeneuve’s movies that I’ve seen so far, especially Sicario—but not at all eager. I’d catch it eventually.

Fortunately for me, a good friend insisted we see it at the first available opportunity, and in IMAX. That was last night. Here I am the morning after to tell you to go see it.

Dune depicts the travails of House Atreides, a noble family in a galactic empire 8,000 years from now. The Atreides have been mortal foes of House Harkonnen, a family of sybaritic grotesques who combine love of power with ruthless self-interest. As Dune begins, we learn that the Atreides, led by Duke Leto (Oscar Isaac), have just been given the planet of Arrakis in fief by the emperor, ousting the Harkonnens after generations of monopoly control of the planet and its single valuable resource—spice, a substance with medicinal and hallucinogenic properties that also enables intergalactic space flight. Leto, his wife Lady Jessica (Rebecca Ferguson), and their son Paul (Timothée Chalamet) are unhappy with the move but answer the emperor’s call.

Arrakis is an all-desert planet, hot, inhospitable, and lethal without careful and precisely calibrated technological controls in place. Not only its climate but its native life poses a threat, specifically the Fremen, hostile tribal bands that have somehow found a way to eke out a living in the desert, and—towering above all in size and significance, and the source and guardian of the precious spice—the sandworms. Leto’s task now, as the newly appointed lord of Arrakis, is to harvest the spice, defend the harvesters from the sandworms, and defend everything from the attacks of the Fremen. His solution is to adapt his noble family’s traditional techniques to a new environment—rather than the air and sea power they enjoyed on their homeworld, they will develop desert power by gaining the trust of the much-abused Fremen and allying with them.

But despite their best efforts, the Atreides and their followers catch on quickly that they’ve been presented with an unmanageable situation, trying to meet quotas with outdated and poorly maintained equipment and with assassins secreted in the walls of their palace. The Harkonnens have a long reach and clearly want Arrakis back, and, worse, it appears the Emperor is on their side, having set the Atreides up to fail and provide a pretext for a Harkonnen strike. The clash that comes midway through the movie is the beginning of a breakneck series of attacks, flights, and attempts merely to survive that culminates in Paul’s duel to the death with a Fremen challenger.

Does that sound like a lot? It’s the simplest I could make it, and my summary still obscures the fact that it is Paul, the teenaged scion of House Atreides, who is the main character. There is also the forbidding all-female Bene Gesserit cult to which Lady Jessica belongs, a sort of Goddess Illuminati manipulating the noble houses of the empire according to plans of their own; a band of savage imperial mercenaries hired on by the Harkonnens; and Arrakis’s resident ecologist, who is legally obligated to remain a neutral functionary of the empire but chooses to take sides.

And more. And more. And more.

That’s the greatest accomplishment of Villeneuve’s Dune—taking a vast world of intricate politics, religion, trade, ecology, and mythology and making it comprehensible. I had to explain all of that background. In the film, most of it is shown; we pick up on it just by watching, the way movies are supposed to work. While in a few places the filmmakers do cave and have a video lecture instruct Paul directly on particular aspects of Arrakis’s zoology or culture, those places are few and far between. Furthermore, even as we’re learning about this strange, fantastical, and complicated world through the first half, the story keeps moving. This is a two-and-a-half hour movie that, despite the weight of weird information it has to convey, never slows down and goes by in a flash.

It’s also well-cast and acted, with an especially strong supporting cast. My favorites among those were Jason Momoa as Duncan Idaho, Paul’s tutor in the arts of war, and Javier Bardem as the weary but canny Fremen leader Stilgar. Stellan Skarsgård’s obese voluptuary Vladimir Harkonnen is legitimately scary and revolting, a hard combination to pull off. (Near the end I told a buddy in the theater with me that if we got one more scene with the Harkonnens I just might puke.)

Furthermore, the leads are all very good. Oscar Isaac and Rebecca Ferguson, one of my favorite actresses, are excellent as an oddly matched pair of nobles united by their love and duty toward their son. And Timothée Chalamet, an actor I’ve never, ever liked in anything I’ve seen him in (recall that he gave us a “punchable” Henry V in The King), is outstanding here—naively eager, devotedly learning from his father and tutors (when was the last time you saw a kid learning from his elders in a movie?), and rising to the occasion when the crucial moment arrives. He was very good, buoyed by a terrific supporting cast, and actually made Paul more likeable, to me, than the cipher in the book.

Dune is also technically brilliant, with a rich variety of exotic but believable sets, amazing but not impractical costumes and props, and an attention to detail, protocol, and ritual in the way the characters interact that made this feel like a real, lived-in, limitless world. Herbert’s novel accomplished this total immersion cumulatively—by the time you reach the end, the wealth of detail and brilliantly evoked locations and events make you feel like you’ve spent years in this fictional world. The film accomplishes this through texture. All of its visuals are brilliantly tactile. It all feels so real and so right that, near the end, I had grown intolerably thirsty, felt like I had grit in my clothes, and had a nagging worry about inhaling the spice that drifts sparkling through the frame whenever the characters enter the desert. I haven’t felt this physically involved in a movie’s images since Saving Private Ryan.

If I have any complaints, they’re relatively minor. The film is heavily frontloaded with exposition, but gets moving immediately and levels out in the first ten minutes. Zendaya felt out of place as Chani, the Fremen girl Paul has visions and dreams of. She works well enough as the subject of a vision or dream, staring ethereally at Paul just before he wakes, but once she arrives in the flesh her line delivery sounded stiff and forced. Hopefully she’ll be better in the sequel. Also, while the movie looks stunning and is mostly well-shot, night-time scenes near the end are entirely too gloomily lit by cinematographer Greig Fraser, with faces sometimes entirely in shadow and the iconic sandworm in one scene (see the screenshot above) hard to make out. I had a hard time even telling who was who in one crucial scene.

I’ve also heard one complaint elsewhere that I’ll push back on. The title card at the beginning of the movie reads, pointedly, Dune: Part One. The movie gets us about halfway through the book. At least one critic has complained of the abruptness of the ending, but I actually found the place and the moment in the story where the filmmakers chose to leave off appropriate and satisfying, a natural break in the story. I certainly left looking forward to part two.

There’s much more I could point out—the excellent score by Hans Zimmer; the eerie sound design; the skillful use of IMAX for particular scenes; the stunning real-life locations, including the Jordanian desert where parts of Lawrence of Arabia were also shot—but what I intended as a short review has gotten long enough. (Addendum: I will also point out that, if intense, Dune is refreshingly clean. This is a brand new sci-fi epic you could watch with your kids.)

Dune is a well-crafted, well-acted, thoughtful, and exciting movie that takes place in a fantastical but grounded and believable world, and takes its time to tell a story and tell it well. And not only that, but the characters, plot, and themes all have substance. That makes a film like Dune vanishingly rare nowadays, and I was heartened not only to be surprised by it, but to watch it in a theatre full of other people who also clearly enjoyed it. Whether you’re an old fan of the books or just like good movies, Dune is well worth your time.

Learning the wrong lessons from architecture

Twitter used to have an occasional trending hashtag inviting people to “confess your unpopular opinion.” Here’s one of mine, offered as a follow-up thought to yesterday’s post on traditional vs. modern architecture.

In the City Journal piece I linked to and quoted from, Catesby Leigh writes:

All three branches of the federal government are headquartered in classical buildings: the U.S. Capitol, the White House, and the Supreme Court. Apart from a three-decade interlude of eclectic Victorian confusion between the Civil War and the mid-1890s, classicism predominated from the Founding until World War II. And it has served the nation brilliantly, defining civic architecture in the public mind.

Later:

Jefferson, good lawyer that he was, was won over by the artistic significance, as authoritative precedents, of ancient buildings—especially the Pantheon, perhaps the Roman Empire’s most influential architectural landmark, and also a gorgeous Roman temple in the southern French city of Nîmes on which he modeled his Virginia Capitol in Richmond. (“Here I am, Madam, gazing whole hours at the [Nîmes temple], like a lover at his mistress,” Jefferson wrote to a Parisian lady while serving as the American minister to France.)

Architecture, and classical architecture in particular, is a language—with vocabulary, syntax, multiple styles, and even regional dialects—and language is an instrument of adornment, narration, declaration—and instruction.

Me, personally, I’m a half-timber and gothic man (with a serious soft spot for traditional Southern farmhouses), and I also love and adore classical architecture for all the reasons I laid out yesterday. But precisely on the grounds that architecture, Goethe’s “frozen music,” silently instructs, I think housing the three branches of the federal government in what we instinctually recognize as temples has been a mistake.

And it is a typically Jeffersonian and American mistake—to think you can mimic sacred architecture without the ghost of an overawing polytheism hanging around, to think you can rationalistically borrow form without keeping the meaning. We intuitively know when we’re in sacred space, and I think two centuries of Americans have learned the wrong lesson from this architecture. Witness the overtly religious rhetoric denouncing the January 6th riot. And don’t get me started on this literal temple to an all-powerful god, colossal enthroned idol and all.

A small point, but not an unimportant one.

Modern architecture—there to be demolished

From the late Sir Roger Scruton’s documentary “Why Beauty Matters,” which I wrote about last year:

When the public began to react against the brutal concrete style of the 1960s, architects simply replaced it with a new kind of junk: glass walls hung on steel frames, with absurd details that don’t match. The result is another kind of failure to fit. It is there simply to be demolished.

You can watch this illustrated with depressing simplicity in this short video on YouTube.

Two recent articles on the topic:

This longish piece by art and architecture critic Catesby Leigh takes Donald Trump’s lame-duck mandate of classicism as “the preferred and default style” for federal architecture, especially in DC, as a jumping-off point. Leigh then examines a flap over architectural style at the University of Virginia that, years before, prefigured the dustup over Trump’s reform. Leigh strikingly compares the self-consciously “exogenous” and “visually abrasive” modernist buildings at UVA—and many, many other places—with classical architecture, noting that while the classical “is not a ‘style’” properly speaking,

It is a visual language of enduring, objective forms wedded to a coherent syntax, a language whose flexibility has permitted stylistic variations in federal architecture ranging from Palladian classicism to art deco. Classical buildings are composed in a manner analogous to the human body, with an organic hierarchy of parts comprising a legible, resonant whole. We are instinctively drawn to such buildings. The same cannot be said of modernist architecture’s dehumanized forms.

From Scruton again (beginning at 49:27 here):

The same kind of criticism [of classical, representational art] is aimed at traditionalists in architecture. One target is Leon Krier, architect of the Prince of Wales’s model town of Poundbury. Designing modest streets, laid out in traditional ways, using the well-tried and much-loved details that have served us down the centuries, Leon Krier has created a genuine settlement. The proportions are human proportions. The details are restful to the eye. This is not great or original architecture, nor does it try to be. It is a modest attempt to get things right by following patterns and examples laid down by tradition.

Modest, well-tried, much-loved, genuine, restful, and human are, as it happens, virtual antonyms of modern architecture.

In his essay, Leigh goes on to note the roots of the preference for classical architecture in America’s early history: the inspirations from still-standing (I’ll come back to that) Greek and Roman examples, the influence of America’s first ally, France, and the values and virtues the proportion, dignity, and order of the style and its variations were meant to embody and encourage.

Nevertheless, Leigh notes, while “Architecture can have a political role—to ennoble the institutions it houses . . . it runs deeper than politics. Goethe famously referred to it as ‘frozen music.’”

Another important line of argument that Leigh develops—important in this ruthlessly and unimaginatively pragmatic age—is that of cost and return on investment.

During last year’s EO controversy, the AIA regurgitated the misleading argument that classical design “can increase the cost of a project (to up to three times as much)” in a letter to Trump. Many laypeople are taken in by this canard, but the truth is that modernism’s proclivity for abstract, unornamented surfaces and details means construction elements must be dimensioned very precisely to keep the weather out. And that is expensive. Classicism allows for greater tolerances because joints can be concealed by pilasters, belt courses, cornices, and so on. Modernist designs can also be harder to make weather-resistant because of their frequent eschewal of time-tested local usages of materials and details. “The end result when compared apples-to-apples (in terms of quality, details, and finished execution),” a gifted classical architect wrote to me not long ago, “is a modern[ist] building will be more expensive, it will have a shorter lifespan, and it will also require higher maintenance and upkeep costs.” This can and should be verified.

Modernist buildings, as Scruton, Tom Wolfe, and others have observed, do not last.

On that point, here’s the second piece that recently caught my eye. At The Critic, Andrew Hunt looks at ugly modernist buildings and their consequences—not only aesthetic and human, but environmental. Hunt:

[Modern politicians] fetishise house-building, but fail to notice that building even a two-bed house creates 80 tonnes of carbon and uses 150 tonnes of materials—the same amount of landfill as an average household creates over 300 years! By comparison, powering your house produces about 2 tonnes of CO2 per year. Even if you could build a truly net zero home tomorrow (which you can’t), it would take forty years to break even.  

A big part of the problem is modern construction materials. Producing concrete (180kg of CO2/tonne) and steel (1.85tonnes of CO2/tonne!) are two of the most ubiquitous and environmentally destructive industries on the planet.

Badly designed and built of poor materials at great cost both financially and in terms of pollution and carbon output, modernist buildings are unloved and rapidly superannuate:

Pre-stressed concrete meanwhile has a lifespan of 50-100 years, meaning many of the first concrete structures have already crumbled into carcinogenic dust. . . . [B]adly built eyesores are being torn down barely a generation after their construction: tower blocks from the 60s, council offices from the 70s and shopping centres from the 90s. That’s billions of tonnes of fossil fuels and mining degradation ending up as landfill.

Hunt links the above to this article, “The problem with reinforced concrete,” and also contrasts problems with modern building materials with the styles and building materials of the past:

sandstone has a carbon footprint of just 77kg/tonne, and wood can be CO2 negative as it locks in carbon. Those old materials last longer as well. There are stone buildings that have been knocking around for more than a millennium—Rome’s Pantheon is 1900 years old. If treated properly, wooden buildings can last almost as long. The world’s oldest inhabited house in the Faroe Islands is 900 years old and built from wood. China’s ornately carved Nanchang Temple has been welcoming Buddhists since the 8th century.

And the kicker, the most striking paragraph in the essay to me, perhaps the most ironic and certainly the bitterest:

Isn’t it odd? Our ancestors built stunning buildings that were environmentally sustainable, have lasted for centuries and are admired and cherished. Almost all of them managed it—Greeks and Romans, Ottomans and Venetians, Tudors and Georgians. Yet they had none of the technology or machinery we have today. In every other sphere of life, we are thrashing our forebears. Why is construction the odd one out? And why have we accepted it for so long?

One hopes we won’t have to for much longer. But, given the vested interests—political, cultural, ideological—noted in both pieces, I’m not holding my breath.

For a gleeful mid-1970s jaunt through modern architecture that slaughters all kinds of sacred cows, starting with the vandals at Bauhaus and continuing through Le Corbusier and accomplices, read Tom Wolfe’s short book From Bauhaus to Our House sometime. In the meantime, watch Scruton’s documentary wherever you can find it, and appreciate the fitting, the settled, and the human wherever it still stands in your neighborhood.

No Time to Die

Daniel Craig as James Bond in No Time to Die

It’s clear to me now why No Time to Die’s release was delayed so often and for so long—who, at any point last year, could have really enjoyed a movie about an invisible microscopic threat that originated in a secret lab, that spreads person to person by close contact, that could potentially infect the whole world, and that you can’t remove once it’s tainted you?

The lab, in this case, is not in Wuhan but in London, and the microscopic threat is not a virus but a nanobot technology codenamed Heracles.

The story

At the beginning of No Time to Die, as highly proficient and heavily armed agents infiltrate the lab with the aid of a turncoat scientist, Heracles is referred to only as “the weapon.” Only later does it become clear what the weapon actually is, what kind of damage it’s capable of, and, crucially, who is stealing it.

Following the breach of the lab, M (Ralph Fiennes) sends for 007—but he doesn’t mean James Bond. Bond is five years into a long overdue retirement, whiling away his days fishing and sailing out of a luxury bungalow on the Jamaican coast. He has dropped off the grid following the film’s cold open, a bifurcated tale that is one part flashback for erstwhile Bond paramour Madeleine Swann (Lea Seydoux, returning from Spectre) and one part resolution to Bond’s leftover affection for Vesper Lynd of Casino Royale. We learn some of Madeleine’s tragic backstory, and we see an attempt on Bond’s life by agents of Blofeld (Christoph Waltz) in the Italian town where Vesper is buried. Bond assumes that Madeleine had something to do with Blofeld’s men finding and almost killing him, so he puts her on a train and disappears from her life—or so they both think.

When we catch up to Bond he’s had an unexpected visit from Felix Leiter (Jeffrey Wright) and an even more unexpected visit from the new 007—Nomi (Lashana Lynch). Leiter wants Bond to help him find the missing scientist from the lab heist scene. Nomi wants Bond to stay out of it. Bond can’t resist getting involved, and so he’s off to Cuba.

In Cuba it becomes clear that much more is going on than a simple laboratory theft, and even clearer that Blofeld and SPECTRE are not behind it. Bond and his contact in Cuba, Paloma (Ana de Armas), walk into a trap, and after extricating themselves from that and swiping the scientist from Nomi, Bond and Leiter are betrayed.

From here Bond returns to London and M’s office—now wearing a “Visitor” ID badge—and applies himself to some detective work. He gains an interview with the imprisoned Blofeld but must be accompanied by Blofeld’s psychiatrist, who turns out to be Madeleine. Her discomfort at seeing Bond again after several years is not all down to failed relationship awkwardness. She has recently been approached by an ominous figure from that flashback in the cold open, Lyutsifer Safin (Rami Malek), who has a special request for her.

Events in London don’t end well, and so while Nomi tracks down a lead on a double agent, Bond is off to Norway to reconnect with Madeleine and learn more about the threat that is slowly and lethally revealing itself.

I can’t say much more about the plot without giving things away, but suffice it to say that the movie clips along brilliantly from Norway to its final destination, slowly gathering speed until the final confrontation and revelation.

The good

No Time to Die is a whole lot of movie. It’s two hours and forty-three minutes long but (mostly) keeps moving, helped along by a lot of traditional Bond globetrotting, energetic and well-staged action scenes, Hans Zimmer’s score, good performances, and a (mostly) intriguing plot.

The film is brilliant on the technical side. The costumes and sets look great, especially the concrete bunkers where Safin and Bond face each other down at the end, and the special effects and stunt work are outstanding. All of the action scenes are good, not only enjoyable but exciting. I particularly liked a fight in a foggy Norwegian forest, in which Bond is forced to improvise in much the way he did at the end of Skyfall, and the climactic sequence, set in a repurposed Cold War-era missile silo, feels like a level from GoldenEye for N64 in the best possible way. I was genuinely thrilled as the film approached its conclusion.

No Time to Die is also beautiful to look at, as it was shot on 65mm film and mostly in the real places where the film takes place—Italy, London, Jamaica, Norway, and finally the Faroe Islands standing in for “disputed islands” between Japan and Russia. The cinematographer, Linus Sandgren, makes full use of the format for Lawrence of Arabia-scale landscapes and beauty shots. The lab heist at the beginning, taking place high on a London skyscraper at sunset, is stunning.

The cast perform well from top to bottom, though I felt Ralph Fiennes could have used more screentime and Christoph Waltz needed either more or none at all. Like some others, I was worried about the direction the film would take Nomi as a “new” 007, but I was quite pleasantly surprised. Nomi’s got a big attitude, but after some genuinely enjoyable rivalry in the early going she and Bond learn to respect and work with each other. They even have some of the film’s best banter, with Bond often getting the last word. It’s also in this sniping back-and-forth with Nomi that we get the clearest glimpses of the Bond from the early Craig movies rather than the sleepwalker from Spectre.

And while I’ve seen a few people criticize Rami Malek’s Safin as a “weak” Bond villain, I disagree. I found him a real threat, and his first appearance since the cold open, in which he is masked, is genuinely menacing. What I do wish is that he had more time in the film and that his motivations regarding his ultimate plans for Heracles—moving from wiping out certain bad people to spreading it worldwide—were clearer. Is this a eugenics project? Racially motivated genocide—as his pet scientist hints he could do if he wanted? Pure nihilism? I’ve seen the movie twice and I’m still not sure about this.

The film also features some nice nods to previous films and even Fleming’s original stories. The porcelain bulldog willed to Bond by Judi Dench’s M in Skyfall shows up, for example, and Safin’s garden of poisonous plants comes straight from the novel You Only Live Twice.

The bad

I’ve hinted at a few misgivings about the movie. I’m not completely sure the plot involving Blofeld leading SPECTRE from prison makes sense. I got more of it on a second viewing—for example, how one of Blofeld’s henchmen wound up working as one of Safin’s henchmen later in the film—but I think the plot moves quickly through this to conceal its basic implausibility.

Unusually for Bond, he is also prone to speechifying in this film. Two scenes stand out—one about halfway through in which he opens his heart to Madeleine, and one in which he berates Safin as an “angry little man” who is “playing God.” Bond can care deeply about women he loves and loathe a hubristic enemy, but this has seldom been something laid out for the audience in soliloquy. This is not a big problem, but an oddity of the writing and one that doesn’t jibe with what we know of Bond. (The movie overexplains things elsewhere, too, as when the scientist, having gone rogue, narrates aloud what is happening, or when Q’s computer announces “Blofeld’s bionic eyeball unlocked,” a system message that deserves to become immortal.)

No Time to Die is overlong, and it is overstuffed. Its fast and well-managed pacing doesn’t fully resolve either of these problems (I’ll have a lot more to say about that below). When I write that M or Safin could use more screentime, I’m not arguing for a longer movie but a re-proportioned one, one that trims away or removes some sidestories and subplots that, while contributing to the plot, also add to the sense of bloat and diffusion.

And I think I know why this is, but I can’t examine that without giving things away. So if you’ve seen the movie, feel free to continue into the spoiler territory below.

My biggest misgiving—spoilers ahead

I need to set this complicated but ultimately rather mild criticism up with two spoilers: Bond and Madeleine have a five-year-old daughter, Mathilde; and, at the end, Bond, infected with Heracles nanobots that would target and kill Madeleine and Mathilde if he so much as touches them, dies stopping Safin’s plan.

I think No Time to Die missteps by bringing Madeleine back from Spectre—and in giving her and Bond a child. The film would be shorter, tighter, and—in Bond’s death—more powerful without Madeleine.


Some of my criticism is purely practical. With the exception of Sylvia Trench, who disappeared after the first two Connery films, Bond has never had a girlfriend carry over from one movie to another. It’s out of character.

Furthermore, Madeleine being the daughter of Mr White requires the audience to recall who Mr White is and some of what he’s been responsible for if they want to understand Safin’s motivation. This is a tall order for casual fans. (My wife, for instance, was mystified by the connection.) While there have always been some continuities in the Bond series, the films mostly stand alone—for a reason.

(Also: I’m not usually the type to make internet neckbeard arguments about believability, but I did have to question the wisdom of Her Majesty’s government in clearing Madeleine to meet regularly with Blofeld, given that she’s the daughter of a known terrorist financier with ties to two other terrorist leaders. There are surely other psychiatrists in London.)

It also seems like the screenwriters never quite decided what to do with Mathilde, Bond and Madeleine’s daughter. After Safin kidnaps her, he uses Mathilde as a powerful bargaining chip, manipulating both Madeleine and, in one of the film’s tensest scenes, Bond. But when Mathilde becomes even slightly inconvenient he abandons her. Thereafter, Mathilde becomes what TV Tropes calls The Load, a helpless human cargo for the hero to heft along while also fighting the bad guy.

And that, the conclusion, with Bond facing his death, is the root of my biggest misgiving. Wounded, bleeding out, and infected with Heracles, Bond calls in a missile strike that will obliterate Safin’s stores of Heracles but will also surely kill him. He has a tearful conversation with Madeleine by radio. She confirms that Mathilde is his child. He tells her he loves her. He smiles. The missiles home in and Bond dies a glorious death.

And it doesn’t quite work—at least not for me.

What the movie gets right is that Bond can’t grow old and harmless, withering into senescence in retirement. If Bond is ever to die, it must be in the line of duty, because it has always been duty—Queen and Country—that drives him, and he has always endured, with a stiff upper lip and a wry comment, as a result. Men like Bond come and go but England is forever, even with the collapse and irrelevance of the Empire. A brief exchange between Safin and Bond nails this. Called “redundant” by Safin, Bond begs to differ: “Not as long as there are men like you around.”

What the movie gets wrong is its diffusion of our investment in Bond’s sacrifice. Are we pulling for Bond to succeed and weeping at his sacrifice because he’s defeating an enemy of Britain and saving the world? That works. Are we pulling for Bond to succeed and weeping at his sacrifice because he’ll save but never again see Madeleine and Mathilde? That also works. Either of these things could have been fine, but not both together. Ironically, by bringing Madeleine back and giving her and Bond a child, presumably to give Bond a more intimate, personal stake in the plot, the screenwriters actually lower the stakes. This might have worked, because Skyfall did it—and did it better, because there the personal and the patriotic were united in the figure of M. Here, the two halves of Bond’s motivation are separate and distract from each other.

Trimming these subplots, especially replacing Madeleine with another character with no tie to the events of previous films, could have untangled some of the middle of the film’s plot complications and, in the conclusion, allowed the audience to focus solely on Bond’s confrontation with Safin.

The result, ultimately, is a fast-paced but overcomplicated plot that also doesn’t quite work tonally or in terms of Bond’s character.

Conclusion

That’s a lot of explanation about what I think doesn’t quite gel in No Time to Die, but there I’m minutely examining the difference between a B+ and a B- movie. Most of the film works, and works well. I just think it could have been even better. Even though it is long, slightly overcomplicated, and takes one misstep too many in the conclusion, its fast pace, excellent action, and great performances will keep the Bond name alive for a long time to come. It’s a worthy conclusion for Daniel Craig’s run as Bond.

I’m going to miss Craig as 007. For me, No Time to Die ranks below, but not too far below, the two masterpieces from his tenure—Casino Royale and Skyfall. I don’t know what will come after Craig, but I’m hopeful, and I appreciate what he brought to the character for fifteen years.

Thank you, 007. That’ll be all.

Robert E Lee: A Life

Allen Guelzo’s new biography of Robert E Lee could not have been better timed. Guelzo began it eight years ago, in the late years of the Obama administration, worked on it through the Trump years and the Charlottesville riot, and finished it during the COVID-19 pandemic and the social upheavals of the summer of 2020; it appeared in bookstores just a few weeks after Richmond’s monumental equestrian statue of Lee and Traveller came down. The time is ripe for a well-researched, well-argued, measured look at the real man behind the many propaganda versions of Robert E Lee.

And Guelzo’s Robert E Lee: A Life almost perfectly—almost—fits the bill.

The life

Even before examining Lee’s Virginian ancestry, the standard way to begin a life of Lee, Guelzo opens with an admission: his bias. Guelzo is “a Yankee from Yankeeland” and can only, in the end, regard Lee as a traitor. This would seem to close off certain sympathies or lines of questioning from the start, but Guelzo is intellectually honest enough to work through and against his bias most of the time (about which more below), and, frankly, I can get along with someone who is upfront about his bias. Charitable but not uncritical, the book is a long way from the smears and hatchet jobs—here’s the worst, a mendacious piece from The Atlantic that just will not go away—that commonly circulate today.

With this off his chest, Guelzo begins with Lee’s father, “Light Horse Harry” Lee. Light Horse Harry was an eager young cavalryman during the Revolution who impressed no less than George Washington and Nathanael Greene with his vigor, but proved an utter failure in post-war political life—not to mention his private life. He made many political enemies and just as many disastrous investments. He twice married into wealth, and the second time it was only a legal provision by his in-laws prohibiting him from having access to his wife’s money that kept the family from going completely broke. By the time Robert was born, the eighth of nine children, the family was in dire financial trouble. When Robert was six, Light Horse Harry left for the Caribbean in a scheme to recoup his losses and return in triumph. Instead he fell ill and died in Georgia on the return trip, when Robert was still only eleven.

For Guelzo, the character and death of Robert Lee’s father, a politically inept wastrel who more or less abandoned his family, form one of the seminal moments in Lee’s life. For the rest of his life, Guelzo convincingly argues, Lee was motivated to be the man his father—and several other older male relations—failed to be: to pursue independence, scrupulous financial solvency, and personal moral perfection. The negative example of his own father was ever before him. Guelzo notes that right up until the Civil War, Lee was often identified as the son of Light Horse Harry Lee. The constant reminder, he suggests, was Lee’s hairshirt.

Guelzo spends a good amount of time on Lee’s youth, education, and early military career. Following his graduation from West Point, Lee spent decades as an army engineer, working on east coast fortifications like Fort Pulaski and Fortress Monroe or western “improvements,” like a project to redirect the Mississippi in order to prevent the port of St Louis from silting up. A chapter on Lee’s performance in the Mexican War, in which he began as an engineer on the staff of General Winfield Scott but ended up as Scott’s favored reconnaissance officer and military protégé, is especially good, as Guelzo notes what Lee learned by example from the United States’s shameful perfidy toward Mexico and Scott’s high-minded and idealistically honorable conduct of the war.

These chapters, covering approximately the first two hundred pages, are well spent and give proper proportion to Lee’s life before the Civil War. Throughout, Guelzo takes careful note of Lee’s uprightness and strength of character—already remarked upon in his teens and twenties—and his gravitation toward older male mentors, a series of army officers culminating in General Winfield Scott. He also marks Lee’s constant fretting about money despite being, by the standards of the time, apparently well situated; his marriage and family life (which can easily go missing in military biographies); and even the development of his religious beliefs, which began as what Guelzo characterizes as a noncommittal “genteel low church Episcopalianism” that gradually, especially during the war years, grew more open and more fervent.

Guelzo also carefully examines Lee’s political ideas—what there are of them—an inherited Federalist sentiment that evolved toward a preference for the anti-populist, anti-Jacksonian Whigs over the Democrats. But, most importantly, Guelzo notes Lee’s early apolitical stance, a stance maintained with greater and greater tenacity as political strife became more and more difficult to avoid.

Crucially, Guelzo brings the reader into the first great test of Lee’s resolve on all of his goals—personal independence, care for family, financial stability: the death of his father-in-law, George Washington Parke Custis, a gregarious dabbler and, as lord of Arlington plantation in Alexandria, one of the largest slaveowners in northern Virginia. When Custis died in 1857, he left behind a poorly managed and run-down estate and seemingly impossible provisions in his will—generous legacies for his grandchildren, the payment of his many creditors, and the manumission of the Custis slaves within five years—and named Lee as executor. Seeing the provisions of the will carried out to the letter consumed Lee’s life into the early years of the Civil War and still caused problems in the years afterward.

When the crisis of the Union comes, Guelzo gives a thorough and detailed examination of Lee’s competing instincts and loyalties—the will to avoid politics, his loathing of slavery and secessionism, his fear of the federal government’s use of force against other Americans, and his loyalty to family and, by extension, home state. He also lays out, before, during, and after the war, the legal difficulties involved in the Constitution’s ambiguous (and, I would argue, factitious) definition of treason. (A reviewer at National Review also notes that, at the time, the oath sworn by army officers was written with “United States” grammatically plural and was only changed because of the Civil War. Guelzo includes the text of the oath but does not draw attention to this.)

The chapters on the Civil War proper, what I imagine a lot of people will read the book for, are excellent. I may not agree with all of Guelzo’s perspectives on the fighting or the personalities involved, but the book offers an engaging—even exciting and moving—and authoritative tour of the conflict through Lee’s involvement as commander of the Army of Northern Virginia. Throughout, Guelzo never loses sight of Lee the man, and keeps the reader abreast not only of the campaigns but of Lee’s personal life. Lee lost not only the property bequeathed to his children during the war, but a daughter and a grandson, and one of his sons was captured and threatened with hanging by his Union captors. He also had at least two heart attacks.

The final chapters follow Lee’s postwar years as president of Washington College in Lexington. Here Guelzo gives much more thorough coverage than is typical of Lee biographies, and this attention is welcome. Much more than a famous name or a figurehead, Lee oversaw a revival and expansion of the college that helped it survive the lean years following the war.

Perhaps the highest praise I can give the book is to note my hesitation to read the final few chapters. When one reads a biography one knows the end must come, but as I sensed its approach in the onward march of Reconstruction and Lee’s steeply declining health, I read with trepidation. Guelzo’s description of Lee’s death, coming at the end of this involving and intimately personal look at the man, moved me deeply.

The value

Guelzo, despite his openly stated bias in favor of a nationalist vision of a divinely ordained and indivisible Union—a bias and vision I don’t share—does a great deal to help Lee’s reputation in this hostile age. He brings an intellectual honesty to much of his account that pokes holes in a lot of simplistic versions of Lee and simply debunks others.

He demonstrates, for example, that Lee’s regard for slavery as an “evil” was not mere rhetoric. Did Lee benefit from slavery? Yes—who didn’t in that age? Did Lee have modern attitudes toward race? No—who did in that age? Guelzo does not conceal these facts, but he also points out where Lee was exceptional in this regard. The only slaves Lee ever personally owned were a single family inherited from his mother, a family whom he liberated when he didn’t have to. He worked especially hard to see to it that all of his father-in-law’s slaves were freed by the stipulated deadline—again, as Guelzo points out, at great inconvenience to himself and despite the chaotic early years of the war, the occupation of Arlington by the Union army, and the ready availability of Confederate judges who could have voided that provision of the will. Guelzo also shows how, after the war, Lee used his position as president of Washington College to prevent racial violence in Lexington, handing out harsh penalties or outright dismissals from the college to students who assaulted or antagonized freedmen.

Militarily, Guelzo also has insightful critiques and reassessments of Lee’s capabilities as a general. While a commonly repeated consensus is that Lee was a master tactician but a poor strategist, Guelzo makes a very good case for the accuracy of Lee’s strategic vision, that Lee understood early that the war would have to be won quickly, and virtually within sight of Washington, DC, in order to prevent the triumph of the Union through sheer scale and manpower. This was a refreshing and interesting perspective.

Guelzo also, in the book’s complex and challenging epilogue, deflates the commonly repeated cliché that Confederate monuments were put up as subtle semiotic violence toward African Americans, using the Lee monument in Charlottesville as an example. By the time Guelzo reaches latter-day conflicts, the comparison with the life so meticulously laid out in the preceding 400 pages renders the present appropriately small, tawdry, and depressing. Guelzo also pushes back against some of the more unfair misrepresentations of Lee’s character popularized by Thomas Connelly in his 1977 book The Marble Man.

Overall, the portrait Guelzo presents the reader is of Lee as a flawed but good and principled man burdened with impossible personal standards, a man characterized more often than not by frustration—with the army, with newspapers, with the US government, with the Confederate government, with the slaves of his father-in-law’s plantation, with Arlington itself, the apple of his homebody wife’s eye—and a man who, from his return from Mexico onward, was deeply unhappy. I would dispute some of this. Guelzo gives us some glimpses of the deep affection Lee felt for his children, his charm and gentility toward women generally, and his sense of humor and fun, but these dimensions of his personality are often missing.

This may not be a complete portrait of the man, but it is a good portrait.

Quibbles

Nevertheless, I have quibbles, mostly relating to places where Guelzo betrays his bias. I want to point out three, with examples, so pardon the length. If you’re not here for the long haul, you can jump straight to my conclusion below. Otherwise, look at these three narrative tics and notice how they overlap.

The first is simply stylistic, relating to how Guelzo incorporates quotations, especially from letters and other primary sources, and what in fiction are called “dialogue tags.” Here are a few samples:

But [a steamboat] could carry many times the commercial load of the rafts and barges, and unlike the flatboats and keelboats it could turn around and breast the Mississippi’s current for a trip upstream, making possible a complete circuit of the entire river highway between New Orleans and Pittsburgh. “What a prospect of commerce is held out to the immense regions of the west, by the use of these boats!” drooled Niles’ Weekly Register (67).

“The rumor which has reached me of this distressing event, I could not before credit, nor can I even now realize its truth,” Lee wailed in a letter to Gratiot on December 23 (74).

The first was President Polk’s political jealousy of General Zachary Taylor, whose modest victories were already “giving great uneasiness to the administration” and leading to discussions about a presidential bid by the old planter-general. “These officers are all Whigs and violent partisans,” Polk spluttered (89).

I don’t think there’s much in these quotations to warrant the verbs drool, wail, or splutter. These are the wildest examples, but throughout—and clustering noticeably in the two early chapters I pulled these from—Guelzo overdoes it with these tags. It’s distracting and sometimes comical, and while I don’t think Guelzo intended this, they also carry a faint air of derision wherever they appear.

Similarly, Guelzo occasionally editorializes in the middle of his narrative, often with an “it never seems to have occurred to him,” a tic I noticed early on, as here:

“I seem to think that Said opportunity is to drop in my lap like a ripe pear,” he admitted. Nevertheless, he persisted in believing that it was “remarkable that a man of my Standing should not have been Sought after by all these Companies for internal improvement.” It seems never to have occurred to Lee to go looking for those companies and opportunities on his own, or that the coastal engineering projects that had consumed his career thus far were of little interest to the infinitely more lucrative inland projects of railroads, real estate, and bridges (73).

Even Rob, at ten years old, remembered that Lee made a fetish of being “punctual” and on Sunday mornings would “appear some minutes before the rest of us,” ready to proceed to the academy chapel, and “rallying my mother for being late, and for forgetting something at the last moment.” If [Mary Lee] strained his patience, “he was off and would march along to church himself, or with any of the children who were ready.” (It never occurred to her husband that Mary Lee’s slowness might be due to some other cause than forgetfulness.) (132)

“These people [the Union army] delight to destroy the weak, and those who can make no defence,” Lee fumed, as if the wounds of Arlington and White House had been reopened by the destruction of Fredericksburg; it never occurred to him that Fredericksburg’s enslaved population might look on the arrival of the Union Army in a very different light (274).

There were other issues, too, that fueled [Lee’s] bitterness over Union conduct, which seemed to diverge so wantonly and destructively from the scrupulous pattern Winfield Scott had set long ago in Mexico. (It never seems to have occurred to him that the barbarities of slavery were worth weighing in the balance, or that those barbarities and the men who excused them were precisely what he was, objectively, protecting.) (310)

Again there is that faint whiff of derision or scorn. Some of this Guelzo just can’t know (about which more below), and some of it represents an abandonment of his quest to understand the man. The latter two examples are especially frustrating, as Guelzo otherwise devotes so much time and effort to exploring and explaining Lee’s negative view of slavery, his rigorous soldierly avoidance of political questions, and his scrupulous attempts to uphold honorable and civilized standards of warfare. But here, instead of trying to integrate all of this understanding and interpret these events or passages in light of it, he simply gives up in favor of hectoring Lee for failing to be Allen Guelzo.

Finally, there’s not much of it, but there is, crucially, more psychologizing than I prefer in a historical work. Two examples:

Mary herself maintained an informal Sunday school at Arlington for slave children, teaching them (in quiet violation of Virginia law but mostly to satisfy her own sense of a white woman’s obligation to lesser beings) to read in “a little school house” in the woods (145).

Again, how can Guelzo possibly know this? Is obligation utterly incompatible with a sincere desire to help? Is this not a species of what modern people rather cheaply call “giving back”? And this comes in the same paragraph as a description of Lee purchasing Mary a lifetime membership in the American Colonization Society and paying the way for manumitted slaves to migrate to Liberia at his own expense.

Then there’s Guelzo’s handling of the notorious Norris incident, in which three of Mary’s father’s slaves, including a man named Wesley Norris, ran away from Arlington, were returned, and whipped. A lurid denunciation of Lee, as the executor of his late father-in-law’s will, appeared shortly afterward in Horace Greeley’s anti-slavery New-York Tribune and, after the war, when Radical Republicans were looking for excuses to prosecute or hang ex-Confederates, a yet more elaborate version appeared in an explicitly abolitionist newspaper. My point here is that our sources for these details had plenty of motivation to exaggerate or fabricate. (Arlington’s official account of this incident, appropriately cautious, hedges its bets, concluding on a note invoking what one might call “emotional truth.”)

While Guelzo accepts more of the story than I tend to (I’m disinclined to think it’s a complete fabrication, but that’s a debate about historiography and sources for another time), he follows his account with this:

A week later [after the appearance of the Tribune story], he wrote to Custis Lee . . . wondering whether “you have been told that George Wesly and Mary Norris absconded some months ago, were captured in Maryland, making their way to Pennsylvania, brought back, and are now hired out in lower Virginia.” He said nothing about the whipping, except to acknowledge that “the N.Y. Tribune has attacked me for my treatment of your grandfather’s slaves.” He added, cryptically, “I shall not reply.”

But he could not bring himself actually to deny that he had done what the Tribune described, and it is difficult to avoid the conclusion that when his fury had cooled, he was sickened at himself, as much for the damage done to his own self-image as for the cruelty inflicted on the three fugitives. In that moment, he had reverted to Light Horse Harry, spiking a deserter’s head on a pole (157-8).

This is a lot to read into Lee’s refusal to write a rebuttal to a hostile newspaper, and even gets into the cheap father complex stuff that mars a more popular-level biography like Roy Blount Jr’s. This bit of psychological speculation—because it is, after all, only speculation—is especially egregious since Guelzo notes many other instances in which Lee, protective of his and his family’s honor and cognizant of how degrading political debate and newspaper mudslinging could be, refused to descend to the level of his critics—both North and South, and before, during, and after the Civil War. Even today Southerners of a certain background are raised neither to answer nor even acknowledge unfair criticism, an ideal that, as in Lee’s lifetime, is becoming harder and harder to live out. In cases like these it appears—it never occurs to him?—that Guelzo is simply unwilling or unable to understand his subjects fully or to allow their actions to speak for themselves.

Or perhaps it’s simply his bias again. Just a few too many times Guelzo gives in to an impulse to hold Lee at arm’s length and superficially critique him, forgetting for a moment some of his own carefully researched insight into the man. Though he capably unpacks many of the factors that made Lee into the man he was, most especially the lifelong negative example of Lee’s own father, Guelzo never entirely overcomes cultural blindspots like this, and his picture of Lee, though strong, deeply researched, and mostly fair, never completely coheres.

Imperfections—precisely what Lee dreaded.

Nevertheless

If I have dwelt at length on these flaws—which are more nagging interpretive tics suggestive of an underlying unwillingness to comprehend than narrative-wrecking errors—it’s because Guelzo’s book is otherwise so good. Guelzo’s standards of research are extraordinary, his coverage is meticulous, his account is fair and openminded toward Lee most of the time, and his writing is excellent. I finished the 430+ page body of the work in just over a week, and not only because of my interest in the subject.

And most importantly, Guelzo’s commitment to research and to finding the human being underneath the partisan versions of Lee helps puncture a number of useful misconceptions—or outright fictions. And this in spite of a bias that can lead Guelzo to clearly lay out Lee’s understanding of his own situation, the ambiguity and uncertainty of his position, and his reasons for resigning from the US Army and going with Virginia—which had not yet seceded at the time of his resignation—and still write in the book’s conclusion what amounts to a “Meh—treason.” Again, that clearly stated bias. Nevertheless, this is the fairest shake I expect Lee to get anytime soon.

But in a way, despite the nagging issues I’ve examined, Guelzo’s bias—honestly admitted from the get-go—may prove to be a strength. This book, coming from this author with this perspective but still striving both to understand and to make Lee comprehensible, may get a hearing that older biographies would not. And an attempt at an honest account, one that seeks however imperfectly to explain Lee on his own terms, is welcome in this day and age.

My recurring thought over the history, monument, and naming debates of the last several years has been Go read a book. For those honest and openminded enough to do so, Guelzo’s Robert E Lee: A Life may be the right book at the right time.

More if you’re interested

In the epilogue, Guelzo graciously refers to Emory Thomas’s biography of Lee as “the best and most balanced of any single-volume Lee biography,” and I think I agree. I recommend it. I also recommend William C Davis’s Crucible of Command, which is a dual biography of Lee and his opposite number, Grant. It’s one of the best books I’ve read on either man or the Civil War generally. Guelzo is mildly dismissive of Douglas Southall Freeman, author of the Pulitzer Prize-winning four-volume RE Lee, but Freeman’s research is still unparalleled and his perspective less “worshipful” than it is often accused of being. (Ironically, Freeman was praised at the time for presenting a realistic and human portrait of a man long “viewed by [other] biographers through the rose-tinted glasses of romance.”) The one-volume abridgement Lee is still worth reading.

If you’re pressed for time, some good essays (a couple of which I recommended before, in a post for the sesquicentennial of Lee’s death a year ago today):

And several of y’all sent me a link to theologian Kevin DeYoung’s interview with Guelzo, which is a good one-hour introduction to the topic with, again, Guelzo being quite upfront about his biases.

Chesterton on Poe

I’ve been thinking a lot about Edgar Allan Poe for, oh, the last twenty-seven years or so, and so I was struck yesterday to catch the following in the first chapter of GK Chesterton’s Orthodoxy, a book I’ve read many times:

Poets do not go mad; but chess-players do. Mathematicians go mad, and cashiers; but creative artists very seldom. I am not, as will be seen, in any sense attacking logic: I only say that this danger does lie in logic, not in imagination. Artistic paternity is as wholesome as physical paternity. Moreover, it is worthy of remark that when a poet really was morbid it was commonly because he had some weak spot of rationality on his brain. Poe, for instance, really was morbid; not because he was poetical, but because he was specially analytical.

For comparison, here’s a really interesting review of The Reason for the Darkness of the Night, by John Tresch, a new biography that approaches Poe—who had spent time in the army at Fort Moultrie and a very short stint at West Point, which was then primarily an engineering school, and who went on to influence both science fiction and “ratiocination”-based detective stories as genres—as a frustrated scientist. Here’s the reviewer on Poe’s Eureka: A Prose Poem, published a year before his untimely death. I quote at length to make a point:

As the reviews–which were overwhelmingly savage–pointed out, it contains no actual scientific research: Poe reaches his conclusions by ‘ratiocination’, the method also favoured by his fictional detective Auguste Dupin. In its grandiose and disorientating shifts of perspective, it bears a closer resemblance to the diorama shows of New York’s entertainment palaces than it does to a scientific treatise.

Yet Tresch finds method in the madness of Eureka. Poe conceives the universe in terms of an eternal flux between the forces of attraction and repulsion. Matter and soul, time and space are all manifestations of the same essence. Attraction is the force that manifests in matter and gravity, while repulsion imbues electricity, life and spirit. The universe began in a spasm of repulsion, diffusing outwards to create multiplicity out of unity, before the forces of attraction–as described by Newton’s theory of gravity and Laplace’s nebular hypothesis–drew it back into clumps of matter. The process cycles constantly through microcosm and macrocosm, at every scale from the microbial to the galactic. It is the breath of life, the heartbeat of the universe, and will continue until a final collapse in which all life and consciousness will merge into the unity from which they arose.

Solipsistic and extravagantly overreaching, Eureka was nonetheless a sincere attempt to formulate a holistic science in opposition to the narrow specialisms that Poe saw hardening around him.

“Morbid . . . not because he was poetical, but because he was specially analytical” is tough but fair. It also fairly neatly illustrates a passage from later in the same paragraph of Orthodoxy, in which Chesterton writes that

To accept everything is an exercise, to understand everything a strain. The poet only desires exaltation and expansion, a world to stretch himself in. The poet only asks to get his head into the heavens. It is the logician who seeks to get the heavens into his head. And it is his head that splits.

This aside about Poe got me poking around looking for other places where Chesterton had written about him. While I found several offhand remarks and allusions, the longest reference, one that indicates intimate familiarity with Poe’s work, comes in chapter two of Chesterton’s literary study Robert Louis Stevenson. Here Chesterton contends with the basic artistic sensibilities of Stevenson and Poe, who were often at that time—Chesterton thinks unjustly—compared:

Now in the same manner there is at the back of every artist's mind something like a pattern or a type of architecture. The original quality in any man of imagination is imagery. It is a thing like the landscapes of his dreams; the sort of world he would wish to make or in which he would wish to wander; the strange flora and fauna of his own secret planet; the sort of thing that he likes to think about. This general atmosphere, and pattern or structure of growth, governs all his creations however varied; and because he can in this sense create a world, he is in this sense a creator; the image of God. Now everybody knows what was in this sense the atmosphere and architecture of Poe. Dark wine, dying lamps, drugging odours, a sense of being stifled in curtains of black velvet, a substance which is at once utterly black and unfathomably soft, all carried with them a sense of indefinite and infinite decay. The word infinite is not itself used indefinitely. The point of Poe is that we feel that everything is decaying, including ourselves; faces are already growing featureless like those of lepers; roof-trees are rotting from root to roof; one great grey fungus as vast as a forest is sucking up life rather than giving it forth; mirrored in stagnant pools like lakes of poison which yet fade without line or frontier into the swamp. The stars are not clean in his sight; but are rather more worlds made for worms. And this corruption is increased, by an intense imaginative genius, with the addition of a satin surface of luxury and even a terrible sort of comfort. “Purple cushions that the lamplight gloated o’er” is in the spirit of his brother Baudelaire who wrote of divans profonds comme les tombeaux [divans as deep as tombs]. This dark luxury has something almost liquid about it. Its laxity seems to be betraying more vividly how all these things are being sucked away from us, down a slow whirlpool more like a moving swamp. 
That is the atmosphere of Edgar Allan Poe; a sort of rich rottenness of decomposition, with something thick and narcotic in the very air. It is idle to describe what so darkly and magnificently describes itself.

When I say that Chesterton finds the comparison of Stevenson to Poe unjust, I think he finds it unjust to both men. Because while this is a vivid, meticulous, and incisive riff on “the atmosphere and architecture” of Poe’s stories and verse, I don’t read it as necessarily condemnatory. Chesterton’s whole point is the simple unlikeness of Stevenson and Poe. Where Poe is morbid, he is nevertheless good at what he does. And some of us certainly respond readily to the “dark wine, dying lamps,” and “dark luxury” of Poe’s vision.

Elsewhere in my cursory search I ran across the following from Jorge Luis Borges, from “On Chesterton” in an essay collection called Other Inquisitions, 1937-1952. After describing several fantastical scenarios from Chesterton stories, Borges writes:

These examples, which could easily be multiplied, prove that Chesterton restrained himself from being Edgar Allan Poe or Franz Kafka, but something in the makeup of his personality leaned toward the nightmarish, something secret, and blind, and central.

This consonance of Chesterton’s and Poe’s imaginations at one point had never occurred to me. Both have something high-flown, florid, and larger than life in the wild and fevered incidents of their stories. Compare the violence in any two of their stories, especially in the way the violence is conveyed by the operatic horror of observers (or narrators); and look at the frantic tone of headlong pursuit—or is it flight? who can tell?—in The Man Who Was Thursday with any number of Poe tales. And keep in mind that The Man Who Was Thursday’s subtitle is A Nightmare.

Food for thought. There’s a lot there, and I’m surprised it took me this long to connect two of my favorite writers in this way. I hope to write more on this later.

In the meantime, here’s a good short TED-Ed video on Poe.