Lying and counting the inexplicable

The Messenger, Luc Besson’s brutal, ugly, inaccurate, and very, very late-90s film about St Joan of Arc, is a terrible movie, but it has one brilliant scene that I’ve reflected on since the one time I watched it more than twenty years ago.

Late in the story, as Joan sits in prison awaiting trial and sentencing, she is visited by a character played by Dustin Hoffman called “The Conscience.” The Conscience has a literally satanic role as an accuser, introducing doubt where Joan has heretofore felt only conviction. His interrogation eventually centers on Joan’s sword, which she miraculously found in a field, an event she took as a calling from God. The Conscience seizes on this, pointing out that it is not self-evidently a sign, but simply a sword in a field. In an increasingly rapid montage, the Conscience suggests many possible ways the sword could have wound up there that did not require God placing it there for her to find.

Having run through several scenarios in which the sword is dropped during combat or simply lost by accident, the Conscience says, “And that’s without counting the inexplicable.” Whereupon we see a man trudging through the same field carrying the sword, which he throws, entirely unprompted, into the tall grass. He doesn’t even stop walking.

The scene is clearly meant to mock supernatural belief—and it doesn’t even get St Joan’s history with that sword right—but that penultimate image of “the inexplicable” makes a valid, important point.

A young true crime YouTuber got me thinking about the Conscience and the inexplicable again. In my constant search for another Lemmino, I’ve tried out a lot of documentary channels on YouTube. Sturgeon’s Law being what it is, most of them aren’t very good. But in the course of finding a handful of decent documentary YouTubers to listen to or watch as I do the dishes or make the kids’ lunches, I’ve noticed that even the best of them have a persistent flaw.

The YouTuber in question is a college graduate with a degree in history. He’s smart, funny, and clearly paid attention in his historiography classes, as he demonstrates a good historian’s grasp of how to gather and assess evidence—most of the time. Faced with contradictory or irreconcilable details in whatever evidence he’s gathered (usually on old missing persons cases), he is far too willing to declare that someone is lying. Not mistaken, ignorant, misremembering, or misinformed—lying.

Part of this may be generational and cultural. I’m a geriatric Millennial from the Deep South, where accusing someone of lying is still serious business, and he’s a northern Zoomer. But it’s also a historiographic problem.

The accusation usually stems from discrepancies in whatever evidence is available—note that—and unacknowledged subjective impressions of the people involved. Discrepancies, in true crime theorizing, offer the same incentive that “anomalies” do to the conspiracy theorist. Our YouTuber falls into this trap whenever he takes discrepancies as evidence of willful deceit.

A lightly fictionalized version of a real example:

Two tourists disappear while hiking in Central America. Their diaries, when found, include a final entry on Monday, April 20. Locals confirm this date. But another tourist who briefly got to know them before their disappearance later recalls seeing the two tourists on Tuesday the 21st. Yet when first asked about the missing tourists by the police, records show she stated that she had seen them on the Monday. Why did she change her story? Why is she lying?

The most likely answer is that she didn’t, and she’s not.

Imagine meeting two strangers in a foreign country. You see them again sometime later. Being recent acquaintances, you notice them, but you’re busy with your own business. When they disappear, it turns out you’re one of the last people known to have seen them alive. Suddenly, details of that day take on a significance you never could have anticipated, you’re forced to try to recall things you never knew you would need to remember, and you may not have learned about the disappearance for days or weeks after it happened.

You’re interviewed by local police and by investigators from the tourists’ home country. You return to your own country and your previous life, and years go by. The investigation is reopened several times and you are interviewed again at some point in the process. How well will you remember these things this time? How well did you remember them in the first place?

No one in this scenario is lying, covering things up, or changing their stories. People make mistakes, misremember things, have their memories tainted by bad information relayed from someone else, or simply don’t know. None of this is “lying.”

This is where the inexplicable comes in. Without even factoring in these faults and flaws of memory, how well can you account for your own behavior, even in ordinary circumstances? People do things they can’t even always explain to themselves—out of habit, or intuition, or boredom, or a myriad of other barely conscious non-reasons. To paraphrase a meme, you do just do things.

But imagine a single day of yours is, for reasons beyond your control, placed under a microscope, with the authorities—and YouTubers and podcasters and a legion of other amateurs—poring over your every movement. Who wouldn’t end up looking a little suspicious, especially after being run through a strong rinse of insinuation, as the true crime and conspiracy folks are wont to do? Who couldn’t be accused of lying when forced to remember details you may have forgotten or simply can’t explain?

A few good rules of thumb for sifting evidence:

  • Always assume there is information you don’t have, especially when dealing with incomplete evidence. You can only work from what’s available. It’s irresponsible in the extreme to speculate on casefiles that are still partially classified, but guess what you’ll see on almost every true crime YouTube channel?

  • Even if you have a complete set of the available evidence, remember that the evidence is not a complete account of reality. Every piece has its own built-in biases—limitations in the kind of evidence it’s designed to gather—and will leave things out.

  • Always assume there are things you don’t understand. This is especially important in highly technical cases like the radar, transponder, and cell tower evidence in the Malaysia Airlines Flight 370 case, which—guess what?—our YouTuber, who doesn’t understand a lot about aviation, takes as evidence of the authorities lying. Aviation is a good example because it’s so obviously complex, but there are hidden technical pitfalls everywhere. In our lightly fictionalized example, consider the possible role of customs and immigration law in our tourists’ story, or unspoken local custom, or simple slang. These invisible technicalities can be the most dangerous. Something as simple as failing to keep Old and New Style dates straight in modern history can wreck your study of a specific event.

  • Don’t let your prejudices influence your interpretation. This should be obvious, but how many of us consistently meet this standard? Our YouTuber hates the Royal Canadian Mounted Police and the FBI, so guess how evenhandedly he approaches reports, statements, or other evidence from these institutions? The FBI’s handling of Waco doesn’t mean they’re lying about a child who went missing in the Great Smoky Mountains.

  • Always leave room for the inexplicable. Compare the Umbrella Man. And even if you carefully work through every alternative and can prove someone is lying, as the aforementioned Lemmino points out, you may never determine why they are.

  • Above all, remember historian’s bias. Approaching any event in the past will give us a different perspective and sense of its significance from everyone who actually lived through it. This is especially important to remember for people called to give an account of something that wasn’t significant to them at the time, something that might, in the moment, have seemed able to turn out some other way.

I could go on, but these are handy and important and should remain at the forefront of your mind when doing research. And if these are still not enough to dissuade you from leaping to the conclusion that someone you don’t know, under circumstances you haven’t lived through, that you don’t and can’t ever have a complete picture of, is lying, at least have a high enough regard for truth that accusing someone of falsehood becomes a charge you hesitate to make. Your conscience, at least, should demand as much.

Tron: Ares

Scientists at Disney generate a sequel to Tron and Tron: Legacy

I was one of the handful of people who saw Tron: Ares in theatres last fall. I love and enjoy Tron: Legacy beyond its merits and have shared it with my kids, who revere it, and if Tron: Ares had turned out to be good I planned to take them. I never did—not because it wasn’t good but because it was neither good nor bad enough for me to make up my mind about. I decided to give it another look at home when it came out on Blu-ray. That finally happened this month.

The plot, in brief: Tron: Legacy ended with the escape of a purely digital person into flesh-and-blood reality, and the new film’s very loose connection to that one is in the vast potential latent in the ability to transfer digital assets to reality. Kevin and Sam Flynn’s old company Encom is trying to develop this power to solve all the problems in the world. Old Encom rival Dillinger Systems wants to 3D-print weapons, vehicles, and expendable soldiers to sell to the military. Both are headed by wunderkind CEOs: Encom by Eve Kim, who struggles to keep her idealistic sister’s dream of ending scarcity alive, and Dillinger by the ruthless Julian Dillinger, under the watchful but impotent eye of his mother Elisabeth.

Into this computer arms race steps the Ares of the title. Ares is a combat program created by Dillinger and trained on countless cycles of simulated combat, death, and regeneration. Dillinger shows him off to investors as the crowning achievement of his project. The problem is that Ares—and everything else generated from the system—only lasts twenty-nine minutes in the real world before disintegrating. This fact drives both Kim’s and Dillinger’s pursuit of “the permanence code.”

Through a little friendly corporate espionage, including the use of Ares to penetrate and exploit Encom’s servers in search of the code, Dillinger learns that Kim may have recovered it from old files hidden away by her sister. From this point forward it’s a race for Kim to bring the code safely back to Encom, for Dillinger to stop her and take it—through increasingly desperate means—and for Ares, who has begun questioning his programming, to decide what action to take.

Tron: Ares has a number of weaknesses, the chief of which is that the villain is much, much more interesting than either of the heroes. Eve Kim and friends are annoying do-gooders whom the screenwriters have worked too hard to make plucky and likeable, and Ares, as played by Jared Leto, is too convincingly robotic. Evan Peters’s Julian Dillinger, on the other hand, shows cunning and intelligence from his first scene and an amoral pragmatism barely restrained by the influence of his mother, played with chilly and ambiguous control by Gillian Anderson. The moment Julian has an opportunity to take decisive but irreversible action against his greatest rival, he struggles, but only so much. His lifetime of seizing every opportunity that will benefit himself has led to this, and even though he knows it’s wrong and we know that he’ll choose it, we see and feel the weight of the temptation crush him. Peters is likely the best thing in the movie.

This imbalance affects the entire film. It may be a cliche to point out how bad Jared Leto is since everyone online has been dogpiling him for months, but some cliches become cliches because they’re true. (My kids also insist I point out that he has weird hair. In a more artistic vein, my daughter noted that Ares, as a character, is more interesting in the first few minutes when he wears a mask. The moment Jared Leto’s vapid face is revealed, the mystery dissipates. A sharp observation, I’m proud to say.)

That said, the plot, which is simple but effective despite the banality of the movie’s heroes and escalates nicely heading into the final act, the production design and look of the film, the music, the special effects, and the action scenes make up for a lot. Despite the complexity of some of what the movie is offering, it’s intuitively presented—my kids had no trouble following it. I’ve seen director Joachim Rønning take some flak for Tron: Ares as an unimaginative hired gun, but I think the visual storytelling and style of the film serve the story well. I don’t find Nine Inch Nails’ electronic score as enjoyable by itself as I still do Daft Punk’s incredible Tron: Legacy score, but it works well within the movie.

No one should go into a Tron movie looking for deep ideas. As much as I love Tron: Legacy, its Kevin Flynn is given to some silly opining about how much his video game world will challenge the foundational thought of all of civilization. Spoken like a true techbro. Kim and Dillinger, at least, are less prone to philosophizing. (There is an irony in how this movie asks us to root for the good AI overlords against the bad ones; I found myself wishing both could fail. A touch of tone-deafness on the part of the producers.)

But Tron: Legacy and now Tron: Ares do deliver some great action. My kids found the buildup to the climactic sequence, in which Dillinger, having lost control of his own programs, sees his facility print and dispatch lethal weapons tech into the city in pursuit of Kim and Ares, unbearably suspenseful. It’s well-set up and well-executed, and the Terminator-like indestructibility of Dillinger’s chief henchman posed an intense added threat.

Tron: Ares does not measure up to Tron: Legacy, but it tries to develop one small element of the latter in interesting ways and has satisfying, enjoyable Tron-flavored action. One can’t help but wonder how much better it might have been with a few tweaks, including someone in the title role with more visible depth than Jared Leto (which wouldn’t have happened, as he produced the movie). Having waited several months to rewatch it with my kids, I found myself liking it much more the second time around, not least since they responded so strongly to it.

Impressing kids is not everything, but it’s not nothing, and—following on from The Fantastic Four: First Steps—I’m pleased to have shared it with them. If there are more flawed but enjoyable and workmanlike adventures out there, we’ll take them.

The Fantastic Four: First Steps

Pedro Pascal and Vanessa Kirby as Reed Richards and Sue Storm in The Fantastic Four: First Steps

I recently watched The Fantastic Four: First Steps with the kids. It was okay—enjoyable without thrills, funny without big laughs, suspenseful without surprises. But it was also inoffensive, had a creative retro-futuristic look that took me back to The Incredibles, and had one compelling subplot that held the entire movie together and made it just a bit more than the sum of its parts. This won’t be a proper review of the entire movie, but a recommendation on the basis of its straight-down-the-middle quality and this one surprising aspect of the story.

The movie begins with husband and wife Reed Richards/Mr Fantastic and Sue Storm/The Invisible Woman discovering that, after two years of trying without success, they are finally expecting a baby. This might seem an odd place for a superhero movie to start, but the pregnancy and baby subplot—which I heard a lot about when the movie came out—turns out to be central to the story. The film’s villain, Galactus, who means to devour the Earth, offers to spare the planet in exchange for the Richardses’ unborn child. They refuse. The public turns on the Fantastic Four.

This was a refreshing surprise for two reasons:

First, the baby, even before birth, is presented unquestioningly as living and important. The most moving scene in the film comes when Reed wants to scan the baby in utero and Sue, in an attempt to show that his science is distracting him from the truth of the situation, uses her powers of invisibility to reveal their son in her stomach. He squirms, kicks, and responds to them—all stuff I’ve seen on ultrasound monitors many times, that my wife has felt many more. In a culture that persists in dehumanizing the unborn—for pernicious, devouring reasons of its own—this lingering meditation on their life and value stunned me.

Second, the film explicitly positions the Richardses’ refusal to give up their baby against a utilitarian, consequentialist ethic. Saying no to Galactus means he will eat the Earth. The fickle public, who adore the Four one moment and revile them the next, want to know why the fate of one baby should doom the entire planet. This is the Caiaphas argument: it is more expedient for one to die than the whole nation.

Reed and Sue steadfastly refuse to give in. It is wrong for parents to sacrifice the life of the child gifted to them. They won’t give up on saving the world, but that route—the path of least resistance, of giving in to the pressure of numbers and a short-term vision of salvation—is closed to them. I can’t think of the last time a film made such a deontological move, presenting something as morally wrong under any circumstances. Their refusal in the face of public pressure and the threat of Galactus makes them more heroic.

The latter aspect of the film not only drives the events of the climax, it reinforces the message of the former. If Sue and Reed, in their joy at the news, their preparation for the baby’s arrival, and their refusal to give him up show that life is too precious to bargain, the climactic action, in which all four demonstrate their willingness to die for the innocent, shows us that they mean it. Life is valuable. How valuable? This valuable!

Again, The Fantastic Four: First Steps is not an earth-shattering movie. It’s enjoyable entertainment with a unique aesthetic and more thought put into it than the last several Marvel movies combined—a low bar. What sets it apart is its wholehearted commitment to a vision of the value of human life—even in the womb—and its courage in allowing the characters to live that out without compromise. This was a great surprise, and I hope we can see more like this.

I shall not reply

In the summer of 1859, the New-York Tribune accused Robert E Lee of having three of his late father-in-law’s slaves, who had run away about a month before, caught and whipped, with Lee personally whipping a woman when the man administering the beating refused to. Horace Greeley’s Tribune was an anti-slavery paper and the accusation was made in an anonymous letter by a writer clearly unfamiliar with the provisions of Lee’s father-in-law’s will—of which Lee was the executor—and ended with a pointed political message. It was propaganda calculated to invite outrage—and provoke a response.

Lee’s only statement on the matter came in a letter to one of his sons: “The N.Y. Tribune has attacked me for my treatment of your grandfather’s slaves, but I shall not reply.”

One of the most annoying and unseemly aspects of online and social media culture is the endless calling-out of haters. Public figures of whatever level of fame, influence, and authority inevitably end up spotlighting and condemning their critics, which prompts fans to voice their support and dog-pile the enemy.

I’ve unfollowed a number of writers and thinkers I otherwise like precisely because of this. One popular evangelical literary scholar eventually made her presence on Instagram entirely about screenshotting hate mail and sharing it with a dismissive, above-it-all caption. An up-and-coming novelist on Substack has recently lashed out at a few people poking fun at her pretensions in a long essay describing them as anti-intellectuals and misogynists. I could multiply examples. The comments on these posts are always full of praise and affirmation, which is surely part of the point. It betrays a neediness and fragility I find not merely off-putting but embarrassing.

The technology doesn’t help, of course. The perverse incentives of social media demand response, immediately, and the knowledge that the fans will have your back against the haters only intensifies the pull toward the reply button. A mob can make anything feel righteous. Then follows the well-known dopamine rush of the zinger. And once the habit is formed, there’s no going back. You’ve fed the trolls. You’re ensnared, no better than the haters, slinging mud in the notes or reels or comments and basking in the praise of your yes-men. It’s this scene from “Community” all day, every day.

What I would like to see much, much more of is detachment. I shall not reply. Rather than acting like you’re above it all, rather than saying the criticism doesn’t matter, why not be above it all by ignoring it, not even mentioning it? Answer not a fool. That might mean letting the opinions of idiots stand but it wouldn’t degrade your own character. But as was clear even 2,000 years ago, most people would rather seem than be.

Lee understood this even in the newspaper era. There is some criticism not worth responding to, to which responding would only validate and encourage your critics by lowering yourself to their level. What must it have taken a man like him, of his background and character, facing such an accusation in such a difficult personal situation, not to reply? Discipline, for one thing, which the technology actively works to erode. He had avoided entanglement in journalistic controversies before and that habit didn’t fail him now. I doubt many of us could have made the same choice in 1859. I know even fewer could now.

The writing rule everyone misses

A recently popular genre of Substack note—judging by what the algorithm sends my way, anyway—is complaining by writers about “rules” for writing. These frequently take the form of fulminations against old advice to avoid adverbs. To paraphrase one note, which if I remember correctly was originally much ruder, “Every adverb I write is a little screw you to Stephen King.” More broadly, some will argue that there are no rules for good writing and even to formulate rules is a kind of tyranny or imposition or—for a special subset of writers who self-consciously posture as independent outsiders—the mark of the dreaded “MFA writing.”

I can’t speak for every writer who has ever laid out a list of rules for their own writing, but these Substack warriors could save themselves a lot of time and lower their blood pressure by noticing one all-important caveat or disclaimer in every good list of rules I’ve ever seen: break the rules if breaking them will produce better writing.

In the early days of the blog I collected three sets of writing rules from three favorite writers: CS Lewis, George Orwell, and Elmore Leonard. They have areas of broad overlap, especially a concern with precision and clarity, but here’s perhaps the most important point they share:

  • Orwell: “Break any of these rules sooner than say anything outright barbarous.”

  • Leonard: “If you have a facility for language and imagery and the sound of your voice pleases you, invisibility is not what you are after, and you can skip the rules.”

Orwell’s rule comes at the end of his list; Leonard’s before he even lays his out. They’re emphasizing that these are their rules for how they write, a set of strictures that they have found effective, but space must remain for artistic judgment. This rubbishes another species of Substack complaint about rules, one often leveled at Orwell specifically: that of hypocrisy.

Most of the complaints I’ve seen about rules for writing stem from a misapplication of, or unwarranted rigidity in using, a particular set of rules. This is a legitimate problem. The rule against adverbs exists not because adverbs are inherently bad, but because they become a crutch for weak writers. Most of the items prohibited by lists of rules have this temptation about them: overuse of adverbs can cause verbs, where the action happens, to atrophy; overspecific dialogue tags can, in addition to reading clumsily, bear more of a burden of information than the dialogue itself; passive voice can become an unthinking habit until the involuted relation of subject and verb in repeated passive sentences kills the pace of writing. Noticing and controlling these effects is necessary for strong prose; never, ever using an adverb or passive voice is something a high school English teacher might enforce (indeed, I know specific examples), but is an overreaction.

Again—the authors who lay out these rules usually say exactly that. These rules aren’t hard and fast. These rules aren’t universal. Sometimes you must break them. After all, they’re my rules.

One would think that would settle it, but some of these complaints also seem born of willful misunderstanding or mere resentment. This isn’t limited to Substack writers: I was surprised several years ago to see Ursula Le Guin taking an obvious potshot at Leonard in her book on writing. She took much the same tack, talking about his rules as if they are obviously phony and accusing him of hypocrisy. She was wrong, and the complainers are wrong.

Most crucially, rejecting or refusing even to consider rules and constraints will warp a writer’s artistic judgment. Any attempt to be bold or daring must begin at a baseline, because without that one cannot make judgments about what does and doesn’t work, and a writer who never works within constraints will never grow. Writing without rules is no more feasible than living without them.

* * * * *

Addendum: Even Strunk and White, who are the object of a Two Minutes Hate that comes in almost predictable cycles, were trying to train the sensibilities of beginners, not lay down eternal laws of good writing. One can write well while ignoring their advice, but not until it’s become a conscious decision, not a habit one slips into.

No aristocracy worth its salt

This week Before They Were Live dropped a new episode on Moana 2, which I haven’t seen, but Michial and Josh’s discussion of the film’s manifold weaknesses got me thinking about one of the biggest flaws in Frozen.

A few years ago I ranted about the dam in Frozen II—a badly imagined piece of infrastructure that has no use beyond serving as a cack-handed metaphor for the film’s political message. But that dam is not the first useless thing affecting the plot of a Frozen movie. I want to look at the first film’s villain, Prince Hans, and more specifically Arendelle’s useless aristocracy.

Here’s the rub: Prince Hans arrives early in the film and he and Anna, Queen Elsa’s younger sister, fall instantly in love. He swans around in a secondary role for a while until the climactic twist: Hans does not love Anna and, as the youngest son of another kingdom’s dynasty, has deliberately insinuated himself into Arendelle’s royal family to await an opportunity to take over. With Elsa feared and effectively outlawed and Anna mortally wounded by Elsa’s ice powers, Hans refuses Anna the kiss that will save her life, tells the handful of nobles hanging around the court that she’s dead, seizes control of Arendelle, and leads the attempt to eliminate Elsa. Boo, hiss.

I’m heartened to learn that I’m not the first person to criticize Hans as a villain. Others have pointed out the thin to nonexistent foreshadowing of his ulterior motives and the fact that his actions earlier in the film are counterproductive to his plot. (He’s also, in keeping with the political valence of the dam in Frozen II, more of a feminist device than a character, but more on that later.) These are legitimate complaints but not my chief problem with him.

The biggest problem with Hans, his plot, and Frozen’s climax is Arendelle’s useless aristocracy. I actually use this as a negative example when lecturing on the medieval nobility in Western Civ. Imagine: the youngest son of a foreign royal family shows up in a kingdom just emerging from a regency and ingratiates himself with the princess who is second in line to the throne. And consider the climax, when Hans, the only person allowed to talk to the severely ill princess, appears and tells the leading men that Anna is dead. Somewhere else. Trust me, bros. And they do.

A real aristocracy would have sniffed out Hans’s intentions in about ten seconds. No aristocracy worth its salt would have missed this, or failed to act against it. They would have sworn oaths to Elsa and her family and had roles to play under her rule and with respect to each other, roles they would fiercely protect. They would have duties and prerogatives. If they had somehow let things get to the point of Hans announcing Anna’s death, they would have demanded evidence. Immediately. He would have been an object of suspicion from beginning to end. A Bismarck, a John of Gaunt, a William Marshal, an Eorl Godwin, or your pick of the Percys, Hohenzollerns, or Carolingians would have eaten Hans alive.

But Arendelle does not have an aristocracy worth its salt. There are only four other men in the room when Hans makes his bid for control and one of them is a foreign diplomat. The rest are nameless drones in uniforms and sashes. This curiously empty kingdom must be either an absolute monarchy, with Elsa at the top and no mediating ranks between her and the people, or have an unseen, unmentioned parliament that has reduced the monarch to a figurehead—which I strongly doubt, if Elsa’s throne is as desirable as Hans thinks it is.

You could try to excuse this as the necessary simplicity of a children’s film, but children’s films don’t have to be simple. It’s more a cliche born of a typical American incuriosity regarding nobility, Americans being incapable of imagining aristocrats as having functions and not just being privileged people who are excusable as targets of scorn and envy. Frozen’s feminist underpinnings are also a factor, feminist ideology—whatever the movement’s other merits—being a universal machine for making complex reality stupidly oversimplified. Google “Prince Hans” and see how often the cliche “toxic” comes up. He’s a powerful man and other powerful men are just going to trust him and follow him.

Again, study history, even a little bit.

Hans and the Arendelle nobility aren’t just unrealistic—though it’s fun to nitpick and, when I point this out in class, to see students recognize it as a flaw based on what we’ve learned about the past. The real problem is that the combined lack of imagination and ideological cliche evidenced in Hans weaken the story. Like the dam in Frozen II, he’s there to make a point and reinforce a message, not to live and breathe.

A real aristocracy—the kind that patronized the courtly love poets and commissioned altarpieces and cathedrals—wouldn’t have made this mistake.

Richard Cory and ambiguity

One of my favorite poets is Edwin Arlington Robinson. Though both popular and respected in his day, winning the Pulitzer for poetry three times, he seems largely forgotten now. I suspect this is largely a matter of timing: he mastered traditional form and meter, especially the sonnet and villanelle, just as Pound and Eliot and company were coming along to blow it all up.

Robinson’s skill also makes his tightly constructed verse seem effortless, even conversational. It’s clear and understandable—something else the modern poetry establishment, which came more and more to resemble a clique or cult, won’t abide—and mines powerful emotions from everyday scenes and images. Perhaps his best-known poems in this regard are a series of character sketches describing people from a fictitious New England village: “Reuben Bright,” “Aaron Stark,” “Luke Havergal,” “Cliff Klingenhagen,” and my personal favorite—read it and you’ll get why—“Miniver Cheevy.”

Another favorite, and one of Robinson’s most memorable, challenging, and dark, is “Richard Cory.” Take a minute and read it—I’m going to spoil it.

In sixteen lines, Robinson introduces us to a handsome, elegant, popular, courteous, and, yes, wealthy local gentleman, a man with everything going for him. Envy is perhaps too strong a word for the community’s attitude—Richard Cory is too well respected, if not beloved, to warrant envy—but the anonymous speaker of the poem makes it clear that Richard Cory lives in a world everyone else only aspires to. And then Richard Cory kills himself.

I still feel the shock of the final line all these years later, and the bitter irony with which it reframes the entire preceding poem. There is some ambiguity there—was Richard Cory discontent? ungrateful? depressed?—but the import is fairly clear: money can’t buy happiness, and you never know what troubles afflict someone of seemingly greater privilege than you.

The Simon and Garfunkel version, released on Sounds of Silence in 1966, traffics in a different kind of ambiguity. It’s less than three minutes long—listen to it here.

Paul Simon, in adapting Robinson’s poem, makes some noteworthy thematic changes. Where Robinson begins with the impression Richard Cory gave his neighbors on the street and mentions his wealth last, Simon leads off with his wealth and even explains where it came from—an inheritance from his banker father, though we’re told later he owns a factory—highlighting the extent of his property and influence. “He had everything a man could want,” in this version, “Power, grace, and style,” which is the reverse of the human view Robinson gives us. (Simon also updates the outward signs of Richard Cory’s wealth for the swingin’ sixties with “the orgies on his yacht.”)

But the biggest change is the inclusion of a chorus, in which the anonymous speaker of Robinson’s poem, one of Richard Cory’s neighbors, comments on his own situation:

But I, I work in his factory
and I curse the life I’m living
and I curse my poverty
and I wish that I could be (3x)
Richard Cory.

The chorus comes around three times and, on its final repetition, immediately after the announcement of Richard Cory’s suicide, it takes on a powerful irony. Much as Richard Cory’s fate in the last line of Robinson’s original changes the feeling and meaning of the rest of the poem, in Simon’s version the suicide changes the tone and meaning of the chorus.

This is where the ambiguity arises. Just what kind of envy—certainly the appropriate word here—is the speaker revealing?

If Simon has directly addressed his adaptation anywhere, I haven’t seen it. But an interpretation I’ve run across again and again online takes the final repetition of the chorus to be an admission by the speaker that he wants, like Richard Cory, to kill himself. (This is the interpretation presented in the Wikipedia summary, which cites no sources.)

I don’t think this is correct. For one, it makes the speaker far too individual, where in both Robinson and the rest of Simon’s version the “we” and the “I” stand in for the whole community. It’s also nihilistic in a way I don’t feel jibes with the rest of the song or Simon’s general oeuvre. But, most importantly, I think it has a simpler, more straightforward meaning related to that of the original poem: people don’t learn. The desire for wealth and material comfort leads us to overlook, ignore, or wish away the problems that come with them. We all know money doesn’t buy happiness—it’s a cliche for a reason—but who actually lives as if they know that? Literature and mythology, not to mention real life, are full of people who choose wealth and success knowing it will destroy them.

The yearning-for-suicide reading, which is rooted in an apparent ambiguity, bothers me. I think it’s a misreading of the song, yes, but I also think ambiguity, which can be a valuable tool in the hands of a purposeful artist, is overvalued today. The ambiguous ending is a mainstay of twee arthouse cinema. But ambiguity ceases to be cute when applied to suicide.

While feeling down and exhausted over the last month I’ve been doing a slow reread of Chesterton’s Orthodoxy. Chesterton’s light and frothy reputation is belied by his serious treatment of a subject like suicide. Here he is in Chapter V, “The Flag of the World,” writing forcefully about the deadly sin at the heart of it:

Not only is suicide a sin, it is the sin. It is the ultimate and absolute evil, the refusal to take an interest in existence; the refusal to take the oath of loyalty to life. The man who kills a man, kills a man. The man who kills himself, kills all men; as far as he is concerned he wipes out the world. His act is worse (symbolically considered) than any rape or dynamite outrage. For it destroys all buildings: it insults all women. . . . [H]e is a mere destroyer; spiritually, he destroys the universe.

The power of Robinson’s poem and Simon’s song derives from the assumed heinousness of Richard Cory’s act. That’s why it’s shocking in both. His wealth, personal elegance, and position in life only make it ironic, not less terrible. If Richard Cory’s suicide is just one more option, one a person with far more reasons to be bitter might justifiably desire to take, the entire story loses its meaning and weight.

Maybe that’s what Simon intended. I don’t know—but it would ruin the song. And good as the song is, Robinson’s poem, in its structure and its properly used ambiguity, is better, and better for us.

Does it matter if the movie is faithful to the book?

Over the weekend Substack, in its mysterious way, showed me a month-old note by a literary critic I follow and respect. Since this is a month old and there was already some debate along these lines in the comments, I’ll share and gloss it anonymously:

It doesn’t matter if the film is faithful to the book.
It’s a film! Judge it as a film.
And anyway, you cannot faithfully turn prose into film.
It’s an affront to literary genius to think otherwise.

I’m not actually sure what the last line is supposed to mean. How does holding a filmmaker to a high standard when adapting a writer’s work degrade the writer? But I strenuously object to the rest of it.

To work backwards, the critic here is asserting that the difficulty of adaptation from one medium into another actually makes it impossible—“you cannot faithfully” adapt from book to film, he says. An appalling oversimplification. What does he mean by “prose,” here? When we talk about how a book is adapted into a film and the film isn’t faithful, we might mean it fails with regard to one or more of the following:

  • The literal events of the book

  • The overall story arc of the book

  • Particular details of the settings and/or characters

  • The narrative structure of the book

  • The meaning or thematic import of the book

  • The tone of the book

I’ve tried to arrange that list from simplest to most complex. The events narrated in a story are the easiest to get on screen. The meaning, what the author is apparently both getting out of the story and trying to share through it, and the tone of his storytelling are much harder. We’ve probably all seen movies that more or less adapted a book’s events without capturing the immaterial elements that give the book personality. A Handful of Dust, a quite literal adaptation of the great Waugh novel, comes to mind, as does the John Wayne True Grit. But other films might deviate here and there from the original while nailing its tone and moral register. The Coens’ No Country for Old Men and True Grit, both of which capture most of the events of their respective novels while, much more importantly, faithfully adapting their tones, are masterpieces in this regard.

All of this, according to our critic, is just “prose,” which “cannot faithfully” be made into a film. Cannot. This is not only oversimplified but wrong. Adaptation is difficult, but that we want to judge faithfulness at all indicates that it can be done, and can be done well.

Our critic is on firmer ground in asserting that films and books should be judged by different artistic standards, but this is common sense. Novels and movies tell stories in different ways and may or may not do so well, of course. But—still moving backwards—to assert a novel and its film adaptation are so separate that “it doesn’t matter” whether the adaptation is true to the book is foolishness.

Of course it matters. It matters because if a film adaptation of a book exists it exists because of the book. If a movie presumes to share a title with an author’s book, if it is meant to please readers of the book at all and not to be purely parasitic on the writer’s work and readership—we’re all familiar with the term cash-grab by now—the filmmakers owe it to the book to be faithful in at least some of the areas listed above. And having established that faithfulness is not, in fact, impossible, they owe it to the original to try.

I think it also matters because this kind of talk about the difficulty or impossibility of faithful adaptation has far too often served as an excuse for vandalism. Some vandalism originates with filmmakers contemptuous of their literary source material and wanting to drag it down to their level. Some comes from filmmakers who hubristically think they can improve on great literature. But perhaps the most common problem is the filmmaker with neither contempt nor reverence for the original, who sees it only as raw material to be reworked according to his preferences. It’s all content, after all.

This was my problem with two of the worst film adaptations I’ve seen in the last few years, The Green Knight and All Quiet on the Western Front, both of which—if you look at my reviews—I tried to judge on their merits as films while also noting their utter failure as adaptations. They don’t adapt the events, characters, meaning, or tone of the originals even a little bit faithfully. Are we to give them a pass because they have nice cinematography? Because they try to flatter our present assumptions?

There are other reasons to demand faithfulness of a film adaptation—the movie may be the one and only time many viewers, especially students, encounter any version of an author’s story—but these, I think, are the strongest. There is room for debate, of course. Arguments about whether and how Peter Jackson succeeded in adapting The Lord of the Rings, for example, have been fruitful for an appreciation of both the film trilogy and the novel. But handwaving even the possibility of faithfully adapting a book is bad for both.

A film might be just a film, but a film based on a book exists in relation to that book. If an author cared enough to write it and readers cared enough to read it, filmmakers owe them something more than apathy, hubris, or contempt. So do critics.

Goodreads Inferno

In a longish state-of-the-publishing-world essay on Substack, independent publisher Sam Jordison gives special consideration to the disappearance of the negative book review—the hatchet job—as a symptom of decline. He notes that author and critic DJ Taylor, whose excellent guide to Orwell I wrote about here last year, described the disappearance of “tough-minded” reviews, criticism that “often bordered on outright cruelty,” ten years ago. According to Jordison, the tepid positivity of book review pages has only worsened since then.

What caught my attention was Jordison’s second mention of Taylor’s phrase “outright cruelty,” which Jordison notes we shouldn’t want or need to come back: “We have Goodreads for that.” This observation is glossed with the following footnote:

Goodreads has risen just as professional book pages have declined. The nastiness and ignorance on display there is a reflection of internet culture, and the way everything Jeff Bezos touches is infected with his mean spirit. But I do also wonder if some people think they are restoring some kind of balance?

The nastiness on Goodreads is well known. Goodreads users mob and harass authors over single lines, engage in character assassination, try to get books canceled before they’re even published, and even the authors who use Goodreads join in the bad behavior. Imagine the vitriol of Twitter, the politics of Tumblr, and the righteous self-assurance of a school librarian in a Subaru and you have the predominant tone of Goodreads today.

Thanks to the nastiness, the profound ignorance on Goodreads is perhaps less visible. But as it happens, it was fresh on my mind because this morning, as I searched for a brand-new one-volume edition of The Divine Comedy that I’m about to start reading, I made the mistake of looking at its top review.

According to the user responsible, Dante has written this “OG” “self-insert bible [sic] fanfiction” because he “thanks he is very special” (stated twice), “has a bit of a crush . . . on both Beatrice,” “his dead girlfriend,” and “his poetry man crush” Virgil, and wants “to brag about Italy and dunk on the current pope.” All of this is wrong, for what it’s worth, but here’s the closing paragraph:

TLDR: Do I think everyone should read this? No, it’s veryyyyy dense. But I think everyone should watch a recap video or something to understand a lot of famous literary tropes that become established here.

Read The Divine Comedy for the tropes. Or better yet, “watch a recap video.”

This is a five-star review, by the way.

I wish this were the exception on Goodreads, but it’s not. Here’s a person with the capacity and the patience—perhaps? the review is short on details of anything beyond Inferno—to read the Comedy but who is utterly unprepared to receive and understand it, presumably having lost the good of intellect. This review reads like those parody book review videos that were popular a decade ago, except Thug Notes actually offered legitimate insight as well as laughs.

I have a love-hate relationship with Goodreads. I signed up fourteen years ago and still use it every day. But I can only do so and maintain my sanity by sticking to my tiny corner of online acquaintances and people I actually know and avoiding the hellscape of popular fiction, where the fights that can break out in review comment sections resemble nothing so much as Dante’s damned striving against each other even in death. Finding a legitimate, thoughtful, accurate review is harder than ever. One must dig, sometimes through hundreds of reviews like the one above, to find something helpful. And it’s even harder if you’re interested in older books, for which the temptation toward glibness or snark—omg so outdated! so racist! so sexist!—is for many irresistible.

And, for authors whose books are on Goodreads, it’s hard not to let a latent anxiety build up. Sometimes it feels like, inevitably, it’ll be your turn in the crosshairs.

Jordison blames Jeff Bezos, who he correctly points out—as I just did in my Tech & Culture class last week—started selling books not because he loves them but because they’re easy to catalog and ship. I’m sure that’s a factor, but it’s not sufficient to explain the whole problem. His other culprit, “internet culture,” that broad and protean devil, plays a crucial role as well. Regardless, Jordison ends his essay on a note of hope:

But I don’t counsel despair. Because the truth is that there is still good work being done. There are a few decent book sections left. Writers are producing fine books. Publishers are bringing them into the world. People are reading them.

At least some of those books will endure.

Truly encouraging to remember. But that this must happen despite rather than because of the technologies we’ve created from an ostensible love of books is a judgment on our culture.

Me and the Southern accent

Last month on his microblog, Alan Jacobs linked to this short Atlantic piece—now paywalled—about the slow extinction of the Southern accent. Quoting the author of the essay on the decline of distinctive Southern accents among the young and the eventual reality that the accents will only survive among the old in out-of-the-way places, Jacobs noted, “I’m part of the trend too: I certainly have a Southern accent, but it’s not as pronounced as it was when I was younger, and I profoundly regret that.”

Likewise and likewise. The regret is painful.

My speech, like Jacobs’s, is identifiably Southern to outsiders, but largely through syntax (e.g. double modals), vocabulary (e.g. y’all, fixing to), and peculiarities of emphasis (e.g. saying UM-brella instead of um-BRELLA). My accent, in terms of pronunciation, is limited to ineradicable features like the long I noted in that Atlantic essay, yod-dropping, hanging on to the H in wh- words, and the occasional dropped G. I have neither a drawl nor a twang.

This is a regret to me because I feel it severs me from previous generations and the place I come from in one of the most fundamental ways. We learn speech at our mother’s breast and from those closest to us, not only in terms of family but in physical proximity. Gradually losing that means losing a part of me that participates in them and in home.

And I cherish those accents—of which The Atlantic rightly notes there are many. I learned two kinds of Georgia accent growing up. My dad’s parents, natives of Clarke County and the Athens area, spoke a lot like Flannery O’Connor—a Savannah native with her own peculiarities of pronunciation—does in this recording, a soft, non-rhotic accent that outsiders read as genteel. My maternal grandparents, Rabun County natives, spoke in a strongly rhotic accent with heavy Appalachian features. Both of these are from “north Georgia,” broadly speaking, but couldn’t be more different. Southern accents have immense county-by-county variety.

Generation adds more variation. My parents’ accents, both still marked by their parents’ roots, nevertheless grew toward each other, and my own is a yet finer blend—dominated by my maternal side’s Appalachian terseness. It comes out when I try to say iron (arn) or Florida oranges (Flarda arnjes).

In old home movies I have a shrill, squeaky, very country little voice. I’m not sure when the most obvious marks of family and home began to fall away, but it must have been around middle or high school. Unlike the writer in The Atlantic, it was never intentional. I never wanted to blend in, was never ashamed of being Southern—far from it, I grew a sizable chip on my shoulder during an undergrad career surrounded by Yankees and Midwesterners who thought nothing of moving South and mocking the locals for saying UM-brella—and, if anything, I wanted more of an accent than what I ended up with.

Faking it is not, I decided long ago, an option. Better to let it emerge occasionally, a nice surprise. (I’ve noticed myself, in the classroom, pronouncing opportunity without the R lately, a real surprise.) I try to comfort myself with examples of other provincials who unintentionally lost their accents—namely CS Lewis, a Belfast native who, quite unconsciously, slowly conformed to the speech of whomever surrounded him and ended up sounding like this.

But when I remember my grandparents’ voices, and talk to my parents and aunts and uncles and siblings, and think about those home movies, and then recall my own kids’ sweet speech—in which very little Southern remains—all I can do is regret. Time isn’t the only thing that gets away from us. And this, the Ubi sunt? sense of loss, is perhaps the only thing more Southern than the accent I used to have.