Scruton on what children can teach us about art

From the late Sir Roger Scruton’s documentary “Why Beauty Matters”:

 
Art needs creativity, and creativity is about sharing. It is a call to others to see the world as the artist sees it. That is why we find beauty in the naïve art of children. Children are not giving us ideas in the place of creative images, nor are they wallowing in ugliness. They are trying to affirm the world as they see it and to share what they feel. Something of the child’s pure delight in creation survives in every true work of art.
— Sir Roger Scruton
 

Scruton makes this aside as a point of contrast with modern art—which is intentionally insular, confrontational, transgressive, and over-intellectual if not ideological—but in doing so he makes a broader point about what art is and what it’s for. This description of children’s art is also honestly and accurately observed.

I’ve thought of this passage many times over the last few weeks, ever since my eldest son eagerly presented me with a picture he had drawn. It was a pencil and highlighter drawing that showed me holding my youngest son at the dinner table—a picture of his dad and one of his little brothers. It was drawn from life without my noticing, and the joy he took both in drawing it and in giving it to me, the care taken over the details, including the stubble of my beard, and the simple, straightforward, honest love in the picture itself have stuck with me. My kids have drawn many things for me, but this one in particular struck me as a clear example of Scruton’s “pure delight” in “sharing.”

Last week I tacked it to the wall of my office at school. May any art I create be motivated as purely as my son’s.

“Why Beauty Matters” is worth your while, as I wrote here almost four years ago following Scruton’s death. You can watch the whole thing on Vimeo here.

Scruton on style

Last week I revisited the late Sir Roger Scruton’s Beauty: A Very Short Introduction via audiobook on my commute. It’s an excellent precis of much that is fundamental to his thinking and, true to the subtitle, a wide-ranging introduction to many topics that bear further thought. Here’s one.

From a discussion of the role proportion plays in the creation of vernacular architectures by launching the builder on “a path of discovery” to what “fits” and is “suitable” for each detail in relation to the others in Chapter 4, “Everyday Beauty”:

One result of this process of matching is a visual vocabulary: by using identical mouldings in door and window, for example, the visual match becomes easier to recognize and to accept. Another result is what is loosely described as style—the repeated use of shapes, contours, materials and so on, their adaptation to special uses, and the search for a repertoire of visual gestures.

I like the idea of a style as mastery of a discipline’s “repertoire,” the selective, purposeful use of a shared vocabulary. Scruton’s example is architectural, but he also refers throughout the book to painting, sculpture, cinema, and most especially music. My mind naturally suggested literary style, with its literal shared vocabulary and the many effects and fine shades of meaning that a firm control of English can yield.

Scruton himself raises the idea of control as a component of style in the next chapter, “Artistic Beauty”:

True artists control their subject-matter, in order that our response to it should be their doing, not ours. One way of exerting this control is through style . . . Style is not exhibited only by art: indeed, as I argued in the last chapter, it is natural to us, part of the aesthetics of everyday life, through which we arrange our environment and place it in significant relation to ourselves. Flair in dressing, for example, which is not the same as an insistent originality, consists rather in the ability to turn a shared repertoire in a personal direction, so that a single character is revealed in each of them. That is what we mean by style, and by the ‘stylishness’ that comes about when style over-reaches itself and becomes the dominant factor in a person’s dress.

The tension between originality and a common vocabulary, and the need to balance the two, is an important topic, one Scruton returns to later in the book, but he continues by introducing another consideration:

Styles can resemble each other, and contain large overlapping idioms—like the styles of Haydn and Mozart or Coleridge and Wordsworth. Or they might be unique, like the style of Van Gogh, so that anyone who shares the repertoire is seen as a mere copier or pasticheur, and not as an artist with a style of his own. Our tendency to think in this way has something to do with our sense of human integrity: the unique style is one that has identified a unique human being, whose personality is entirely objectified in his work.

This passage in particular offers a lot for the writer to think about. Every writer has heroes and idols and role models, other writers whose control over their work has influenced our own technique, consciously or not. This starts young. It’s been more than twenty years since I read Stephen King’s On Writing, but I still remember and think often about this passage:

You may find yourself adopting a style you find particularly exciting, and there’s nothing wrong with that. When I read Ray Bradbury as a kid, I wrote like Ray Bradbury—everything green and wondrous and seen through a lens smeared with the grease of nostalgia. When I read James M Cain, everything I wrote came out clipped and stripped and hard-boiled. When I read Lovecraft, my prose became luxurious and Byzantine.

All of which is, for King, a crucial developmental stage in the writer’s life, one that should be refined through constant reading and writing, so that eventually one is no longer writing in imitation but in “one’s own style.”

But if you’re aware of what you’re doing and working hard at it, particularly in order to achieve a certain specific effect—so that, per Scruton, the readers’ response will be your doing, not theirs—it’s hard not to become anxious that you’re working merely in pastiche or even accidental parody. Have I sacrificed my integrity to sound like someone else? Inconsistency doesn’t help. I’ve worried more about this on some projects than others. Why am I confident that I can use tricks learned from Charles Portis but not those from Cormac McCarthy? Food for thought.

I think, naturally, of John Gardner and his description of “mannered” prose, a term he’d certainly have applied to McCarthy. “Mannered” suggests artificiality or phoniness, the lack of integrity Scruton suggests above, which is how every good writer hopes not to come across. But I also think of Elmore Leonard, another author whom I’ve quoted here many times, and who worked hard to make his style the absence of style. Scruton contends that that is impossible:

Style must be perceivable: there is no such thing as hidden style. It shows itself, even if it does so in artful ways that conceal the effort and sophistication . . . At the same time, it becomes perceivable by virtue of our comparative perceptions: it involves a standing out from norms that must also be subliminally present in our perception if the stylistic idioms and departures are to be noticed. Style enables artists to allude to things that they do not state, to summon comparisons that they do not explicitly make, to place their work and its subject-matter in a context which makes every gesture significant, and so achieve the kind of concentration of meaning that we witness in Britten’s Cello Symphony or Eliot's Four Quartets.

This is exactly right, and Leonard would agree. Leonard’s style, which was precisely designed to “conceal the effort and sophistication” of his writing and make it seem effortless, was immediately recognizable because it was distinct from the “norms” described above in particular ways—something Leonard himself noted. Those “norms” or context are the broader shared vocabulary we began with—which gives shape to one’s work through contrast.

And that final sentence on what a firm, controlled, purposeful, precise style can do, using the power of allusion, implicit comparison, the subtle significance of every detail to “achieve . . . concentration of meaning”—is there a writer who wouldn’t die happy having that said of his work?

Melancholy in the outfield

A few weeks ago I revisited a childhood favorite with my own kids. Angels in the Outfield came out when I was ten years old and an enthusiastic baseball fan. I must have watched it fifty or sixty times over the next few years, before I aged out of it and the real-life drama of the mid-90s Braves gently edged it out of my imagination.

What I remembered most about Angels in the Outfield was the comedy, the slapstick baseball action, the standard sports movie joys of becoming a team and winning the big game, and the music. (I noticed, though very young, that composer Randy Edelman’s score had a lot of cues suspiciously similar to his work on the previous year’s Gettysburg, one of my favorite soundtracks.) What I was not prepared for upon rewatching it as an adult was just how firmly the plot’s foundation was built upon pain, sorrow, and longing.

Roger, the main character, lives in foster care because his mom has died and his dad is a negligent, uncommunicative deadbeat. When the film starts his father has already signed over his rights to his son and has shown up just long enough to tell Roger so, a job he performs badly. Is that guilt we see in his eyes, or just awkwardness in performing the unwanted duty of talking to his child? When an oblivious Roger asks when they can “be a family again,” his dad replies with a “when pigs fly” scenario that Roger takes literally. And Roger’s younger friend JP seems bright and happy all the time but collapses into grief when another boy is moved out of the foster home, an emotional response the movie suggests is always ready just below the surface. This is clearly a child struggling with abandonment.

But the vein of sadness runs through the adults, too. California Angels manager George Knox seethes with grievance, not only having had his career cut short when a dirty player slid into him cleats-first, but also becoming a manager only to be saddled with the worst team in the league. The man who injured him, Ranch Wilder, is now the Angels’ radio announcer and loathes the team as well as Knox. His entire demeanor suggests he resents being kept down when he is meant for greater things. And Mel Clark, a former star pitcher who developed a pain pill addiction under Knox’s managership at Cincinnati and who has the film’s clearest redemption arc, is revealed at the end to be only six months away from death. He has lung cancer and doesn’t even know it yet. And so even the longed-for victory in the playoffs is tinged with loss.

I’m not going to pretend that Angels in the Outfield is a great movie or serious drama; it’s simply well and honestly crafted and it treats all of these scenarios seriously. None of it feels forced, none of it is used merely to jerk tears, and none of it is tidily and painlessly resolved. In fact, most of the characters don’t actually get the specific thing they want at the beginning of the film.

This brought to mind two things I had reflected on long ago. The first is an essay from Film School Rejects called “The Melancholy of Don Bluth,” an excellent read on animated films like The Land Before Time, All Dogs Go to Heaven, or An American Tail—all three of which were in constant rotation in the Poss household when I was growing up. Bluth’s movies have a reputation for going to dark places Disney typically balks at, to the point that they’re sometimes the subject of internet memes about “trauma.” Please.

The artistic upshot of Bluth’s willingness to include death and—perhaps more importantly—mourning in his films is a truth and richness often missing from comparable animated films:

Thematically, there is an ever-present air of death about Bluth’s work that is profoundly sad. Bones litter certain set-pieces; illness and age are veritable threats (shout out to Nicodemus’ gnarly skeleton hands); and characters can and do bleed. Critically, Bluth films don’t gloss over grief, they sit with it. From Littlefoot’s straight up depression following the on-screen death of his mom, to Mrs. Brisby’s soft sorrow at finding out the details of her husband’s death. There is a space for mourning in Bluth’s stories that feels extra-narrative, and unpretentious. Critically, this is distinct from, say, wallowing. Bluth’s films have a ridiculously productive attitude towards mourning, most lucidly articulated through Land Before Time’s moral mouthpiece Rooter: “you’ll always miss her, but she’ll always be with you as long as you remember the things she taught you.” Disney meanwhile, tends to treat death as a narrative flourish, or worse, a footnote. And in comparison, even notable exceptions like Bambi and The Lion King seem immaturely timid to let palpable grief linger for longer than a scene, let alone throughout a film’s runtime.

The other thing that came to mind was a podcast conversation on The Sectarian Review concerning Hallmark Christmas movies. At some point during the conversation I drew a comparison between Hallmark romantic comedies and older romcoms by pointing out that films like You’ve Got Mail, as fun and bubbly and appealing as they are, also have a vein of genuine pain running through them. Kathleen Kelly takes her mom’s little bookshop up against the big chain store and loses, an event the film doesn’t gloss over and doesn’t paint as some kind of moral victory. Who doesn’t feel the pang of her loss as she closes up shop for the final time and walks away into the night, her mom’s shop doorbell jingling in her hand?

Only Pixar, in older movies like Up and Toy Story 2 and Inside Out, has recently attempted to include such real pain in their stories. By comparison, most of the recent crowd-pleasing PG-13 action fare or animated kids’ movies in theatres or the mass-produced dramas of the Hallmark Channel are pure saccharine—thin, fake, and probably carcinogenic.

I have no firm conclusions to draw on this topic except to note that, for whatever reason, even in our simplest and cheapest stories we’ve lost something important. And if you feel some of this and hope for catharsis, one of the oldest reasons for watching a drama that there is, you’ll have to go to older films for it.

The Mysteries

 
‘In our world,’ said Eustace, ‘a star is a huge ball of flaming gas.’
‘Even in your world, my son, that is not what a star is but only what it is made of.’
— CS Lewis, The Voyage of the Dawn Treader
 

I feel like the publication of a new book by Bill Watterson, whose “Calvin and Hobbes” ended its run twenty-nine years ago and who has remained almost entirely quiet since, should be more of an event than the release of The Mysteries has proven. But then, given the book’s title and most especially its subject matter, maybe that’s appropriate. Call it a mystery, but not one of the Mysteries.

The story is simple enough. This blog post will probably end up several times longer than the entire book. The Mysteries introduces the reader to a medieval-ish world of castles and half-timber towns in which the people and their king are bounded by dark forest. The forest is the domain of the Mysteries, whom no one has ever seen but everyone knows have terrible powers. At first the people strive not to understand but to protect themselves from the Mysteries, putting huge efforts into building walls and chronicling the long history of their fears in tales and art.

Then one day the king decides to strike back against the Mysteries, dispatching knights into the forest on a quest to capture and bring back a Mystery. After a long stretch of futile searching, one knight succeeds, returning with an iron box chained to a cart.

At last, a Mystery is revealed—and the people discover that there’s not, apparently, very much to them. Their fearful powers turn out to be “mundane.” And capturing one Mystery opens the way to capturing others, to the point that the people not only lose their fear of the Mysteries but come to find them boring. One clever illustration shows a medieval newspaper stall full of headlines like “YAWN.”

Then, the Mysteries understood and no longer feared or the object of much attention at all, the people demolish their walls, cut down the forest, and overspread the land. They mock the old paintings inspired by the Mysteries. They now live in a world of jet aircraft and skyscrapers and the king no longer appears on the balcony of his castle but on TV or behind the wheel of a car on a busy freeway, drinking a Big Gulp. At last, the narrator tells us, they control everything.

Or do they? The sky turns strange colors and, ominously, “things” start “disappearing.” The king assures them that this is normal, wizards study the phenomena, and life continues apace. Then, “too late,” the people realize that they’re in trouble. An indifferent universe wheels on.

In the final pages the viewpoint of the illustrations pulls back farther and farther from the people and their conquered land, into space, beyond the solar system and the Milky Way. “The Mysteries,” the story concludes, “lived happily ever after.”

One notable aspect of The Mysteries is that although Watterson wrote the story, it is illustrated by caricaturist John Kascht. Watterson and Kascht worked on the pictures in close collaboration for several years, experimenting with and abandoning many styles before arriving at an atmospheric, unsettlingly dreamlike aesthetic combining clay figures, cardboard scenery, and painted backdrops. The effect is powerfully eerie, especially as the pace of the story accelerates and the fairytale world at the beginning of the book gives way to one that resembles, disconcertingly, our own.

If the pictures are murky, moody, and ambiguous, often more allusive than concrete, so is the story. This, according to Watterson, is by design. I’m not typically one for deliberate ambiguity, but it works brilliantly here. This “fable for grownups,” as the publisher describes it, achieves a timelessness through its strangely specific soft-focus art and a broad applicability through its theme.

And what is that? The most obvious and easy referent for the consequences the people face in the book’s closing pages is climate change, whether anthropogenic or not. But The Mysteries is a fable, not an allegory. To narrow its message, if it has one, to a policy issue is to cheapen and limit it.

The core theme of The Mysteries is disenchantment. Since the Scientific Revolution uncovered the wheels and levers of the universe and the Enlightenment insisted that the wheels and levers were all there is, was, or ever will be, the mysteries of our own world have retreated further and further from our imaginations and the place we once gave them in our daily lives. The powers that once kept people within their walled towns have been banished—or rather seized and repurposed, put to work for the people’s desires. Fear or, to put it more positively, awe of the world has given place to self-assured technical mastery. We control everything.

Or do we?

The Mysteries is probably not what anyone anticipating the return of Bill Watterson would have expected. I was certainly surprised, but pleasantly. As befits the creator of “Calvin and Hobbes,” a work that prized imagination above all else, The Mysteries treads lightly but surefootedly across deep ideas, and powerfully suggests that whatever Mysteries once lived in the forest, we have not sufficiently understood them to warrant our boredom, apathy, and self-indulgence, and we certainly are not free of them. We are, in fact, in graver danger through our indifference to the Mysteries than we ever were when we feared them.

John Gardner on art and democracy

Yesterday during my commute I revisited a short radio interview with John Gardner, one of the writers and writing teachers I most admire. The entire interview is worth listening to for Gardner’s trenchant comments on, well, everything, but I found the following exchange most striking.

Considering the way “the rise of middle class literature”—a “bad thing” in Gardner’s view—was satirized by Henry Fielding and Daniel Defoe, interviewer Stephen Banker goes back to Gardner’s preference for premodern work like Beowulf or Dante or Chaucer and his belief that literature has decreased in quality since then:

Banker: There’s so much in what you said. First of all, are you seriously suggesting that the literature of the aristocracy is the right kind of literature?

Gardner: Yeah, sure, sure. And I think that, as a matter of fact, I don’t think that’s snobbism, I think that every kid in a democracy would like that literature better if he knew it. But of course the thing that happens in a democracy is that the teachers lose touch with what’s good—they don’t know, you know? How many art teachers, you know, in ordinary public schools, have been to an art museum? Just that. How many teachers of creative writing in high schools and colleges for that matter really know what the Iliad is about? I’ve talked with an awful lot of professors. I think there are a handful of people in America who understand the poem Beowulf. And I don’t think there’s even a handful in England. It’s just lost knowledge.

Banker: Well, what—

Gardner: I don’t know anybody who knows about Dante! I don’t know a single person who understands what Dante is doing. I don’t mean that as arrogance, it’s just a fact. They read little sections of it, they talk about the dolce stil nuovo, that’s all.

That great literature is read in context-free excerpts, with a primary focus on formal or—increasingly—political qualities, still rings true, as does the well-expressed observation that kids even in democracies will prefer the adventure of aristocratic literature to middle-class realism. The problem comes in the line “if he knew it.” Many kids today are deprived of it, often for ideological rather than artistic reasons, and I can see their thirst for this kind of storytelling anytime I describe, in detail and for its own sake, a work of ancient or medieval literature to a class of students. They respond.

I do think there is more cause for hope than Gardner suggests—consider the wave of relative popularity greeting Emily Wilson’s recent translations of Homer—but the situation is dire.

Banker next moves the discussion on to whether old literature is still relevant in a more technologically sophisticated world and Gardner comes out swinging, while also rounding out some of his statements above:

I don’t think that’s snobbism, I think that every kid in a democracy would like that literature better if he knew it.
— John Gardner

Banker: I think one could make a case—

Gardner: Mm-hm.

Banker: —that things that happened five, six, seven hundred years ago are not really relevant to the way we live now, that those people didn’t live with machinery, they didn’t live in the age of anxiety, they didn’t live with the kind of tensions, the kind of communications we have today.

Gardner: I think that’s probably not true. I think, in fact, that—pick your age, pick the age, for instance, of Alexandrian Greece, with Apollonius Rhodius writing in an overpopulated, effete, decadent society, he writes a book which is a bitter, ironic, very Donald Barthelme-like book in imitation of the epic form but actually making fun of the epic form and expressing, you know, his ultra-modern kind of disgust and despair and all this kind of business.

Banker: And what period are you talking about now?

Gardner: Oh, I don’t know about dates. Third century BC. One can find at the end of every great period decadent literature very much like ours. The difference is that we have for the first time—and it’s a great thing—real democracy, in which everybody can be educated. And as everybody begins to be educated and as everybody begins to say what education ought to be, then education changes, and so that the kind of values which make first-rate philosophy or art or anything else disappear—or become rare, at least. There are obviously lots of writers in America who are still concerned about great art and are trying to create it but, mostly, that’s not true.

Food for thought.

The interview ranges widely and it’s hard not to transcribe large parts of the rest, particularly, in considering the value of fiction, Gardner’s comparison of the way Nietzsche and Dostoevsky attacked the same philosophical problems, the first in abstract aphorism and the second in concretely realized fiction, and why Dostoevsky’s fictional interrogation of the Übermensch was more successful—and truthful.

Listen to the whole thing.

For more from Gardner on what’s great about Beowulf and what’s wrong with modern “realism,” check out this Paris Review interview from 1979, a year after the radio interview above. It’s paywalled but a generous, tantalizing chunk is available to read before it cuts off. I’ve written about Gardner here several times before, most importantly on his concept of fiction as the painstaking creation of a “vivid and continuous fictive dream.” This is a crucial idea to me, one I often reflect on. I also considered the role of sensory detail in Gardner’s “fictive dream” using the example of the novel Butcher’s Crossing here.

Literary cameos

Yesterday Alan Jacobs posted a longish recommendation of Francis Spufford’s latest novel, an alternate history detective noir titled Cahokia Jazz. I’m intrigued. But I especially enjoyed this minor note from the end of Jacobs’s post:

At one point, late in the story, our hero is at Cahokia’s railway station and happens to see a family, “pale, shabby-grand, and relocating with their life’s possessions”—including, curiously enough, butterfly nets: “white Russians on their way to Kodiak, by the look of it.” One of them, “a lanky twenty-something in flannels and tennis shoes,” is called by his family Vovka, and he briefly assists our hero. Then off they go, leaving our story as abruptly as they had arrived in it. Assuming that they made their way to Kodiak—or, more formally, as our map tells us, NOVAYA SIBIRSKAYA TERRITORII—it is unlikely that their world ever knew Lolita or Pale Fire.

This is “one of several delightful cameos” in the novel, and Jacobs’s recommendation and praise got me thinking about such cameos in fiction.

I haven’t read Cahokia Jazz yet, though I intend to, but I’m willing to take Jacobs at his word that Spufford does this well. The example he cites certainly sounds subtle enough to work. But done poorly, such cameos awkwardly shoehorn a well-known figure into the story and call unnecessary attention to themselves. Think Forrest Gump in novel form. They can also, if used to denigrate the characters in the story, turn into the kind of wink-wink presentist authorial irony that I deplore.

I think the best version of the literary cameo functions much like a good film cameo—if you spot the cameo and know who it is, it’s a nice bonus, but if you don’t it doesn’t intrude enough to distract. And, ideally, it will work with and add to the story and characterization of the main characters.

A good and especially subtle example comes from Declare, which I’m almost finished reading. Early in the novel we read of protagonist Andrew Hale’s background, specifically where he was in the early stages of World War II before embarking on his first espionage assignments in occupied France:

In November he successfully sat for an exhibition scholarship to Magdalen College, Oxford, and in the spring of 1941 he went up to that college to read English literature.

His allowance from Drummond’s Bank in Admiralty Arch was not big enough for him to do any of the high living for which Oxford was legendary, but wartime rationing appeared to have cut down on that kind of thing in any case—even cigarettes and beer were too costly for most of the students in Hale’s college, and it was fortunate that the one-way lanes of Oxford were too narrow for comfortable driving and parking, since bicycles were the only vehicles most students could afford to maintain. His time was spent mostly in the Bodleian Library researching Spenser and Malory, and defending his resultant essays in weekly sessions with his merciless tutor.

A Magdalen College tutor ruthlessly grilling a student over Spenser and Malory? That can only be CS Lewis.

They’re not precisely cameos, but I have worked a few real-life figures into my novels in greater or lesser supporting roles: David Howarth in Dark Full of Enemies, Gustavus W Smith and Pleasant Philips in Griswoldville. I’ve aimed a little lower in the name of realism, I suppose. But the precise dividing line between a cameo of the kind described here and a real person playing a serious role in a story is something I’ll have to figure out.

At any rate, a well-executed literary cameo is a joy. Curious to see who else might surprise us in the pages of Cahokia Jazz.

Further notes on Indy and Oppie

July was a big movie month here on the blog, with three reviews of movies ranging from “adequate compared to Kingdom of the Crystal Skull” to “great.” Two of them I’ve reflected on continually since seeing them and reviewing them here, especially as I’ve read, watched, and listened to more about them.

Here are a few extra thoughts on my summer’s movie highlights cobbled together over the last couple of weeks:

Indiana Jones and the Curse of Woke

When I reviewed Indiana Jones and the Dial of Destiny a month and a half ago, I didn’t dwell on the malign influence of woke ideology in its storytelling, only mentioning that I had justifiable suspicions of any Indiana Jones film produced by Disney. I wanted to acknowledge those doubts without going into detail, because after actually watching and, mostly, enjoying the movie, I found that the problems I had with Dial of Destiny weren’t political at all, but artistic. It isn’t woke, it’s just mediocre.

That didn’t stop a certain kind of critic from finding the spectral evidence of wokeness in the film and trumpeting their contempt for it. I’m thinking particularly of a caustic YouTube reviewer I usually enjoy, as well as this review for Law & Liberty, which comes out guns blazing and attacks Dial of Destiny explicitly and at length along political lines.

The problem with these reviews is that in their hypersensitivity and their mission to expose ideological propaganda they do violence to the object of their criticism, not just misinterpreting things but getting some things completely wrong. Here’s a representative paragraph from that Law & Liberty review:

Next, we cut to 1969, the Moon Landing. Indy is an old tired man, sad, alone, miserable. The camera insists on his ugly, flabby naked body. His young neighbors wake him up with their rock music and despise him. His students don’t care about his anthropological course. His colleagues give him a retirement party and soon enough they’re murdered, by Nazis working secretly in the government, with the complicity of the CIA or some other deep state agency. We see the wife is divorcing him; we later learn, it’s because his son died in war, presumably Vietnam—Indy told the boy not to sign up.

What was remarkable about this paragraph to me was how much it simply gets wrong. Indy’s hippie neighbors wake him up by blasting the Beatles, yes, but they also treat him perfectly amiably. (In fact, it’s Indy who knocks on their door armed with a baseball bat.) It is never clear that Voller’s men have help from the CIA or any other “deep state agency”; I kept waiting for that connection but it never came. And Indy did not try to stop his son from joining the army, a point made so clear in the film—Indy’s one stated wish, were time travel possible, would be to tell him not to join—that it’s staggering to think a critic went to print with this.*

From later in the same review: “But turning from obvious metaphors to ideology, Indy is replaced by a young woman, Helen [sic—her name is Helena], daughter of his old archaeological friend Basil, but the film suggests you should think of her as a goddess to worship.” One of my chief complaints about Dial of Destiny was its failure to deal with Helena’s criminality, giving her a half-baked or even accidental redemptive arc that spares her a face-melting, as befitted all similar characters in Indy’s inscrutable but always moral universe. That bad writing again. But how one could watch her character in action and conclude that the audience is meant to “worship” her is beyond me. This is anti-woke Bulverism.

What these hostile reviewers describe is often the opposite of what is actually happening in the film. I’ve seen multiple critics assert that Helena has “replaced” Indy and “controls” and “belittles” him. The Law & Liberty reviewer describes Indy as just “along for the ride.” Helena certainly intends to use him—she’s a scam artist and he’s a mark. This is all made explicit in the film. But it is also made explicit that Indy does, in fact, keep taking charge and leading them from clue to clue and that he is a much tougher mark than Helena was counting on.

Dial of Destiny’s actual problems are all classic artistic failures—poor pacing, overlong action sequences, plodding exposition, weak or cliched characters,** slipshod writing, and a misapprehension of what matters in an Indiana Jones movie that becomes clearest in the ending, when Indy is reunited (for the third time) with Marion. Here the filmmakers make the same mistake as the team behind No Time to Die by giving Indy, like Bond, romantic continuity and attempting to trade on sentimentality when that is not what the character is about.

Again—these are artistic problems. Helena Shaw isn’t a girlboss or avenging avatar of wokeness; she’s a poorly written villain who doesn’t get her comeuppance. But I saw little such criticism among the fountains of indignation from the reviewers who pursued the “woke Disney” line of criticism.

Perhaps this is the greatest curse of wokeness: that it distorts even its critics’ minds. Once they’ve determined that a movie is woke, they’ll see what they want to see.

Call it woke derangement syndrome and add it to all the other derangement syndromes out there. Woke ideology is real, even if the ordinary person can’t define it with the precision demanded by a Studies professor or Twitter expert, and it is pernicious, and it produces—even demands—bad art. It is a kind of self-imposed blindness, as are all ideologies. But zeroing in on wokeness as the explanation for bad art can blind us to real artistic flaws, and if any good and beautiful art is to survive our age we need a keen, clear, unclouded vision of what makes art work. We need not just a sensitivity to the bad, but an understanding of the good.

Douthat on Oppenheimer

On to better criticism of a better movie. Ross Douthat, a New York Times op-ed columnist who writes film criticism for National Review, has been one of my favorite critics for the last decade. Douthat begins his review of Oppenheimer with an abashed confession that he feels guilty saying “anything especially negative about” it, but that as brilliantly executed as it is, he is “not so sure” that it is “actually a great film.”

Fair enough. What gives Douthat pause, then? For him, the problem is Oppenheimer’s final third, which he sees not as a satisfying denouement but simply as a long decline from the height of the Trinity test, a decline complicated by thematic missteps:

There are two problems with this act in the movie. The first is that for much of its running time, Oppenheimer does a good job with the ambiguities of its protagonist’s relationship to the commonplace communism of his intellectual milieu—showing that he was absolutely the right man for the Manhattan Project job but also that he was deeply naïve about the implications of his various friendships and relationships and dismissive about what turned out to be entirely real Soviet infiltration of his project.

On this point I agree. As I wrote in my own review, I thought this was one of the film’s strengths. Douthat continues:

But the ending trades away some of this ambiguity for a more conventional anti-McCarthyite narrative, in which Oppenheimer was simply martyred by know-nothings rather than bringing his political troubles on himself. You can rescue a more ambiguous reading from the scenes of Oppenheimer’s security-clearance hearings alone, but the portions showing Strauss’s Senate-hearing comeuppance have the feeling of a dutiful liberal movie about the 1950s—all obvious heroes and right-wing villains, no political complexity allowed.

The second problem, as Douthat sees it, is that the drama surrounding Oppenheimer’s political destruction and Strauss’s comeuppance is unworthy of the high stakes and technical drama of the middle half of the movie concerning the Manhattan Project: “I care about the bomb and the atomic age; I don’t really care about Lewis Strauss’s confirmation, and ending a movie about the former with a dramatic reenactment of the latter seems like a pointless detour from what made Oppenheimer worth making in the first place.”

There is merit here, but I think Douthat is wrong.

I, too, got the “dutiful liberal” vibe from the final scenes, but strictly from the Alden Ehrenreich character. Ehrenreich is a fine actor unjustly burdened with the guilt of Solo, but his congressional aide character’s smug hostility to Strauss as Strauss is defeated in his confirmation hearing feels too pat, too easy. It’s Robert Downey Jr’s sympathetic and complicated portrayal of Strauss, not to mention the fact that the film demonstrates that, however Strauss acted upon them, his concerns about espionage and Oppenheimer’s naivete were justified, that saves the film from simply being standard anti-McCarthy grandstanding.***

Regarding the seemingly diminished stakes of the final act, I too wondered as I first watched Oppenheimer whether Nolan might have done better to begin in medias res, to limit himself strictly to the story of the bomb. But that story has already been told several times and Oppenheimer is very much a character study; this specific man’s rise and fall are the two necessary parts of a story that invokes Prometheus before it even begins.

The key, I think, is in the post-war scene with Oppenheimer and Einstein talking by the pond at Princeton. Nolan brings us back to this moment repeatedly—it’s therefore worth paying attention to. The final scene reveals Oppenheimer and Einstein’s conversation to us:

Oppenheimer: When I came to you with those calculations, we thought we might start a chain reaction that would destroy the entire world.

Einstein: I remember it well. What of it?

Oppenheimer: I believe we did.

Cue a vision of the earth engulfed in flames.

A technology that can destroy the entire world is not just the literal danger of Oppenheimer’s project, but a metaphorical one. The Trinity test proves fear of the literal destruction of the world unfounded, but the final act of the film—in which former colleagues tear each other apart over espionage and personal slights and former allies spy and steal and array their weapons against each other and the United States goes questing for yet more powerful bombs, a “chain reaction” all beginning with Oppenheimer’s “gadget”—shows us an unforeseen metaphorical destruction as it’s happening. The bomb doesn’t have to be dropped on anyone to annihilate.

This is a powerful and disturbing dimension of the film that you don’t get without that final act.

Finally, for a wholly positive appraisal of Oppenheimer as visual storytelling—that is, as a film—read this piece by SA Dance at First Things. Dance notes, in passing, the same importance of the film’s final act that I did: “The two threads are necessary to account for the political paradox of not just the a-bomb but of all technology.” A worthwhile read.

Addenda: About half an hour after I posted this, Sebastian Milbank’s review for The Critic went online. It’s insightful and well-stated, especially with regard to Oppenheimer’s “refusal to be bound” by anyone or anything, a theme with intense religious significance.

And a couple hours after that, I ran across this excellent Substack review by Bethel McGrew, which includes this line, a better, more incisive critique of the framing narrative than Douthat’s: “This is a weakness of the film, which provides all the reasons why Oppenheimer should never have had security clearance, then demands we root against all the men who want to take it away.”

Tom Cruise does the impossible

The most purely enjoyable filmgoing experience I had this summer was Mission: Impossible—Dead Reckoning, Part I. To be sure, Oppenheimer was great art, the best film qua film of the summer, but this was great entertainment. I enjoyed it so much that, after reviewing it, I haven’t found anything else to say about it except that I liked it and can’t wait for Part II.

Leaving me with one short, clearly expressed opinion—a truly impossible mission, accomplished.

Endnotes

* In fairness, the review has one really interesting observation: in reference to the film’s titular Dial being Greek in origin, unlike the Ark of the Covenant or the Holy Grail, “Jews are replaced by Greeks in the Indiana Jones mythology, since our elites are no longer Christian.” The insight here is only partially diminished by the fact that the elites who created Indiana Jones were not Christian, either. Steven Spielberg, Philip Kaufman, and Lawrence Kasdan—key parts of Raiders—are all Jewish.

** Here is where Dial of Destiny drifts closest to woke characterization. The agents working for Voller in the first half include a white guy in shirt and tie with a crew cut and a thick Southern accent and a black female with an afro and the flyest late 1960s fashion. Which do you think turns out to be a devious bad guy and which a principled good guy? But even here, I don’t think this is woke messaging so much as the laziness of cliché. Secondary characters with Southern accents have been doltish rubes or sweaty brutes for decades.

*** A useful point of comparison, also involving a black-and-white Robert Downey Jr, is George Clooney’s engaging but self-important Good Night, and Good Luck. Watch both films and tell me which is “all obvious heroes and right-wing villains.”

A thesis

The following started as only semi-serious off-the-cuff pontification in my Instagram “stories.” I’ve expanded on it and fixed a lot of autocorrect “help” along the way.

A favorite web cartoonist, Owen Cyclops, shared the following on Instagram this morning:

If you’re unfamiliar with semiotics, which I discovered via Umberto Eco late in high school, here’s the first bit of Wikipedia’s intro:

Semiotics (also called semiotic studies) is the systematic study of sign processes (semiosis) and meaning making. Semiosis is any activity, conduct, or process that involves signs, where a sign is defined as anything that communicates something, usually called a meaning, to the sign's interpreter. The meaning can be intentional, such as a word uttered with a specific meaning; or unintentional, such as a symptom being a sign of a particular medical condition.

The phrase “usually called a meaning” should give you some sense of how arcane, abstract, and high-falutin’ this can get. Emphasis on abstract. But semiotics is not really my point, here. Owen’s cartoon brought Dr Johnson’s refutation of Berkeley to mind. Per Boswell:

After we came out of the church, we stood talking for some time together of Bishop Berkeley’s ingenious sophistry to prove the non-existence of matter, and that every thing in the universe is merely ideal. I observed, that though we are satisfied his doctrine is not true, it is impossible to refute it. I never shall forget the alacrity with which Johnson answered, striking his foot with mighty force against a large stone, till he rebounded from it, “I refute it thus.”

This is the “appeal to the stone.” Wikipedia classifies it as “an informal logical fallacy.” I don’t care. When confronted with academic disciplines that have descended to this level of abstraction, I join Dr Johnson’s stone-kicking camp.

At some point, something has to be real. Argument divorced from concrete reality simply turns into sophisticated dorm room bickering.* That’s what Owen’s cartoon captures so well—argue about the “meanings” of “signs” like carrot tops and foxholes all you want, the real carrot and the real fox are going to present an inarguable ultimate meaning to those rabbits. I refute it thus.

I was struck that Wikipedia’s article on Johnson’s stone-kicking compares this appeal to the reductio ad absurdum, which it also treats as a fallacy. Its full article on the reductio is more circumspect, classifying it as a legitimate line of argument, though I’ve always regarded the reductio more as a useful rhetorical device, a way of comically** setting the boundaries to an argument or of twisting the knife once the logic has worked itself out as impossible. But, tellingly, the article’s “see also” points us toward slippery slope. This is, of course, described not just as an informal fallacy but “a fallacious argument.” I contend that slippery slope is not a fallacy but, at this point, an ironclad empirical law of Western behavior.

And that’s what brought the late Kenneth Minogue to mind. In my Western Civ courses I use a line from his Politics: A Very Short Introduction, to impart to students that the Greeks and Romans were different from each other in a lot of fundamental ways. Chief among these differences was the Greek and Roman approach to ideas:

The Greek cities were a dazzling episode in Western history, but Rome had the solidity of a single city which grew until it became an empire, and which out of its own decline created a church that sought to encompass nothing less than the globe itself. Whereas the Greeks were brilliant and innovative theorists, the Romans were sober and cautious farmer-warriors, less likely than their predecessors to be carried away by an idea. We inherit our ideas from the Greeks, but our practices from the Romans.

Succinct, somewhat oversimplified, sure, but helpful to students who mostly assume the Greeks and Romans were the same, just with redundant sets of names for the same gods. It’s also correct. Minogue goes on to note that this mixed heritage manifests differently culture to culture, state to state, but that “Both the architecture*** and the terminology of American politics . . . are notably Roman.”

Were, I’d say.

So, a thesis I’ve kicked around in conversation:

Given Minogue’s two categories of classical influence, as the United States was founded along (partially but significantly) Roman lines by men who revered the Romans, a large part of our cultural upheaval has arisen as the country has drifted more Greek—becoming progressively more “likely . . . to be carried away by an idea.”

The emphasis has shifted from the Founders’ “Roman” belief in institutions governed by people striving for personal virtue to a “Greek” pattern of all-dissolving ideologies pursuing unachievable ends. This reflects both political and social changes. Like Athens, the US became more aggressive and more inclined to foreign intervention the more it embraced democracy not just as a system but as an end. And note the way that, when an ideal butts up against an institution in our culture, it’s the institution that’s got to go—as does anything that stands in the way of the fullest possible fulfilment of the implicit endpoint of the ideal. How dare you impede my slide down this slope, bigot.

And this is not a new problem. A whole history of the US could be written along these lines.

* During my senior year of college I once listened to two roommates argue over whether the Trix Rabbit was a “freak of nature.” This lasted at least an hour. Take away the humor and you’d have enough material for several volumes of an academic journal.

** Comically, because what’s the point of arguing if you can’t laugh the whole time? That’s not an argument, but a quarrel. See note above.

*** Not always for the best, as I’ve argued before.

Against content

In the latest episode of “Half in the Bag,” during an interlude regarding The Whale, streaming entertainment, and the recent cancelation of shows with low viewership, hosts Mike and Jay enter their weary satirical mode:

Mike: [zombie-like] Watch the programs on your TV.
Jay: [laughs]
Mike: Watch programs. Watch the movies that we talked about, I guess, or don’t. Watch the programs. Programs!
Jay: Content!
Mike: Content!
Jay: Content! It’s not “movies” anymore, it’s “content.”
Mike: Watch those contents.
Jay: It’s not TV shows, it’s content. “Willow” didn’t make enough money with its content so they canceled it. Move on to next content. “Mandalorian” season three is not doing as well as season two.
Mike: Bring back Grogu.
Jay: Bring more content. Give me content.

And, finally:

Mike: [demonic, surrounded by flames] CONSUME MORE CONTENT.

They put it even more succinctly a few years ago.

If there are any themes to speak of in this hodgepodge of a blog, this commonplace book, one of them is surely the idea that not only the things we talk about but the way we talk about them matters. This applies not only to our meaning but to the individual words with which we express it. Words have meaning. They should be used precisely and with care. This isn’t pedantry. As George Orwell argued in an essay I’ve invoked here many times, sloppy language leads to sloppy thinking. And vice versa.

The trends Orwell and others in his day noted have continued uninterrupted. We’re living at the sharp end of a long period of careless, apathetic imprecision in how we speak, write, and think. To be precise: a decline. The signs are everywhere. But nowhere is the sloppiness, vagueness, imprecision, and muddle more pronounced than in the way people talk about art and creativity. And again, this can be seen most clearly at the vocabulary level—words.

The movies are particularly vulnerable to the rot, especially in popular discussion of the unstinting flux of superhero movies, remakes, video game adaptations, and streaming series. “Franchises” and “IP”—business terms that stink of the boardroom and the copyright lawyer—are commonplace ways to talk about movies now.

But the vilest, the stupidest, the most insidious and invasive of all of today’s sloppy art language is “content.”

A word that, like franchise and IP, began as a lowest common denominator legal term is now the default among even the general public. Instagram and YouTube users tell their favorite photographers and video essayists “Great content” and “Quality content” and “I love your content!” All of which are apparently meant to be compliments.

A book has a table of contents to tell you the most important thing about itself—what precisely is in it, what specifically you can expect. Ditto the lists of contents on medicine bottles, shipping containers, and boxes of Legos. But as used today content means nothing more than “stuff.” Everything is content. Novels and short stories are content. Movies and YouTube videos are content. Photos are content. Music is content. Book reviews and blog posts and longform essays are content. The news is content. If all of these things and more can be called by the same word, the word is useless.

I am writing a blog post right now. This morning I wrote announcements for my students and e-mails for my colleagues. Last night I passed the 30,000-word mark on a short novel. This week I revised and submitted an epic poem to an online magazine. A few weeks ago I drafted, revised, and submitted a short story to another. At supper the other night I drew Puss in Boots on a napkin for my son. Are all these things just so much content?

Once upon a time, art was specific. We described it with a huge and sometimes highly specified vocabulary. It was rich in specific nouns and precise verbs. Writers and journalists wrote stories. Authors wrote novels. Musicians composed or improvised thousands of kinds of music and played hundreds of different instruments. Poets composed poems—or, if you go back even further, they shaped songs. The vast team of the film crew wrote, directed, blocked, lit, costumed, miked, slated, shot, cut, and printed whatever part of the film fell within their prerogative. What do creative people do now? They “produce” “content.”

Produce, like a factory. Mechanically, seemingly automatically, with no single person to credit and in great quantities. Should we be shocked that the quality has suffered? My colon produces content.

And what do you do with content? Verbs again. We don’t read, watch, look at, listen to, or even think about content. Sure, all of those faculties are engaged on some low, barely involved, power-saving level, but what we do is consume it. Like a fire, a monster, a glutton, a plague of locusts, or a wasting disease. I have actually heard living human beings use one of the handful of breaths they get in their one precious finite life to begin a sentence with, “When I consume content…”

I am become content, the destroyer of worlds.

“So what?” I imagine lots of people saying. “That’s just how people talk. Let them like what they want to like.” First, no. Second, consider the consequences of our imprecision, the knock-on effects. Just like produce uses the language of the factory to erase the artist and his craft, discipline, and hard work, content boils the vast universe of art into whatever porridge-like slurry fills a particular vessel. Doesn’t matter what it is as long as there’s something in there to be consumed. Then you can demand more. And complain that it’s not very good.

Art is unique. Content is interchangeable. Art is irreplaceable. Content is disposable. Art is challenging. Content is numbing. Art strengthens. Content atrophies. Art satisfies. Content addicts. Art demands excellence. Content needs only to be available. Art endures. Content fades, falls apart, and is forgotten. Art is life. Content is death.

I keep wanting to hedge or claim I’m using hyperbole to make a point, but I really mean this. The state of the arts, of creativity, culture, and most especially storytelling, is dire. And the sloppiness with which we talk about producing content rather than making art only makes things worse. It’s a vicious cycle. To take it back to Orwell:

[A]n effect can become a cause, reinforcing the original cause and producing the same effect in an intensified form, and so on indefinitely. . . . It is rather the same thing that is happening to the English language. It becomes ugly and inaccurate because our thoughts are foolish, but the slovenliness of our language makes it easier for us to have foolish thoughts.

But note that I said “dire” but not “hopeless.” Following that passage from Orwell, the very next sentence is this: “The point is that the process is reversible.”

It starts with each of us and our habits of speech. If you’re an artist, don’t call your work “content.” Call it “work.” Don’t be a “content creator.” If you write stories, paint pictures, compose music, or even make videos, call it that. And insist on it. Whether your work is a hobby or a profession, claim it specifically, as an honorable way to spend your life doing something specific and meaningful. Show yourself and your work some respect.

And if you are only a “consumer” meant to “consume,” get specific again. Don’t talk about, praise, or even criticize “content” any more. Talk about stories, movies, videos, music—all of it, specifically, and what the real people behind art have to do to make it—and don’t consume them, but watch, read, listen to, or even just look at them. In a precise word, enjoy them. That, too, is honorable. For what other reason would an artist make art except for people to enjoy it?

I don’t know how to save all of art or to encourage a new wave of creativity or to save Western Civilization, but speaking and writing precisely and specifically, not settling for content—whether as mechanical producer or gluttonous consumer—will force us to think in new ways about our stories, music, and art and how and why we make them. More precisely, more specifically, and hopefully in the cause of truer, better, and longer lasting art.

Beethoven, art, criticism, and enjoying yourself

From the late Edmund Morris’s Beethoven: The Universal Composer, an excellent short biography for the Eminent Lives series, on the composition Beethoven undertook during a dark period before the premiere of his Eighth Symphony in December 1813 and his hugely successful revision of Fidelio:

He went about the task of composing the “Battle Symphony” (known in Germany as Wellingtons Sieg, or “Wellington’s Victory”) with typical professionalism, expanding it to two movements and throwing in “Rule Britannia” for good measure. After scoring it for Panharmonicon, he composed an alternative version for grand orchestra. This enabled him to indulge his love of military field drums, beginning the piece with two enormous rattling crescendos in contrasting rhythms, as if marshaling his aural forces. In the ensuing “battle,” he marked 188 exact cues for cannon fire, with solid dots for British artillery and open ones for French, plus twenty-five musket volleys of precise length and direction, indicated by tied, trilled ghost notes. He synchronized all these salvos with his music so precisely that at the height of the conflict, six cannonades and two musket volleys went off within three seconds.

The “Battle Symphony” commemorates Arthur Wellesley’s victory at Vitoria, which resulted in his elevation to the peerage as the Duke of Wellington. I’ve loved Beethoven since childhood but am by no means a connoisseur, so I had to look the “Battle Symphony” up. It’s wild. In addition to “Rule, Britannia,” it incorporates “God Save the King” as a leitmotif for the British and a French folk song better known in English as “For He’s a Jolly Good Fellow” or “The Bear Went Over the Mountain.” I’ve embedded von Karajan’s performance in this post; you really have to hear it.

Morris points out several times that critics—actual connoisseurs, unlike yours truly—hold Beethoven’s “Battle Symphony” in pretty low regard. But he also offers this important caveat:

The “Battle Symphony” is by scholarly consensus the worst potboiler Beethoven ever composed, infamous for noise and naïveté. Yet its disparagers ignore that he obviously enjoyed writing it, and that its huge popular success—fanned by Prince Karl Schwarzenberg’s defeat of Napoleon at Leipzig in mid-October—helped pull him out of the Slough of Despond.

Critical consensus matters, and over enough time it often sifts what is best from what isn’t, but popular success and pure personal enjoyment matter, too. Sometimes it’s good to remember that. In any art form, if you’re not having fun doing it at least some of the time, why bother?

Bonus trivia, or, When Interests Collide: The idea for the “Battle Symphony” came from Beethoven’s acquaintance Johann Nepomuk Mälzel, an inventor who wanted something topical to play on the mechanical orchestra contraption mentioned above. The name sounded familiar to me, and I finally realized where I’d run across it: an early and influential Southern Literary Messenger essay by none other than Edgar Allan Poe, in which he exposes as a hoax a chess-playing automaton called The Turk that Mälzel had exhibited up and down the East Coast. Small world.

My problems with Glass Onion

Benoit Blanc in Glass Onion: “No, it’s just dumb!” Note the literal lampshade.

I’ve mentioned twice now, once in my initial review and once in my 2022 at the movies post, that I had some nagging misgivings about Rian Johnson’s latest Benoit Blanc mystery, Glass Onion. I’ve been mulling those problems over ever since I saw the movie around Thanksgiving and wondering whether I should ever try to work through them in writing. Well, a couple of weeks ago my friend Danny Anderson of The Sectarian Review offered a short, pointed critique of the film that I’ve taken as permission and encouragement to do the same.

As I wrote at the time I first saw Glass Onion, I can’t lay out my problems with it without giving too much away, so consider this a spoiler warning.

Briefly, what I most admired and enjoyed about the film when I first saw it was its intricate structure and its humor. I think I mostly stand by that, though the humor has soured on me somewhat since. We’ll get to my deeper problems momentarily.

In his post, Danny faults Glass Onion for being clever but hollow, for jerking the audience around by offering a mystery without an actual mystery, and for its self-righteous indulgence in pillorying shallow, cartoonish characters.

Any disagreements I have with Danny’s assessment are only in degree, not kind. Or to put it another way, I agree with every point here, albeit with differing levels of intensity. To take these one at a time:

  • Glass Onion’s structure still impresses me, but as other elements of the story have continued to bother me I’ve come to see the film’s fugue-like transparent layers as unworthy of the story it tells. It’s like a perfectly crafted sonnet in praise of cannibalism.

  • I agree completely with Danny about the way the film manipulates the audience. Johnson’s self-awareness, the constant calling of attention to storytelling conventions and what he is doing, goes beyond the tongue-in-cheek or the meta to the pathological. Johnson displays an utter contempt not only for the characters he creates—leading one to ask “Why bother?”—but for his audience. Again, why bother? Is this purely about showing off?

  • Danny’s last criticism, Johnson’s political point-scoring via ridiculous caricature, is where he spends most of his time, and while I agree completely on this point, the characters bother me somewhat less because Glass Onion is pretty clearly a farce. For all the music-box intricacy of his plotting, Johnson doesn’t deal in nuance when it comes to human beings. I don’t necessarily like that (note that in my original review I described every character as “annoying”), but I’m willing to give it a pass purely for the sake of the genre.

To these I would add a few more misgivings of my own, some of them minor and technical, others, like Danny’s most serious complaints, what John Gardner called “faults of soul.”

First, and related to Danny’s point that Johnson continuously plays false with the viewer, Glass Onion flouts the classic rules of fair play in a whodunit—namely Ronald Knox’s ten commandments of detective stories. The film bends or breaks several of these, as you can read about in greater detail here, including by artificially withholding important clues. But the biggest and clearest cheat is against rule ten: “Twin brothers, and doubles generally, must not appear unless we have been duly prepared for them.” Compare The Prestige, a film in which twins are part of the mystery’s solution and in which Christopher Nolan sets this revelation up expertly.

Two possible rejoinders occur to me: First, that rules are made to be broken, a point I’ve made plenty of times myself. Agatha Christie rather famously violated a number of Knox’s rules. And second, that Glass Onion is not really a whodunit after all, but a combination revenge story and heist caper. To these I say: Rian Johnson is no Agatha Christie, who could match her mastery of plot and boldness in experiment with genuine compassion and a keen understanding of human nature; and presenting the audience with a mystery only to have it turn out to be something else is itself just another dodge. And don’t make me bring up “subverting expectations.”

Second, and related to Danny’s argument about political point-scoring, there is Johnson’s obvious and already much commented-upon pandering to leftwing identity politics. But this is so much the norm for Hollywood now that it feels pointless to complain about. (Interestingly, both Danny and I discerned that Johnson’s worldview is shaped entirely too much by the anti-discourse of Twitter, a point that even made its way into the Honest Trailers spoof of Glass Onion.)

But—to use the same note about politics as a jumping-off point—my most serious misgivings always had to do with the climax of the film. When the aggrieved Andi finds her mission of vengeance stymied by Miles Bron and company, she simply starts smashing his collection of glass curios, a spree of vandalism that culminates in a (somehow) non-lethal explosion that destroys Bron’s glass onion house, an act Andi can only top by deliberately destroying the Mona Lisa.

Remember Johnson’s political pandering, and the strawmen he has peopled his film with, and remember as well that Glass Onion takes place in the late spring of 2020, a setting Johnson mines for more than quarantine and masking jokes. I’d wager that a climax in which injustice is not corrected but simply reacted to with a childish tantrum—by breaking other people’s stuff, setting things on fire, and destroying art—is not coincidental. And I’d argue absolutely that this is an instinct that does not need to be encouraged, much less held up as the satisfying final act of a drama of theft and restoration.

So the more I’ve reflected on Glass Onion, the more it’s struck me as precisely what Danny described in his post: hollow and self-satisfied, slick but contemptuous, a triumph of “precociousness over substance,” and a marriage of political shallowness with irresponsible virtue signaling. And these problems—“faults of soul,” as I mentioned above—originate with the film’s creator.

As so often, Chesterton comes to mind: “A good novel tells us the truth about its hero; but a bad novel tells us the truth about its author.”

Joel Coen on movies vs TV

In my 2022 movie year-in-review I mentioned my exhaustion with TV and my preference for movies. Joel Coen, in a 2020 podcast conversation with longtime cinematographer Roger Deakins about why he and his brother Ethan have stuck to movies and not ventured into TV, explains a little of what goes into my preference:

[L]ong-form was never something we could get our heads around. It’s a different paradigm. Not to be shitty about it, but you can look at stories that they have a beginning, middle, and end. But so much of television has a beginning, a middle, a middle, a middle, a middle, until the whole thing dies of exhaustion. It’s beaten to death and then you find a way of ending it.

We’ve all watched TV shows like this. Even some of our favorites fit the arc Coen describes here.

One of the reasons I hope movies and movie theatres survive is that the discipline of the form makes moviegoing better than binge-watching even a good TV show. The discipline of the filmmakers to turn out a compact, well-crafted, self-contained jewel—rather than giving themselves permission, as so many TV showrunners do, to sprawl all over the place—and the discipline of the audience, who start a story and cannot stop it, receiving it continuously in the form the filmmakers intended: these are virtues that dissipate in the size and potential aimlessness of a TV series.

There are exceptions, of course, but who has time to find them? And I’ll carve out space for mini-series, which demand some of the same beginning-middle-end discipline as a two-hour drama. Not for nothing are the five-episode Chernobyl and the six-episode The Night Manager the best TV I’ve seen in the last few years.

I’m currently listening to the full Deakins-Coen interview on my commute between campuses. I discovered it and the passage above thanks to this short post from World of Reel.