Sunday, April 5, 2026

Los Angeles: What Can the 1920s Tell Us About the 2020s?


Los Angeles feels unusually haunted by its own past because the city still wears, in stone and stucco, the ambitions of the 1920s. Walk through Hollywood, downtown, Koreatown, or the old boulevard corridors, and you keep meeting the 1920s again and again: movie palaces, courtyard apartments, ornate office blocks, grand churches, towers, stairways, and façades built with the confidence of a city that believed its future would be bigger than its present.

That impression is not merely sentimental. A remarkable share of central Los Angeles’s visual identity really does come from the great metropolitan surge of the 1920s, when population growth, speculative real estate, automobile culture, oil wealth, and the rise of the film industry all reinforced one another. (Encyclopedia Britannica)

So what can the 1920s tell us about the 2020s? First, they remind us that Los Angeles has always been a boom-built city. Its character was shaped not by slow continuity but by bursts of reinvention. The 1920s were one of those rare moments when nearly everything seemed to expand at once: neighborhoods, studios, churches, institutions, commercial corridors, and civic self-belief. Buildings from that era still feel charged with momentum because they were erected by people who thought they were standing at the start of something immense. (Encyclopedia Britannica)

Second, the 1920s suggest that Los Angeles thrives when it possesses a convincing story about its own future. That is precisely what feels unstable in the 2020s. The film economy has weakened, with FilmLA reporting a 22.4% decline in on-location filming in the first quarter of 2025. Office real estate has remained under severe pressure; Cushman & Wakefield reported overall Los Angeles office vacancy at 24.1% in Q2 2025. And the mood downtown is troubled enough that the Los Angeles Times recently summarized it in plain words: “DTLA is hurting.” (FilmLA) From vacant offices to vacant restaurants to crime, the picture is dark.

The deeper lesson, then, is not that the 1920s were simply glorious while the 2020s are simply grim. It is that Los Angeles has always depended on imagination as much as economics. The city becomes most itself when it believes in its next act. The poignant thing about Los Angeles now is that it still looks like a boomtown of destiny, while often feeling like a metropolis in retrenchment. The arches, towers, and old façades of the 1920s are more than relics. They are reminders that this city was built before, and at high speed, by people who were certain the future was arriving here first. The question for the 2020s is whether and how Los Angeles can recover that confidence in a new form.

AI Dialog: Comparing DUEL (1971) and FALL (2022)

THIS IS MID-DIALOG, WHERE I RAISED THE COMPARISON OF "DUEL" AND "FALL". For the original start of the dialog, see further below.

###

FALL, about two women trapped on a high desert radio tower, reminds me of the Steven Spielberg movie DUEL because it is a paradoxical mix of claustrophobic and wide open.

The truck movie takes place in a wide open desert, yet it feels claustrophobic because much of it is filmed inside the car, which is very small and always has the truck on its tail in the rearview mirror. So it is a mix of helicopter shots of wide open desert and great claustrophobic terror. Similarly, in the movie FALL, they are high in space and surrounded by nothing but air and sky, and it feels claustrophobic because this is all filmed within a few feet of the actors' faces and bodies on the tiny platform. Contrast and compare.

###

Yes — that is a very sharp comparison, and I think you’ve put your finger on the key formal paradox in both films.

Both Duel and Fall are built on a tension between openness of setting and imprisonment of experience. Duel takes place on open roads in broad daylight across empty desert and scrubland, yet Spielberg repeatedly traps us inside David Mann’s subjective space: the car interior, the windshield frame, the side mirror, the rearview mirror, the speedometer, the panicked glances. Critics and commentators have long noted how much of the film’s suspense comes from POV driving shots and from the truck’s recurring presence in mirrors rather than from any ornate plotting. (citizencant.com)

Fall uses the same contradiction in a vertical form. The women are surrounded by immense empty sky, but that openness does not feel freeing; it feels like the absence of all escape routes. Reviews emphasized exactly this effect: wide and overhead shots establish terrifying exposure, while the film then cuts back into the cramped bodily space of a tiny platform, keeping the suspense close to faces, hands, feet, and the corroded metal around them. (Roger Ebert)

So the two films share a structural trick:

maximum physical openness, maximum psychological enclosure. (citizencant.com)

But they achieve it in slightly different ways.

In Duel, the claustrophobia is created by pursuit. The road is open, but David Mann is never free because the truck occupies his mental field. Even when it is not literally bumper-to-bumper, it invades the frame as menace: in the mirror, in the sound design, in his anticipatory panic. And because Spielberg keeps the truck driver largely unseen, the threat becomes strangely abstract and omnipresent, almost less a person than a system or force. That helps make the desert feel not liberating but exposed and inescapable. (Hagerty UK)

In Fall, the claustrophobia is created not by pursuit but by immobility. Nobody is chasing them. The prison is the platform itself. The enormous surrounding space makes their confinement feel harsher, because every inch beyond the platform is unusable void. In Duel, the hero can keep moving but cannot shake the terror. In Fall, the heroine cannot move at all except within a space of a few feet. That is why the film feels both aerial and suffocating. (Roger Ebert)

There is also a nice contrast in how the films use scale.

In Duel, the truck is huge relative to the car. The visual drama is based on disproportion: one small vulnerable machine harried by one enormous brutal machine. The landscape stays horizontal and the danger keeps arriving from behind or alongside. In Fall, the human body is tiny relative to the tower and the sky. The scale shock is not lateral but vertical. Instead of being dwarfed by another vehicle, the characters are dwarfed by altitude itself. (citizencant.com)

The result is that Duel is about pressure, while Fall is about exposure.

  • Duel: compression, pursuit, tailgating, mechanical aggression. (citizencant.com)

  • Fall: precariousness, vertigo, thirst, bodily vulnerability, dead air around the body. (Roger Ebert)

Another difference is in the emotional texture. Duel is paranoid. The world feels irrational because the attack has no clear motive. Fall is more survivalist and corporeal. Its fear is less “Why is this happening to me?” and more “How long can the body endure this?” That is why Duel feels like a nightmare of modern mobility, whereas Fall feels like an ordeal movie with horror mechanics. (Senses of Cinema)

But your larger point holds beautifully: both films understand that cinematic claustrophobia does not require walls. In fact, sometimes the absence of walls is worse. A tiny car in a huge desert and a tiny platform in a huge sky both become traps because the camera teaches us to feel the characters’ usable world shrinking to almost nothing. (citizencant.com)

So I’d put it this way:

Duel is claustrophobia inside motion.
Fall is claustrophobia inside exposure.

And both rely on the same cinematic paradox: the more open the world becomes, the more imprisoned the protagonist can feel.

If you like, I can also compare Fall to Open Water, which uses almost the exact same formal paradox, but with the sea instead of the sky.

ORIGINAL START HERE:

There was an indie horror movie a year or two ago about two people, I want to say two girls, who climb a very high tower, several hundred feet high, and then the ladder breaks, and they are trapped up there, and they have no food or water and their cell phones die. If I recall, one girl is rescued at the end.

###

You’re almost certainly thinking of Fall (2022).

It’s about two young women who climb an abandoned 2,000-foot TV/radio tower, the ladder gives way, and they end up stranded at the top with almost no supplies. By the end, one survives and is rescued. (Wikipedia)

A few details in your memory line up especially well:

  • two women

  • very high tower

  • ladder breaks

  • trapped without food/water

  • phone problems

  • one rescued at the end (Wikipedia)

One small correction: it was 2022, so about 3–4 years ago, not just one or two. (Wikipedia)

The tower in the film was inspired by the real Walnut Grove broadcast tower in California. (Wikipedia)

###

Not as an actual event.

The movie’s story is fictional, but it was visually inspired by a real broadcast tower: the KXTV/KOVR tower in Walnut Grove, California, which is about 2,049 feet tall. Director Scott Mann has said the film’s fictional tower was inspired by that real structure.

So the split is:

Real tower: yes.
Real incident of those two women getting stranded there: no, not as the basis of the film.

And one small wrinkle: in the movie, the tower is presented as an abandoned “B67” tower, but the real Walnut Grove tower is an actual communications structure, not that fictional backstory.

###

How does the movie end? Spoilers are OK. I think one of the girls actually dies in the middle of the movie? But that is not revealed. I understand the film was very challenging because they did not film 1,000 feet high, but they did film at the top of a fairly high real tower constructed for the movie.

###

Yes — your memory is basically right.

How it ends:
Hunter appears to survive for much of the second half and keeps talking with Becky, helping her think through survival options. But near the end Becky realizes that Hunter actually died earlier when she fell onto the satellite dish and then down the tower. Becky has been hallucinating her presence because of dehydration, exhaustion, and trauma. (Wikipedia)

After that reveal, Becky uses Hunter’s body in a grim but practical way: she places her phone in Hunter’s shoe/body so it can be dropped farther down, where it finally sends a text and photo message to her father. Rescue then comes, and Becky survives. (EW.com)

On the production side, your understanding is also right. The filmmakers did not shoot at 1,000 or 2,000 feet. Director Scott Mann said they built the top section of the tower on top of a mountain in the Mojave Desert, so the actors appeared to be thousands of feet up while actually being about 100 feet off the ground. They chose that instead of relying mainly on green screen. (Wikipedia)

So the movie’s trick was:

real height enough to terrify the actors, plus mountain elevation to fake extreme altitude. (MovieWeb)

###

I assume the actress had clips for safety, which were taken out in post reduction

###

Yes — almost certainly. They would have been using safety harnesses, rigging, and fall-protection gear, with at least some of it hidden by framing and some of it removed digitally in post. Director Scott Mann said the film was shot largely for real on a tower-top set about 100 feet above the ground, but he also explicitly mentioned “wire paint outs” in postproduction. (Post Magazine)

So the usual formula seems to have been:

real physical height to create genuine fear and body language,
plus hidden safety clips/wires,
plus digital cleanup afterward. (Post Magazine)

That is also why the movie feels so unnerving. It was not just actors on a green screen stage. They were genuinely up on a high practical set, but not recklessly unprotected. Scott Mann described the shoot as mostly practical, with the straight-down shots extended digitally and with some wire removal in post. (Post Magazine)

So your phrase “taken out in post reduction” is basically right — in film language, painted out in post.

###

How was the movie received critically?

###

It was received pretty well, though not reverently.

Critics mostly treated Fall as a lean, effective B-thriller: a film with a somewhat absurd premise that nonetheless delivered real tension and vertigo. On Rotten Tomatoes, the critics’ consensus was that it is “fundamentally absurd” but also a “sustained adrenaline rush,” which captures the overall tone of the response nicely. (Rotten Tomatoes)

The review scores landed in the solid-to-good range rather than the “great film” range. Rotten Tomatoes listed it at 79% with critics, while Metacritic had it at 62, which is generally interpreted as mixed to positive reviews. (Rotten Tomatoes)

The split in opinion was fairly clear. Supportive critics praised its precision, nerve, and sheer suspense mechanics. The Guardian, for example, called it a “wildly effective survival thriller” and admired how much excitement it wrung from such a stripped-down premise. (The Guardian)

More skeptical critics tended to say: good concept, shakier execution. RogerEbert.com called it a “misfire,” arguing that the premise only takes the movie so far and criticizing elements of the acting, editing, and technical execution. (Roger Ebert)

So the fairest summary is:

Critically, it was respected as a tense, clever, efficiently made thriller, but not generally regarded as a major film. The praise was mostly for physical suspense and nerve, while the criticism centered on implausibility, repetition, and uneven execution. (Rotten Tomatoes)

That actually fits the kind of movie it is: a picture people admire for making their palms sweat, even when they roll their eyes a little.

###


The 1948 Freeze on TV Licenses

 I am reading a new biography of Sid Caesar that includes a great deal of detail about television in the 1950s. I understand that there was a major regulatory decision with enormous economic and market consequences for the development of television. As I understand it, there was a freeze on new television station licenses from 1948 until about 1952 or 1953. Is that correct, and why did it occur?

I have also heard that when the freeze ended, roughly 2,000 new television licenses became available, covering perhaps 1,000 different cities. If so, that must have rapidly expanded television into rural areas and smaller markets, bringing in a huge new audience. Presumably this had major effects on the character of television programming itself. It also must have had a dramatic effect on television-set sales and on the broader economy, as television reached many new communities where it could begin to alter consumer habits—for example, keeping people at home on Saturday nights rather than sending them out to restaurants or movies.

In addition, the prospect of putting sets into thousands or millions of new homes must have accelerated competition among manufacturers, encouraging greater investment in research and development in order to gain technical or product-feature advantages in what had suddenly become a much larger mass market.

Please discuss this episode with attention both to the regulatory history and to its wider consequences: for television programming, for the media marketplace, and for the television manufacturing industry, including competition and R&D.

###

Yes—the freeze was real, and it was one of the most consequential regulatory decisions in the history of American television. But the dates in the version you heard are a bit off. The FCC stopped granting new TV station construction permits in September 1948, expecting a short pause; instead, the “freeze” lasted until the FCC’s Sixth Report and Order in April 1952, with processing of new applications beginning July 1, 1952.

The reason was not simple bureaucratic delay. The FCC had discovered that the first wave of TV growth had created a technical and planning mess. Too many stations were being placed too close together, causing co-channel and adjacent-channel interference. The Commission also realized that the existing 12 VHF channels were not enough for a truly national television system. During the freeze it reworked the channel-allocation map, added 70 UHF channels, negotiated border issues with Canada and Mexico, created a permanent framework for educational television reservations, and wrestled with questions around power levels, antenna heights, spacing rules, and color television standards.

The scale of the post-freeze expansion was enormous. The FCC’s 1952 report said the new nationwide VHF-UHF table made available 2,053 channel assignments in 1,291 communities, replacing an older table that had provided only about 400 assignments in 140 metropolitan areas. So the broad thrust of what you were told is right: the Commission moved from a system concentrated in relatively few large urban markets to one that could, in principle, reach much of the country.

That did not mean 2,000 stations instantly appeared in 1953. The FCC itself warned that expansion would be gradual. By the end of fiscal 1953, roughly 90 newly authorized stations had begun operation, and more than 700 applications had been on file when processing resumed. So the regulatory decision was a giant opening of the gate, but the flow through the gate was slower than the raw “2,000” figure suggests.

Still, the economic implications were huge.

First, the freeze had concentrated television in the largest cities. When the freeze ended, the FCC explicitly prioritized bringing a first local station to places with no service and extending TV to communities more than 40 miles from an existing station. That meant television was no longer mainly an urban Northeastern phenomenon. The audience base broadened geographically and socially. Britannica notes that early TV audiences had been concentrated in cities within range of major stations, and that the changing audience throughout the 1950s affected programming.

Second, the station boom changed the network and affiliate marketplace. More stations meant more affiliates, more local ad inventory, more need for audience measurement, and more bargaining over which network got carried in which markets. Arbitron’s history notes that once the freeze was lifted and new stations proliferated, advertisers urgently needed ratings services in many more markets. (MBC)

Third, it changed programming economics and eventually programming style. Before the thaw, live New York-centered television could dominate because the audience was relatively concentrated. After the thaw, the national market became broader and more heterogeneous. That favored programming that could travel more easily across time zones and local schedules and that could be cleared by many affiliates. Historians of the period note that post-freeze station growth, especially with many financially fragile UHF stations, helped make filmed programming more attractive and accelerated changes that weakened the earlier live, New York-heavy “Golden Age” model. (authfilmcentre)

Fourth, it intensified competition among the networks in asymmetric ways. NBC and CBS were best positioned because they already had stronger station relationships and radio-era resources. ABC improved after its 1953 merger with United Paramount Theatres. DuMont, by contrast, was hurt because many post-freeze opportunities were in UHF, and UHF was a bad place to be before all-channel receivers became standard. The FCC itself said there was a “sudden surge of interest in UHF,” but UHF’s economics were much weaker in practice because many receivers still needed converters or modifications, and VHF incumbents had a huge head start.

On the television-set industry, the timing was nearly ideal for a manufacturing boom. TV ownership was already rising rapidly during the freeze, but the end of the freeze expanded the number of places where buying a set now made practical sense. Census history reports that the share of households with a TV rose from 9% in 1950 to 65% in 1955. TIME reported about 17 million sets in use when the freeze ended in 1952; other historical compilations show rapid continued growth through the early 1950s. (Census.gov)

That growth absolutely would have sharpened incentives for R&D and product differentiation, though with one caveat: manufacturers were not innovating only because of the post-freeze station map. They were already in a fast-moving race. But the thaw enlarged the market and made those investments more valuable. The FCC’s 1952 report itself observed that manufacturers were announcing UHF-capable transmitters, and that receiver manufacturers claimed they had solved key UHF reception problems. In other words, regulation did not just expand audiences; it directly stimulated engineering work on tuners, antennas, converters, higher power, and better reception.

And the next layer of competition was already visible: color television. The FCC and industry had spent part of the freeze era wrestling with color standards, and by late 1953 compatible color standards were approved. That meant the market, just enlarged by station expansion, was simultaneously becoming a battleground over black-and-white versus color, cabinet design, screen size, reception quality, and UHF readiness. (Duke Law Scholarship Repository)

Your intuition about broader economic and cultural impact is also sound. As television spread into many more communities, it became a stronger competitor not just to radio but to movies, nightlife, and other out-of-home entertainment. Scholars and historical overviews consistently describe the 1950s as the decade when television drew family entertainment into the home and contributed to declining movie attendance, even though television was not the only cause. PBS’s American Experience summarizes the period by noting that families were buying televisions at about five million a year and that TV became a powerful advertising medium inside the home. (PBS)

So the big picture is this:

The freeze of 1948–1952 was a technical-regulatory pause that turned into a foundational redesign of American television. It replaced a cramped, city-centered VHF system with a much larger national allocation plan that could reach small cities and rural regions. That widened the audience base, changed affiliate economics, boosted advertising and ratings infrastructure, favored scalable national programming over purely local or live big-city programming, and helped turn television-set manufacturing into a mass consumer arms race over reception, screen size, cabinetry, and eventually color.

One final nuance: the FCC created the possibility of a national TV explosion in 1952, but the market absorbed it over several years, not overnight. That is why the decision was so important. It was less a single boom day than the regulatory switch that allowed the 1950s television economy to become fully national.

If you’d like, I can also do a second pass focused just on how the post-freeze expansion changed the style of 1950s programming, including why live urban comedy gradually gave way to filmed series and more middle-American scheduling logic.

###

I grew up in rural Iowa in the 1960s, and my memory is that we could get VHF stations from about 90 miles away, but UHF stations were maybe 30 miles. And this was Iowa, which is pretty damn flat.

###

Yes—your memory is entirely believable.

In the analog era, VHF usually carried farther and more forgivingly than UHF, especially in rural country where you were depending on distant over-the-air signals. Engineering and historical sources note that, for comparable coverage, UHF generally needed much higher power than VHF, and UHF analog reception was also more vulnerable to weak-signal “snow,” receiver noise, and line-of-sight limitations. (Wikipedia)

So a rule of thumb like “VHF from ~90 miles, UHF from ~30 miles” is not crazy at all for 1960s Iowa. The exact numbers would vary with transmitter power, antenna height, your rooftop antenna, weather, and whether the station was on low VHF (2–6), high VHF (7–13), or UHF (14+). But the broad pattern is right: VHF bent and propagated better, while UHF was fussier and more local. (Wikipedia)

There was also a major receiver problem. In the 1950s and early 1960s, many TV sets were poor at receiving UHF or lacked UHF tuners altogether, which is one reason Congress passed the All-Channel Receiver Act of 1962 requiring new sets to receive both VHF and UHF. That law existed precisely because UHF stations were at a serious disadvantage. (Encyclopedia of TV & Radio)

So your Iowa recollection fits both the physics and the consumer technology of the period. Flat land helped, but it did not erase the fact that UHF was the weaker, touchier band in analog broadcasting. (Wikipedia)

A useful shorthand is:

VHF = longer reach, more forgiving.
UHF = more channels, but in the 1950s–60s, harder to receive well.

That handicap was one of the big reasons the FCC’s 1952 post-freeze expansion looked so impressive on paper but did not create equally strong stations everywhere in practice.

If you’d like, I can sketch how this VHF/UHF split helped shape the fate of DuMont, local independents, and educational TV in the 1950s.

###

In general, yes: UHF channel 14 would usually have been somewhat easier to receive than UHF channel 80, all else equal.

Why:

1. Lower UHF frequency propagates a bit better.
Channel 14 sits around 470–476 MHz, while channel 80 was around 866–872 MHz. Higher frequency means more path loss, and FCC engineering materials note that attenuation and receiver/cable losses rise with frequency. (RadioReference Wiki)

2. Receiver and feedline losses were worse at UHF, and worse still toward the top of UHF.
An FCC engineering order discussing TV planning factors notes higher assumed losses for UHF than for VHF, and specifically cites greater coax attenuation at UHF than at VHF. That same logic continues within UHF itself: the upper end of UHF tends to be a bit more lossy than the lower end. (FCC Docs)

3. But the difference was not night-and-day in the way VHF vs UHF was.
The really big handicap in the 1950s–60s was usually “UHF versus VHF,” not “channel 14 versus channel 80.” Within UHF, lower channels had an edge, but transmitter power, antenna height, local terrain, your rooftop antenna, and tuner quality could matter as much or more. (FCC Docs)

So the practical answer is:

Channel 14 was generally preferable to channel 80, but only modestly.
The brutal divide was still VHF good / UHF touchy—and then, inside UHF, lower UHF somewhat better than upper UHF.

A rough engineering shorthand is that channel 80 had on the order of 5 dB more free-space path loss than channel 14, other things equal. That is noticeable, but not a completely different universe. (RadioReference Wiki)
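That shorthand is easy to verify, since the free-space path-loss gap between two frequencies depends only on their ratio. A minimal sketch, assuming illustrative mid-channel analog frequencies of about 473 MHz for channel 14 and 869 MHz for channel 80 (the function name is mine):

```python
import math

def fspl_db(freq_mhz: float, dist_km: float) -> float:
    """Free-space path loss in dB (Friis form: f in MHz, d in km)."""
    return 20 * math.log10(dist_km) + 20 * math.log10(freq_mhz) + 32.44

# Illustrative mid-channel analog frequencies
ch14 = 473.0   # UHF channel 14: 470-476 MHz
ch80 = 869.0   # UHF channel 80: 866-872 MHz

dist = 50.0  # km; the gap between channels is independent of distance
delta = fspl_db(ch80, dist) - fspl_db(ch14, dist)
print(f"Extra free-space path loss, ch 80 vs ch 14: {delta:.1f} dB")
# → Extra free-space path loss, ch 80 vs ch 14: 5.3 dB
```

The 5.3 dB figure is only the free-space component; real-world differences in tuner noise, feedline loss, and antenna performance at the top of the band made channel 80's handicap somewhat larger in practice.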

And historically, this is one reason the FCC’s huge 1952 UHF allocation looked better on paper than in living rooms.

###

Yes—a very real difference.

Although channel 13 and channel 14 sound adjacent in numbering, they sit on different bands:

  • Channel 13 was the top of VHF.

  • Channel 14 was the bottom of UHF.

That boundary mattered a lot in the analog era.

Channel 13 occupied about 210–216 MHz, while channel 14 jumped all the way up to about 470–476 MHz. So the dial moved from one numbered channel to the next, but the actual radio frequency more than doubled.

In practical terms, that meant:

1. Different propagation.
VHF channel 13 generally traveled farther and was easier to receive at distance than UHF channel 14, all else equal. Higher-frequency UHF signals suffered more path loss and were more demanding of antennas, tuners, and feedlines.

2. Different equipment behavior.
Early TV sets often handled VHF much better than UHF. Many sets originally lacked UHF tuners altogether, which is why Congress later passed the All-Channel Receiver Act of 1962 requiring new sets to receive both bands.

3. Different business value.
A station on channel 13 was often in a much stronger competitive position than one on channel 14, even in the same market, because viewers could receive it more easily. That VHF/UHF divide was one of the biggest structural facts of 1950s–60s television.

So the numbering is misleading. 13 and 14 were adjacent on the dial, but not adjacent in engineering reality. It was like stepping across a cliff edge from one world to another.

If you’d like, I can lay out the whole old TV dial in a simple way—2–6 low VHF, 7–13 high VHF, 14+ UHF—and explain why that shaped the fortunes of local stations.
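The old dial layout described above can be captured in a small lookup. A sketch of the US analog channel plan (lower band edges per the classic allocation; the helper name is mine, and the gaps between bands held FM radio and other services):

```python
def ntsc_channel_lower_mhz(ch: int) -> int:
    """Lower band edge (MHz) of a US analog TV channel, 6 MHz per channel."""
    if 2 <= ch <= 4:       # low VHF, below the 72-76 MHz gap
        return 54 + (ch - 2) * 6
    if 5 <= ch <= 6:       # low VHF, above the gap
        return 76 + (ch - 5) * 6
    if 7 <= ch <= 13:      # high VHF
        return 174 + (ch - 7) * 6
    if 14 <= ch <= 83:     # UHF, as allocated in 1952
        return 470 + (ch - 14) * 6
    raise ValueError("not a 1950s-60s US TV channel")

print(ntsc_channel_lower_mhz(13))  # → 210  (channel 13: 210-216 MHz)
print(ntsc_channel_lower_mhz(14))  # → 470  (channel 14: 470-476 MHz)
```

The jump from 210 to 470 MHz between "adjacent" channels 13 and 14 is the cliff edge the dialog describes: more than a doubling of frequency at a single click of the dial.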

###

Some UHF stations may have been unique, but the ones I remember from my childhood were generally simply repeaters for city-based VHF stations, so there was no attempt to change the programming in any way.

###

Yes—that rings true, and it is an important distinction.

A lot of the UHF outlets that rural viewers experienced were not “full-blooded local stations” with distinct programming strategies. Many were essentially satellites, semi-satellites, or translators extending the reach of a parent station into places that could not get the main signal well. The FCC defines a TV translator as a station that receives the signal of another television station and simultaneously retransmits it on another channel, specifically to serve areas where direct reception is poor because of distance or terrain. (FCC Docs)

That means your childhood memory fits a real structural pattern: in many rural areas, the purpose of the UHF signal was coverage, not programming innovation. Translators were generally not supposed to originate regular programming; the FCC says they were limited mainly to emergency warnings and tiny amounts of public-service or support announcements, unlike low-power stations, which could originate more content. (FCC Docs)

So when historians say the post-1952 UHF buildout “opened” thousands of communities, that does not always mean those communities instantly got a genuinely local editorial voice. In many places, what they got was access to the same city-based programming, now relayed into smaller or more distant communities. (FCC Docs)

That also helps explain why the programming impact was uneven. In some markets, new UHF stations eventually became independents or local affiliates with their own identity. But in many others, especially thinly populated areas, the economic logic favored relay service rather than reinventing the schedule. The station was there to make the big-city station receivable, not to turn rural Iowa into a separate programming market. That was often the cheapest and most viable model. The persistent economic weakness of many UHF stations is reflected in later industry reporting showing that UHF outlets were much less profitable than VHF stations. (World Radio History)

So I would put it this way:

On paper, the FCC created many new “stations.” In lived experience, many viewers experienced them as signal extenders, not as new cultural worlds. That is fully consistent with your memory.

###

I understood that in the early 1950s coaxial cable was laid, for example from New York to Chicago, to carry network broadcasts. It sounds like in the early 1950s, they would’ve had to extend those cables to smaller places like Peoria and Dubuque.

###

Yes—that is basically right, with one important refinement:

They did not necessarily run a fat coaxial cable separately to every small city like Peoria or Dubuque. What they built was a growing interconnection network—first with coaxial cable on some major corridors, and increasingly with microwave relay links—and then added branches, relay hops, and local connections so more stations could be fed live network programs. By 1949 the AT&T television network covered most of the East Coast and Midwest; by 1951 it had become coast-to-coast; and by 1957 the whole country was linked more fully. (Early Television Museum)

So your New York-to-Chicago example is exactly the right mental picture for the backbone. The first full microwave route between New York and Chicago opened in 1950, and the coast-to-coast microwave “skyway” was inaugurated in 1951. But once the FCC thawed station licensing in 1952, the problem became: how do you get live network service from that backbone into many more secondary and tertiary markets? The answer was often not one giant direct cable, but a combination of microwave branches, local interconnection facilities, and sometimes other workarounds. (Wikipedia)

And not every smaller market got the same quality of live feed immediately. Some places remained outside the interconnection plant for a while and continued to get programs by kinescope or later by film delay rather than true live network feed. Historical sources note that kinescope was widely used before broad intercity interconnection and that even after 1951 many regions still relied on delayed distribution for some time. (Early Television Museum)

There was also a third method: off-air pickup. A smaller station could receive the signal of a bigger market station over the air and then rebroadcast it locally, which was a practical way to extend network programming into markets that were not yet fully wired or microwave-linked into the national system. That method was specifically used to distribute network programming to smaller markets outside the direct range of major centers. (Wikipedia)

So the cleanest way to say it is:

Yes, the national network plumbing had to spread outward after the freeze, but it spread as a hierarchy, not as 2,000 identical direct cable runs. There was a backbone, then regional and local branches, and in some cases delayed or off-air substitutes until full interconnection caught up. (Early Television Museum)

That is one reason the post-1952 expansion was so consequential. The FCC did not just authorize more stations; it effectively created demand for a much denser distribution infrastructure to feed those stations. In economic terms, the freeze-thaw was not only about transmitters and licenses. It also meant more business for the companies that supplied the hidden circulatory system of television—AT&T Long Lines, microwave equipment makers, tower builders, relay engineers, and station interconnection contractors. (Early Television Museum)

And in your two examples, Peoria was the kind of place much more likely to become part of normal network interconnection relatively early, while a place like Dubuque might for some period experience television more as a rebroadcast or edge-market service depending on what regional facilities were available. That difference between being fully “on the live net” and being fed more indirectly mattered a lot in the 1950s. (Wikipedia)

The hidden story of 1950s television is that behind every glamorous live comedy from New York there was an increasingly elaborate map of coax, microwave hops, relay towers, branch circuits, and patchwork local solutions. The freeze ended on paper in Washington, but its real implementation happened out on the landscape.

ChatGPT: Jack Benny at the Supreme Court? (Yes!)

Direct ChatGPT dialog.

##

“This concerns what seems to me a nearly lost episode in Hollywood legal history. In the early 1950s, The Jack Benny Program did a parody of the film Gaslight. The studio sued and, remarkably, won on copyright infringement grounds. I find that surprising. It is hard to imagine that anyone watching a Benny parody on television would treat it as a serious substitute for buying a ticket to see Gaslight in a theater. Discuss the episode and why the studio prevailed.”

##

The episode really is a notable legal oddity. Jack Benny’s TV show broadcast a “Gas Light” parody on January 27, 1952, and later prepared another filmed version with Barbara Stanwyck for rebroadcast. MGM/Loew’s objected, sued in June 1953, and obtained a restraining order; the remake was allowed to be filmed but was then impounded while the case proceeded. (Justia Law)

Why did the studio win? In essence, the court applied a much narrower, older view of parody and fair use than modern readers are used to. The federal district court held that a burlesque or parody was not automatically privileged, and that taking too much protected expression from Gaslight could still be infringement even if the result was comic. The Ninth Circuit affirmed, and the U.S. Supreme Court later affirmed by an equally divided Court, which left the lower-court ruling standing but created no nationwide precedent. (Justia Law)

Your instinct is basically the modern instinct: who would confuse a Benny spoof with the Ingrid Bergman film, or skip the movie because of it? Later courts and commentators made much the same criticism. A later Ninth Circuit opinion, discussing parody law, described Benny v. Loew’s as an early case that had been criticized and later overtaken by broader recognition of parody in fair-use doctrine. (Justia Law)

So the case matters less because it represents timeless wisdom than because it captures a transitional moment: television was new, copyright owners were nervous, and courts had not yet become comfortable with the idea that mockery can require imitation. In that sense, the Benny-Gaslight case is a fossil from an earlier copyright era—one in which the law could look at an obvious parody and still say, with a straight face, that the owner of the melodrama had the better claim. (Justia Law)

###

###

Longer format.

###

Yes — this really happened, and it is one of those odd mid-century Hollywood legal episodes that now feels almost surreal.

Jack Benny’s TV burlesque was called “Autolight” (often discussed as a parody of Gas Light / Gaslight). The underlying property began as Patrick Hamilton’s play, later adapted by MGM/Loew’s into the 1944 film. In the trial court, the judge found that Benny’s filmed sketch took substantial material from the movie version: not just the basic idea, but the locale, setting, characters, story points, development, and dialogue closely enough that it counted as infringement. The court also noted that Benny had earlier done a Gaslight radio burlesque, and that this earlier relationship with the material was part of the background to the dispute.

Your instinct is the instinct of a modern viewer: how could a 15-minute comedy spoof possibly substitute for seeing Ingrid Bergman and Charles Boyer in a full feature in a theater? Benny’s side argued essentially that point. They said no one could mistake the burlesque for the original, and that nobody watching Benny’s spoof would treat it as a substitute for buying a ticket to Gaslight. But the courts in the 1950s were not thinking in today’s “transformative use” terms. The district court and Ninth Circuit focused instead on the fact that the sketch copied a substantial part of protected expression and was made for defendants’ commercial benefit without permission.

That is the key to why the case feels so strange today. The Ninth Circuit did not say, in effect, “parody can never matter because market substitution is impossible.” Rather, it treated parody or burlesque as not enough of a legal defense when the taking was this close and this commercially exploited. The Copyright Office’s later fair-use summary of the case puts it bluntly: the court upheld the finding that using substantial portions of the work to create a burlesque was not fair use under the law as it then stood.

So why was Hollywood so exercised? Because this was not just about whether one TV sketch would cannibalize one movie ticket. The trial court explicitly observed that television was in active competition with motion pictures, and the judge took seriously the broader economic stakes around control of film-derived entertainment in a period when TV was rapidly rising and movie attendance was falling. In other words, the studios were not merely protecting Gaslight as one title; they were defending the idea that they, not television comics and sponsors, controlled filmed reuses and adaptations of their screen properties.

That larger industrial context helps explain why the lawsuit looks more aggressive than the actual sketch seems to warrant. A radio spoof had been tolerable; a filmed television spoof was more alarming because film and TV were becoming rival mass-entertainment systems. Later commentary on the episode has emphasized exactly that shift: as television ownership surged and movie attendance declined in the postwar years, the studios had reason to see even comic appropriations as part of a bigger competitive battle.

The case then took an especially tantalizing turn at the Supreme Court. The Court did not produce a real precedential opinion on the merits. Instead, it affirmed by an equally divided Court in 1958, which left the lower-court ruling in place without a substantive Supreme Court opinion. Modern legal scholarship, looking at the Justices’ papers, suggests the Court may actually have been poised to reverse and vindicate Benny’s parody argument before Justice Douglas recused himself. That is part of what gives the episode its “lost chapter” quality: copyright law may have come very close to embracing a more parody-friendly rule decades before modern fair-use doctrine did.

And here is the final irony: Benny eventually did get the sketch on the air. After years of litigation and delay, permission was secured, and “Autolight,” produced in 1953, finally aired on January 11, 1959. TIME, reviewing the long-delayed broadcast, essentially shrugged — after all the legal combat, it “hardly seemed worth all the fuss.” That reaction is almost the perfect coda to your point. The actual program did not look like a plausible replacement for the movie; the fight was bigger than that. It was about control, permissions, medium rivalry, and a pre-modern fair use doctrine that had not yet learned how to talk comfortably about parody.

The biggest modern takeaway is that this case now looks like a relic because later law moved in a different direction. The Copyright Office’s fair-use index specifically notes that the Benny holding was later undercut by Campbell v. Acuff-Rose in 1994, the famous case that gave parody a much more sympathetic place in fair-use analysis. So in a sense, your intuition is closer to modern copyright thinking than to 1950s Hollywood doctrine.

My own bottom line: this was less a rational fear that Benny’s audience would skip Gaslight tickets, and more a studio system border war over who had the right to commercially exploit and mock movie properties in the new television age. If you’d like, I can also give you a mini timeline of the whole affair — 1945 radio parody, 1952/53 filming, injunction, appeals, Supreme Court deadlock, and 1959 eventual broadcast.

###
Wait, are you saying that the episode was filmed in 1953 but never broadcast until 1959? That the litigant could have gotten a court order within the few days between filming and broadcast?

###

Yes — that is basically right.

The best-supported timeline is:

1) There was already a live TV parody in 1952.
CBS aired a live Benny Gas Light burlesque on January 27, 1952. Loew’s immediately protested and told CBS it considered the show infringing. (Justia Law)

2) CBS then prepared a filmed remake in 1953.
The district court says CBS commenced preparation about May 28, 1953. Loew’s learned of it on June 9, 1953, objected that same day, and filed suit on June 10, 1953. A temporary restraining order was then issued. By consent, filming was allowed to go forward, but the film was impounded by court order, preventing broadcast. The program had been planned for the 1953–54 season. (Justia Law)

3) So yes: filmed in 1953, not broadcast until 1959.
A later law review reconstruction, drawing on court records and other historical materials, states that “Autolight,” produced in 1953, finally aired on January 11, 1959, after Benny’s company obtained licenses in fall 1958. TIME also referred to it as Benny’s “1953 Autolight” when discussing the Supreme Court split.

So your instinct about the speed is correct: this was not a case where the show aired and then years later damages were sorted out. Rather, Loew’s moved almost immediately, within about a day of learning of the 1953 filmed version, and got an order that effectively kept the filmed episode off the air while the case proceeded. (Justia Law)

One nuance: the rights-holder did not necessarily swoop in “between filming and broadcast” in the sense of a last-minute weekend ambush. The court record suggests Loew’s acted during production planning / filming, not after the completed film was already sitting on the schedule for imminent broadcast. That is why the court could allow filming to finish but then impound the film before exhibition. (Justia Law)

So the astonishing fact remains: a TV comedy segment made in mid-1953 was shelved by litigation and only shown in January 1959.

The really strange Hollywood-history angle is that this means the public eventually saw, in 1959, a piece of TV comedy that had been created under 1953 assumptions about style, pacing, and topicality. That alone helps explain why later commentators thought the final broadcast hardly seemed worth the years of legal warfare.

###

I asked whether tie votes always result in a terse statement that the circuit court decision is upheld, with no commentary. That is correct. While the Justices could leave behind the rationale of the four votes on each side, they say nothing.

###

Douglas’s recusal. The official Supreme Court record only says that Justice Douglas “took no part.” But later archival research, especially in the Justices’ papers, found a March 11, 1958 memo from Douglas to Chief Justice Warren saying there was a possible conflict of interest and that, even if it seemed remote, he thought he should “lean over backwards” and withdraw. The modern reconstruction is that the conflict involved Douglas’s interest in a possible CBS television deal tied to his book An Almanac of Liberty. A law review study based on the papers argues that this was the real reason for his withdrawal and even suggests the recusal was probably unnecessary or at least mishandled.

Even more tantalizing: before withdrawing, Douglas had already been assigned the majority opinion in what appears to have been a 5–4 vote to reverse the Ninth Circuit. The surviving draft breaks off early, but it clearly starts in a direction more favorable to Benny/CBS and emphasizes that copyright is not absolute property fenced off like “Blackacre,” with public benefit primary. Then Douglas withdrew, the Court re-voted with only eight Justices, and it split 4–4, producing the equally divided affirmance.

As to whether the Court left behind an opinion for each of the four Justices on either side: no, not in the ordinary published sense. There were no published separate opinions in the U.S. Reports. The disposition was just the one-line affirmance by an equally divided Court. But in the archival materials, scholars have found at least:
(1) conference notes showing where several Justices stood before and after the screenings,
(2) assignment records showing Black assigned the majority opinion to Douglas, and
(3) Douglas’s unfinished draft opinion. Those materials let historians reconstruct the likely alignments, but they do not amount to a full surviving set of four majority opinions and four dissents.

So the answer is: we do not have a set of formal hidden opinions from both camps, but we do have enough private papers to see that the Court likely leaned toward reversal before Douglas stepped aside. That is what makes the case so fascinating.

The likely lineup, based on the archival reconstruction in that article, was: initially Black, Douglas, Brennan, Burton, and Clark for reversal, and Warren and Whittaker for affirmance, with Frankfurter and Harlan initially withholding judgment until they viewed both works; after the screening, Frankfurter and Harlan apparently moved to affirm, creating a 5–4 reversal before Douglas withdrew.

###

###

When the Court actually disposes of a case as a 4–4 tie, the public-facing result is typically just a very short per curiam statement such as: “The judgment is affirmed by an equally divided Court.” That is exactly what happened in Benny, and it is also what happened in more recent tie cases like Friedrichs v. California Teachers Association (2016), United States v. Texas (2016), Washington v. United States (2018), LeDure v. Union Pacific (2022), and the Oklahoma charter-school case in 2025.

So the answer to your practical question is: the Court does not ordinarily publish a pair of rival 10- or 20-page opinions when it ties. The whole point of the tie disposition is that there is no opinion of the Court and no national rule of law announced; the lower-court judgment simply remains in place for that case.

###

That said, there are two important nuances.

First, the Court is not forced to end with a tie. It can instead choose to reargue the case later, often after the Court returns to nine members. That is an available institutional option, though not one the Court always uses.

Second, although the published disposition in a tie case is usually just that one-line affirmance, historians may later find draft opinions, conference notes, or circulation memos in the Justices’ papers. That is what happened in the Benny case: the public saw only the one-line tie affirmance, but archival research later uncovered Douglas’s recusal memo and an unfinished draft opinion. Those papers help reconstruct what may have happened internally, but they are not part of the Court’s formal precedential output.

So, in the ordinary public record, yes: a tied Supreme Court usually leaves behind only a terse one-sentence affirmance, not paired explanatory opinions.

One related wrinkle: if the Court lacks a quorum, federal statute can require an affirmance with the same effect as an equally divided Court, and the Court likewise issues a short order rather than full merits opinions.

###


Freud: The Interpretation of Memes (2026)

 

Review of The Interpretation of Memes, by Sigm. Freud
American Journal of Psychoanalysis

Dr. Freud’s new volume, The Interpretation of Memes, will surprise readers who know him chiefly through his earlier and already controversial studies of dreams, slips, symptoms, and the subterranean life of wishes. In this latest work, he extends his method to that most fugitive and modern of psychic productions: the meme. One had thought the joke postcard, the caricature, and the café anecdote sufficient trivialities for science. Dr. Freud, with characteristic boldness, has decided otherwise.

His central contention is that the meme, however foolish its outward appearance, is not mentally cheap. It is a compact formation, economically constructed, in which displacement, condensation, symbolic substitution, and disguised wish-fulfillment perform their acrobatics at great speed and with remarkable thrift. The dream required a night; the meme requires only a caption.

The reader will encounter many memorable analyses. A photograph of a cat paired with the words “I can has…” is treated not as nonsense but as a compromise formation between infantile desire and social permission. The endlessly recurring train entering the tunnel is handled with the restraint one expects from Dr. Freud, though the reader may feel that the tunnel, like certain cigars, is allowed only the briefest hope of innocence. Particularly ingenious is his chapter on repeated images of distracted boyfriends, pointing, in Dr. Freud’s account, not merely to indecision but to the ego’s humiliating inability to keep faith with its announced attachments.



As in The Interpretation of Dreams, the style is at once methodical, argumentative, and oddly comic, especially when Dr. Freud patiently dissects absurd specimens that lesser minds would dismiss with a laugh. He remains a scholar of suspicion, unwilling to grant even the crudest visual jest its alibi of mere amusement. The meme, he insists, is not unserious because it is ridiculous; it is ridiculous because it has approached something serious too closely.

This book will not, one suspects, persuade everyone. Some will object that Dr. Freud has over-endowed the human mind, finding Oedipus in every reaction image and repression in every poorly cropped photograph. Yet one closes the volume with the uncomfortable conviction that he is right at least often enough to spoil the reader’s future innocence. After this, no one will glance at a meme without wondering what, precisely, has been smuggled past the censor.



The Prisoner (1967) Revisited: The Star They Cut

 

ROVER: THE STAR THEY CUT
A three-part Netflix event

There are beloved cult series, and then there is The Prisoner: the kind of television object that does not merely attract fans, but breeds cartographers, codebreakers, and retired dentists with filing cabinets full of theories. For decades, viewers have returned to its 17 episodes, its Village, its badges, its bubbles of paranoia, its decorative absurdity, and above all its teasing conviction that some final master-key must exist. Why does the series feel so complete in mood yet so incomplete in meaning? Why does one come away not with the sense of having missed an answer, but of having missed a larger program that once surrounded the answer like a missing building around a surviving doorway?

Into this breach steps Professor Chester Featherstone, media studies chair at the University of New Hampshire, a man with elbow patches, a ruinous filing system, and the calm suicidal confidence required to tell Prisoner fandom that it has been looking at the wrong protagonist for nearly sixty years. Featherstone’s thesis, delivered in the first ten minutes with the kind of grave, cello-backed certainty now mandatory in prestige documentaries, is simple: Patrick McGoohan was not originally the star of The Prisoner. Rover was. The giant white inflated ball. The pale, wobbling, idiotic, extraterrestrial beach bladder of social control. According to Featherstone, Rover was not a recurring device but the emotional and narrative heart of the original production.

The two-hour documentary unfolds across two episodes. In the first, talking heads—television historians, surviving crew relatives, one unhelpfully ecstatic balloon engineer from Surrey, and several fan-scholars who visibly resent Featherstone—revisit the familiar lore. Yet each clip is now recut around Rover’s strange prominence. What had once seemed comic punctuation begins, through the manipulations of expert testimony and ominous editing, to look suspiciously like character business. Why does Rover enter with such assurance? Why does the camera sometimes linger on its retreat with what can only be called melancholy? Why, in one freeze-frame Featherstone insists on revisiting seventeen times, does McGoohan appear to wait for Rover’s cue before turning?

By episode two, the theory becomes gloriously deranged. Featherstone produces call sheets suggesting 24 episodes, each roughly 80 minutes, before “network interventions” reduced the series to a more saleable 17. Studio executives, we are told, panicked at the difficulty of launching an action-philosophical series whose lead was “a mute pneumatic orb.” 

They ordered emergency post-production surgery, promoting McGoohan’s Number 6 into a more legible and masculine center and emphasizing what Featherstone calls “the penumbra of sexual voltage” between him and the various women in the Village—women who now appear, in this reading, less as characters than as hastily inserted decoys to distract from the missing Rover story arc. A former BBC accountant, seen in archive footage from 1989, says only, “There was concern the balloon read too strongly.”

The pièce de résistance is the recovered material: a few seconds here, a minute there, all supposedly from the lost Rover cut. In one fragment, Rover appears alone on the Village Green at dusk while an unused orchestral cue swells almost heartbreakingly. In another, Number 2 seems to address it directly: “You’ve always known why he resigned.” The effect is ridiculous, persuasive, and somehow moving—the ideal Netflix combination.

Then comes the third installment: a one-hour dramatic supplement presented without explanation as an actual missing 1967 episode. Grainy, mannered, fully committed, it is titled something like “Many Happy Returns of the White Sphere.” Number 6 infiltrates what may be the BBC, or MI5, or a set dressed by men who had heard of both only in passing. Files are rifled. Corridors are paced. Telephones ring with no one speaking. Everywhere he finds evidence of tampering: splicing notes, memos about “humanizing the inflatable asset,” publicity stills cropped to remove Rover from the foreground. Finally, in a locked vault, he uncovers a canister labeled ROVER—PRINCIPAL CLOSEUPS.

He turns. Rover is already there.

No explanation follows. None is needed. The great satire of the pitch is that cult television always tempts us to believe not merely that there is a secret, but that the secret, once found, will be both ludicrous and perfect. Featherstone’s theory is insane. It is also, in the way of all durable fan theories, a little too satisfying. Suddenly the old series’ nagging void has a shape: round, white, elastic, and faintly tragic.

Netflix should buy this immediately, before somebody at Criterion gets there first.

###

###


Friday, April 3, 2026

New Thinking on Parkinson's Disease: The Somatocognitive Framework

 In 2006, Parkinson's expert William Langston published his article, The Parkinson's Complex: Parkinsonism Is Just the Tip of the Iceberg, framing PD as a diverse brain (and autonomic) disorder.

Three new articles expand on today's "somatocognitive" framework.

Medscape here. Lead article in Nature here. News article in Nature here.

###
AI CORNER
###

Here is a standalone blog-style article.

The Somato-Cognitive Theory of Parkinson’s Disease

For most of the modern era, Parkinson’s disease (PD) has been framed primarily as a movement disorder: tremor, rigidity, bradykinesia, gait difficulty, and impaired movement initiation. That description is not wrong, but it has long been incomplete. PD also includes constipation, blood pressure dysregulation, REM sleep behavior disorder, apathy, executive slowing, and other non-motor symptoms, many of which can appear years before the classic motor syndrome. That mismatch has always created a conceptual problem: why should a “movement disorder” so consistently involve autonomic, sleep, motivational, and cognitive domains?

A new answer has emerged from recent network neuroimaging. In this view, PD is not best understood as damage to isolated limb-movement regions, but as dysfunction of a broader integrative circuit called the somato-cognitive action network, or SCAN. The core claim is that PD is a disorder of the neural machinery that links intention, arousal, autonomic state, and whole-body action. In this framing, the motor syndrome remains central, but it is no longer the whole story; it becomes the most visible expression of a wider circuit disorder.

What is SCAN?

The SCAN concept grew out of a revision of the classic Penfield picture of motor cortex. For decades, the cortical motor strip was taught as a fairly orderly map of body-part “effectors”: foot, hand, mouth, and so on. More recent precision functional imaging suggested that this account was incomplete. Between the classic effector zones are inter-effector regions that do not seem to represent a single body part. Instead, they connect with systems involved in goal-directed behavior, arousal, autonomic control, and whole-body coordination. These inter-effector regions, taken together, form the SCAN.

That matters because PD is conspicuously not an effector-specific disease. It does not merely weaken one limb program or one focal motor output. It alters gait, posture, initiation, vigor, coordination, autonomic tone, and the tempo of voluntary behavior. The 2026 Nature paper explicitly argues that the SCAN is well positioned to explain this broader phenotype because it is thought to coordinate arousal, organ physiology, whole-body motor plans, and behavioral motivation.

The central mechanistic claim: hyperconnectivity

The strongest empirical claim in the new model is not merely anatomical but physiological. In a large multimodal dataset of 863 individuals across 11 cohorts, investigators found that the major subcortical nodes implicated in PD — including the substantia nigra, subthalamic nucleus, globus pallidus, ventral intermediate/centromedian thalamus, GPe, and putamen — are functionally connected more strongly to the SCAN than to the classic effector-specific motor regions. In PD, the connectivity between SCAN and these subcortical nodes was abnormally elevated, a pattern described as hyperconnectivity.

The argument is therefore not simply that the SCAN exists and PD patients have symptoms consistent with it. It is that PD shows a specific network abnormality: excessive SCAN–subcortical coupling. Importantly, the study reports that this hyperconnectivity was specific to the SCAN rather than to effector motor cortex, and that it correlated with symptom severity. The Nature research briefing also notes that this abnormal coupling was not seen in the same way in comparator disorders such as dystonia and essential tremor.

This is the pivot from description to theory. The somato-cognitive view says that PD symptoms arise because the circuit that should flexibly translate motivation and bodily state into action becomes pathologically over-coupled with subcortical nodes, effectively jamming normal voluntary control. That formulation is still a model, not a settled fact, but it is more specific than the older broad statement that PD “involves more than dopamine.”
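For readers unfamiliar with the term, “functional connectivity” in studies like this is, at its core, a correlation between the activity time series of two brain regions: regions whose signals rise and fall together are called strongly connected, and “hyperconnectivity” means that correlation is abnormally high. The sketch below is a toy illustration of that core idea only, not the study’s actual pipeline; the signal names (`scan_like`, `subcortical_like`, `effector_like`) and the synthetic data are invented for this example.

```python
import numpy as np

def functional_connectivity(ts_a, ts_b):
    """Pearson correlation between two regional activity time series.

    This is the basic quantity behind 'functional connectivity':
    how strongly two regions' signals fluctuate together over time.
    """
    a = np.asarray(ts_a, dtype=float)
    b = np.asarray(ts_b, dtype=float)
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy synthetic signals (not real data): a shared slow oscillation plus noise.
rng = np.random.default_rng(0)
t = np.arange(200)
driver = np.sin(2 * np.pi * t / 25)

# "Hyperconnected" pair: both regions strongly follow the shared driver.
scan_like = driver + 0.3 * rng.standard_normal(t.size)
subcortical_like = driver + 0.3 * rng.standard_normal(t.size)

# Weakly coupled region: mostly independent noise.
effector_like = 0.2 * driver + rng.standard_normal(t.size)

fc_strong = functional_connectivity(scan_like, subcortical_like)
fc_weak = functional_connectivity(effector_like, subcortical_like)
# The tightly driven pair yields a correlation near 1; the weakly
# coupled pair yields a correlation near 0.
print(fc_strong, fc_weak)
```

In the study’s framing, the analogous comparison is between SCAN–subcortical coupling and effector-cortex–subcortical coupling, with the PD group showing the elevated value.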

Why this theory feels different from older “PD is more than motor” arguments

A key point is that the general idea that PD extends beyond movement is not new. Langston’s 2006 essay, The Parkinson’s Complex: Parkinsonism Is Just the Tip of the Iceberg, already argued that parkinsonism represented only the clinically obvious portion of a broader disease process. He emphasized early and non-motor features such as constipation, hyposmia, REM sleep behavior disorder, daytime sleepiness, and autonomic pathology, and argued that these were not merely “risk markers” or “preclinical” hints but parts of the disease itself. He drew on neuropathology showing Lewy-related involvement in lower brainstem, olfactory structures, sympathetic ganglia, and myenteric plexus, and he criticized an overly nigrostriatal view of PD.

In that sense, the somato-cognitive theory is partly a modern network realization of an older clinical intuition. Langston’s article was already saying, in effect, that the textbook motor syndrome badly under-described the real disease. What is new in 2026 is that this older “iceberg” idea is no longer only phenomenological or pathological. It is being reformulated as a specific systems-neuroscience model with candidate nodes, measurable connectivity, and therapeutic implications.

Why the theory is attractive

The SCAN framework is attractive for several reasons.

First, it offers a single organizing principle for the coexistence of motor and non-motor symptoms. If one network participates in whole-body action, autonomic regulation, arousal, and behavioral motivation, then constipation, orthostatic symptoms, REM sleep behavior disorder, gait freezing, bradykinesia, and apathy need not look like unrelated add-ons. They can be read as different surfaces of one disturbed action-regulation system.

Second, it helps explain why such different treatments can all work. Levodopa, deep brain stimulation, focused ultrasound, and TMS are mechanistically quite different interventions. Yet the 2026 work argues that effective versions of all of them share a common network effect: they reduce SCAN–subcortical hyperconnectivity. That is a powerful unifying claim. It suggests that one may be seeing not four unrelated therapeutic stories, but several ways of reaching the same dysfunctional circuit.

Third, the model may improve neuromodulation targeting. The paper reports that approved DBS targets were all more strongly connected to the SCAN than to effector motor cortex, that lesion proximity to a thalamic SCAN “sweet spot” predicted better focused-ultrasound outcome, and that in a TMS study, targeting the SCAN rather than effector motor regions produced roughly double the improvement on a standard motor scale.

The deeper conceptual shift: from movement to action

The most interesting intellectual move in this theory is linguistic as much as anatomical. The investigators and commentators increasingly describe PD as an action disorder, not merely a movement disorder. That sounds semantic, but it is actually substantive. “Movement” implies output. “Action” implies the coupling of goal, internal state, autonomic readiness, cognitive set, and motor implementation. In ordinary clinical life, PD already behaves more like an action disorder than a mere movement disorder: performance worsens under cognitive load, improves with external cueing or music, and can sometimes transiently normalize in emergencies. Those phenomena fit awkwardly with a simple effector-deficit model but fit more naturally with a disorder of action selection and implementation in context.

This may also explain why the theory resonates with longstanding clinical observations without claiming that everything in PD is “cognitive.” The phrase somato-cognitive is important because it does not collapse the disease into a psychology-of-motivation narrative. It points instead to a circuit where bodily state and cognitive intent are integrated. In other words, the claim is not that PD is secretly a cognitive disorder masquerading as a motor one. The claim is that action itself is built from both somatic and cognitive ingredients, and the affected network sits precisely at that junction.

What, exactly, is new in the 2026 work?

Three things stand out.

One is scale. The Nature study did not rely on a single small imaging cohort; it combined multiple cohorts and multiple intervention modalities, creating a broader evidentiary base than many prior PD network papers.

Another is convergence across therapies. It is one thing to show a disease-associated imaging abnormality. It is more persuasive to show that when different effective treatments work, they all move that abnormality in the same direction. The SCAN hypothesis gains force because levodopa, DBS, TMS, and MR-guided focused ultrasound were all interpreted through the same circuit lens.

The third is translation. The paper does not stop at explanation; it proposes that functionally defined SCAN targets might improve current therapies and perhaps support new minimally invasive cortical stimulation strategies. It even raises the possibility of personalized cortical or epidural/subdural targeting after SCAN mapping.

How this fits with older pathology-based models

The somato-cognitive theory does not replace the classic neuropathology of PD. It sits on top of it. The 2026 paper still begins from the accepted hallmark of dopaminergic neuron degeneration in the substantia nigra and dysfunction of the broader cortico–basal ganglia–thalamic circuit. Its innovation is to say that the cortical component of that circuit is better described by SCAN than by generic “primary motor cortex” or by a foot-hand-mouth effector map.

Likewise, it does not negate Braak-style or Langston-style accounts of early involvement of olfactory, autonomic, and lower brainstem structures. Rather, it offers a new cortical-systems account of why the syndrome looks so distributed in life. Langston’s 2006 argument that PD was a far broader disorder than visible parkinsonism now finds a plausible network companion: the brain system coordinating body physiology with goal-directed action may be involved more centrally than older models assumed.

The theory’s current limits

For all its appeal, the somato-cognitive theory is not settled doctrine.

A first limitation is that the clinical non-motor story remains more inferred than demonstrated. The research briefing itself says that investigation of sleep, autonomic, cognitive, and affective symptoms remains limited and is an important next step. In other words, the theory is highly plausible for these domains, but the current evidence is strongest for the motor and neuromodulation findings.

A second limitation is reproducibility and generalizability. Outside experts quoted in Medscape called the work promising but early, emphasizing the need for replication across sites, scanners, acquisition pipelines, and academic centers.

A third limitation is biomarker specificity. The Medscape discussion notes skepticism that SCAN hyperconnectivity can yet cleanly distinguish PD from all competing disorders in real-world practice. Imaging biomarkers in movement disorders face a very high bar because diagnostic overlap is common, especially early in disease.

A fourth is that even if the imaging is robust, it is primarily a model of circuit dysfunction and symptomatic treatment, not yet a model of disease modification. Reducing hyperconnectivity may improve symptoms without necessarily arresting alpha-synuclein biology or neurodegeneration.

And a fifth is methodological: the TMS result, while intriguing, reportedly lacked a sham control arm, which weakens causal confidence.

So how should the theory be judged right now?

The most balanced reading is that the somato-cognitive theory is not a replacement for everything known about PD, but it may be the most coherent recent attempt to unify several stubborn facts:

  • PD is clinically broader than a motor syndrome.

  • The cortex involved in PD may not be best described by classic effector motor maps.

  • Different successful treatments may converge on a common dysfunctional network.

  • Future neuromodulation may improve if targeted to that network rather than to generic motor cortex.

That does not yet make SCAN the final answer to PD. But it does give the field a sharper hypothesis than the old fallback phrase that “Parkinson’s has motor and non-motor symptoms.” The new theory says more: those features may belong together because the disease disrupts a circuit whose normal job is to join somatic state, cognitive set, and whole-body action.

Bottom line

Twenty years ago, Langston argued that the field had been looking at PD through the “somewhat myopic lens” of visible parkinsonism and missing the much larger disease iceberg beneath it. The 2026 SCAN work can be read as a sophisticated update of that intuition. The older literature established that PD was broader. The new literature tries to show where and how that breadth is organized in brain circuitry.

Whether this theory becomes foundational will depend on replication, better linkage to non-motor syndromes, and proof that SCAN-guided interventions improve outcomes in larger controlled trials. But as a conceptual advance, it is already important. It shifts PD from being thought of mainly as a disorder of movement output to being considered, more fundamentally, a disorder of embodied action.
