Sunday, July 20, 2025

On AI: Concurrent NYT and LAT Essays by Writers on AI's Impact

Sunday, July 20, 2025

Hollywood's being reshaped by generative AI. What does that mean for screenwriters?

Josh Rottenberg

https://www.latimes.com/entertainment-arts/movies/story/2025-07-17/hollywood-tomorrow-jobs-future-screenwriting-ai


###

I teach creative writing. This is what AI is doing to our students.

Meghan O'Rourke

https://www.nytimes.com/2025/07/18/opinion/ai-chatgpt-school.html


###

On July 20, 2025, the LA Times and NY Times published concurrent major Sunday essays on AI and its uses and impact on writers. You are an expert on AI, media, and journalism; assess these two pieces critically.

Summarize each, and compare them to each other. Then discuss weaknesses, implications, and the most critical takeaways from the articles. Finally, create a sidebar of ten surprising, important, or unexpected conclusions from the two authors.

###

Summary of the Two Essays

New York Times (Meghan O'Rourke): "I Teach Creative Writing. This Is What A.I. Is Doing to Students."
O’Rourke’s essay reflects a deeply personal and pedagogical concern about AI’s creeping influence on student writing, creativity, and cognition. She shares her own experiences experimenting with AI tools like ChatGPT, noting both their utility (in tasks like memos, planning, or basic writing assistance) and their subtle, unsettling impacts on her thinking and self-perception. 

O’Rourke is particularly focused on how students are tempted to surrender the difficult, cognitively enriching process of writing to AI, potentially short-circuiting the development of critical thinking and creativity. She observes a generational shift: students increasingly see AI as a collaborator rather than a tool, blurring the line between authorship and automation. She warns that this shift threatens the core humanist values of education—attention, care, and creativity—especially in writing disciplines.

Los Angeles Times (Josh Rottenberg): "Hollywood’s Being Reshaped by Generative AI. What Does That Mean for Screenwriters?"
Rottenberg’s article surveys Hollywood’s divided reaction to AI in screenwriting. He interviews key voices: Billy Ray (staunchly anti-AI), Paul Schrader (enthusiastically experimenting with it), and others like Bong Joon Ho (skeptical but intrigued). 

The piece paints a picture of an industry simultaneously experimenting with and fearing AI. Writers fear displacement, degradation of storytelling, and the erosion of credit and ownership. Yet AI is already present, from drafting scripts to streamlining pitch materials. Institutions like USC are cautiously exploring AI’s educational uses but shield screenwriting from its influence. The guild protections gained in the 2023 strike are framed as insufficient against rapidly evolving technology. AI's limitations—its inability to create meaningful human art—are contrasted with its undeniable utility for formulaic, mechanical tasks.


Comparison

Tone:
O’Rourke’s essay is introspective, academic, and literary; Rottenberg’s is journalistic and industry-focused, drawing on a broader range of voices.

Scope:
O’Rourke centers on education and cognition, particularly writing and humanities in academia. Rottenberg’s focus is Hollywood, labor, and the creative industries’ economics.

Shared Concerns:

  • Erosion of human creativity and authorship.

  • AI’s infiltration of creative processes.

  • The risk of flattening or homogenizing outputs.

  • Anxieties over long-term impacts on institutions (education or Hollywood).

Differences:
O’Rourke centers on the inner life of students and writers; Rottenberg centers on industry, labor rights, and professional futures. O’Rourke warns of the psychological impacts of “thinking with AI.” Rottenberg emphasizes professional displacement and artistic dilution.


Weaknesses

O’Rourke:

  • Occasionally veers into the anecdotal without sufficient engagement with broader data beyond one MIT study.

  • Risks romanticizing pre-AI education without fully considering future adaptations.

Rottenberg:

  • Leans heavily on star voices (Ray, Schrader) without probing deeper into less-publicized industry experiments or data.

  • Relies on familiar dichotomies (tech optimism vs. Luddism) without proposing deeper industry reforms beyond the WGA’s efforts.


Implications

  • Both essays highlight the erosion of human authorship norms.

  • Academia and Hollywood may normalize AI use despite clear harms to creativity and labor.

  • AI will accelerate a bifurcation: writers who write, and prompt engineers who co-author with AI.

  • Creative authority and authenticity may become contested commodities, both legally and culturally.

  • Both fields must urgently define boundaries around AI before norms calcify.


Critical Takeaways

  1. AI’s greatest danger isn’t writing badly—it’s writing “well enough.”

  2. Students and writers are already surrendering cognitive and creative labor to AI incrementally, not overnight.

  3. Hollywood’s guardrails are weak, and studios are quietly advancing AI despite public reassurances.

  4. AI creates outputs that mimic humanity but lack human interiority; that distinction risks eroding over time.

  5. Labor protections exist but lag behind the technology’s capabilities and adoption.

  6. Creative industries (education and entertainment alike) face a future where AI co-authorship becomes normalized, even desired.

  7. Ownership of ideas, style, and even “voice” is increasingly at risk of being co-opted by AI models.

  8. Writers’ guilds and universities are diverging in how fast they are integrating AI—and how much they admit it.

  9. There’s a cultural risk of flattening creativity into bland, optimized outputs masquerading as originality.

  10. AI has already entered writers’ workflows more than they admit; the taboo may soon vanish, but clarity about its limits is overdue.