Matthew L. Call publishes a WSJ article and an academic article on why AI will tend to supercharge superstars: they have an extra advantage in deep domain understanding and expertise, and can therefore direct AI in the best ways, often unsuspected by others, as well as recognize exactly what is right (or wrong) with AI output.
###
Professor Call argues that superstars will rocket even further ahead when AI is available. Apply his thinking (from both articles) to a context he does not discuss: law firm superstars. Will they also rocket ahead when AI is available? Assume the law firm superstars benefit mostly from hard skills of some sort (legal analysis, brief writing, argumentation, finding liabilities in the other side's arguments, finding unexpected support for one's own) rather than soft skills like bullying or schmoozing sales. For this answer, you do not need to address other topics like how to help midrank employees use AI.
##
Applying Matthew Call’s “AI-specific Matthew Effect” to law-firm superstars strongly suggests that they, too, will rocket even further ahead once advanced generative-AI tools become routine in legal work. The mechanisms Call describes (**domain expertise, systematic work habits, early experimentation, and status-based attribution**) map almost perfectly onto the realities of elite lawyers who already dominate on the basis of technical excellence and intellectual leverage rather than charm or salesmanship.
1. Domain expertise as the engine of amplification
Call argues that AI is not a great equalizer but an amplifier of existing expertise. In a law-firm setting, the most analytically gifted attorneys possess deep doctrinal knowledge and a mental library of precedent, analogy, and argument structure. When such lawyers use AI for legal-research synthesis, brief drafting, or precedent comparison, their domain mastery allows them to:
- Frame far more precise prompts (e.g., “identify Second Circuit cases limiting fiduciary liability under ERISA post-2018”) rather than vague requests (see the sketch at the end of this section).
- Recognize subtle misinterpretations or hallucinated citations and rapidly correct them.
- Combine AI outputs with nuanced legal judgment about which lines of reasoning will persuade a specific court.
As Call puts it, experts “extract fundamentally more value (and catch more mistakes)” from AI than generalists. For a top litigator, that means AI becomes a force multiplier—accelerating the mechanics of research while leaving the lawyer free to refine theory and rhetoric.
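To make the prompt-framing point concrete, here is a minimal, purely illustrative Python sketch. The `send_to_model` function is a hypothetical stand-in for whatever generative-AI research tool a firm licenses; it is not a real product’s API, and the prompts are only examples of the specificity an expert can supply.

```python
# Minimal sketch: how domain expertise shapes prompt framing.
# send_to_model() is a placeholder for a generative-AI legal research tool,
# not a real API.

def send_to_model(prompt: str) -> str:
    """Placeholder for a call to a generative-AI legal research tool."""
    return f"[model response to: {prompt[:60]}...]"

# A generalist's vague request.
vague_prompt = "Find cases about fiduciary liability."

# An expert's framing: jurisdiction, doctrine, time window, and the intended
# use of the answer are all specified up front.
expert_prompt = (
    "Identify Second Circuit cases decided after 2018 that limit "
    "fiduciary liability under ERISA. For each case, give the citation, "
    "the limiting principle, and whether it arose on a motion to dismiss. "
    "Flag any case whose holding depends on plan-document language."
)

for label, prompt in [("vague", vague_prompt), ("expert", expert_prompt)]:
    print(label, "->", send_to_model(prompt))
```

The contrast is the point: the expert’s prompt encodes doctrinal knowledge the model cannot supply on its own, which is exactly the amplification mechanism Call describes.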
2. Structured cognition and metacognition
Call observes that stars tend to work more systematically and maintain well-organized cognitive and informational structures. These traits are precisely those that large-language-model tools reward: clear input, modular workstreams, and iterative refinement. A superstar appellate lawyer who already maintains meticulously indexed briefs, argument templates, and citation databases will use AI to extend that structure across thousands of pages of case law in seconds—producing not incremental but multiplicative performance gains.
3. Early adoption and experimentation
Because of their autonomy and confidence, top partners will likely experiment with AI well before firm-wide compliance policies settle. They will build customized prompt libraries, train private models on past filings, and test new argument-generation tools. As in Call’s model, this first-mover experimentation creates workflows that others later imitate but rarely match, locking in an enduring productivity gap.
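As a hedged illustration of what such a “customized prompt library” might look like in practice, the sketch below stores parameterized templates a partner could reuse across matters. The template names and wording are hypothetical, not drawn from Call’s articles.

```python
# Illustrative sketch of a personal "prompt library": reusable, parameterized
# templates accumulated through early experimentation. Names and wording are
# hypothetical examples.

PROMPT_LIBRARY = {
    "circuit_split": (
        "Summarize any circuit split on {issue}, citing the leading case "
        "on each side and the year it was decided."
    ),
    "weakness_scan": (
        "List the three weakest factual assumptions in the following brief "
        "excerpt and suggest the record evidence an opponent would cite:\n{excerpt}"
    ),
}

def build_prompt(name: str, **fields: str) -> str:
    """Fill a stored template with the specifics of the matter at hand."""
    return PROMPT_LIBRARY[name].format(**fields)

print(build_prompt("circuit_split", issue="ERISA fiduciary-liability limits"))
```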
4. Attribution and the legal version of the “AI-specific Matthew Effect”
Call emphasizes that when AI assistance is invisible, observers attribute success to the individual’s talent if the person already enjoys high status, and to the tool if not. The same psychology governs law firms and courts. When a renowned trial lawyer files a brilliant motion aided by AI research, colleagues and judges will credit insight, not software. A junior associate producing comparable quality might face suspicion that “the computer wrote it.” Thus, status-based attribution bias reinforces hierarchy even when the technical assistance is identical.
5. Cumulative advantage in the legal-services marketplace
Over time these mechanisms create a self-reinforcing cycle:
- Superstars deliver more work at higher quality.
- Their reputations attract premium clients and precedent-setting cases.
- Those experiences provide fresh data with which to train private AI assistants.
- The gap in knowledge, clientele, and billing widens further.
In economic terms, AI transforms law into an even more pronounced “winner-take-most” market, analogous to the superstar markets Call and others describe in consulting or technology.
6. Bottom line
Yes—law-firm superstars will soar highest in the AI era, not because AI replaces legal reasoning but because it magnifies the productivity and credibility of those who already reason best. Their mastery of doctrine, structure, and judgment allows them to harness AI’s speed and scale without surrendering rigor or originality. As Call would predict, the profession’s top performers will convert generative-AI capability into exponential—not incremental—gains in output, reputation, and market power.
###
Are there any comparables from past technologies? For a good example: did some law superpartners actually like to see the West legal archive or patent-search materials firsthand, rather than leaving them for associates to summarize? That has some parallel to the superpartner (or other superstar) using AI hands-on rather than (as one other than Call might suppose) leaving it to the second-year assistants.
##
That’s a very perceptive analogy—and exactly the kind of historical parallel Matthew Call’s argument invites. His thesis about AI amplifying existing expertise has several close precedents in earlier technological shifts, especially in how top professionals personally used new tools rather than delegating them.
1. Computerized legal research (Lexis, Westlaw, and CD-ROM databases)
When electronic databases appeared in the late 1970s–80s, most partners initially saw them as tools for junior associates or librarians. Yet a subset of high-performing partners—often appellate or patent specialists—insisted on running their own searches.
- They realized that the true value was not just retrieval speed but interpretive control: seeing the full context, related dicta, and procedural posture instead of filtered excerpts.
- Those who mastered keyword logic and Boolean connectors could uncover overlooked precedents, analogies, or contradictions that others missed (see the sketch below).
- Within a few years these “hands-on” lawyers routinely produced briefs with more original citations and nuanced argumentation, strengthening their reputations for depth and precision.
That pattern mirrors what Call describes: stars leveraging a new technology’s meta-skills—here, search strategy and contextual interpretation—rather than delegating it.
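The “keyword logic and Boolean connectors” mentioned in the bullets above can be illustrated with a toy sketch: a terms-and-connectors style query (fiduciary AND ERISA AND (limited OR narrowed)) run over invented case summaries. This is only a schematic of the search logic, not Westlaw’s actual syntax or engine.

```python
# Rough sketch of Boolean "terms and connectors" logic applied to a toy set of
# case summaries. The cases and summaries are invented examples.

cases = {
    "Smith v. Acme (2d Cir. 2019)": "fiduciary duty limited under ERISA plan terms",
    "Jones v. Widget Co. (2021)": "negligence claim, no fiduciary relationship found",
    "Doe v. TrustCorp (2d Cir. 2020)": "ERISA fiduciary liability narrowed on appeal",
}

def matches(text: str, all_of=(), any_of=()) -> bool:
    """AND across `all_of`, OR across `any_of` -- crude Boolean connectors."""
    text = text.lower()
    return all(t in text for t in all_of) and (
        not any_of or any(t in text for t in any_of)
    )

# fiduciary AND ERISA AND (limited OR narrowed)
hits = [name for name, summary in cases.items()
        if matches(summary, all_of=("fiduciary", "erisa"),
                   any_of=("limited", "narrowed"))]
print(hits)  # ['Smith v. Acme (2d Cir. 2019)', 'Doe v. TrustCorp (2d Cir. 2020)']
```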
2. Patent-search digitization and early citation analytics
When the USPTO digitized prior-art databases in the 1990s, the most technically fluent patent attorneys quickly learned to query the system themselves. They no longer waited for research memos from staff; instead, they treated search as an extension of invention strategy.
- Seeing results directly allowed them to spot faint conceptual overlaps or “design-around” opportunities that assistants, working from instructions, would not recognize.
- The best practitioners developed personal search heuristics—primitive prompt engineering—that foreshadow today’s iterative dialogue with AI.
Again, the benefit accrued to those with domain depth who experimented early and treated the tool as a cognitive partner.
3. Word processors and the shift from dictation to drafting
In the 1980s, some senior lawyers were notorious for dictating briefs to secretaries. Others began to write and edit electronically themselves. Those early adopters discovered that direct manipulation of text fostered tighter logic, clearer structure, and faster iteration. The same “hands-on” dynamic appears in Call’s framework: stars gain disproportionate benefit when they can immediately test, refine, and re-prompt a system rather than relaying instructions through intermediaries.
4. Data-driven discovery and e-discovery software
During the 2000s e-discovery revolution, most partners delegated review dashboards to litigation-support teams. But a few lead counsel personally explored the ranking algorithms and probability metrics. They learned how small parameter changes altered what surfaced—skills that later made them indispensable in complex, high-volume cases. As with AI today, understanding the system’s strengths and blind spots yielded strategic insight unavailable to those who merely consumed summaries.
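To illustrate the point about parameter sensitivity in e-discovery review, here is a toy sketch in which a single relevance threshold determines which documents surface at all. The document names and scores are invented; real platforms use far more elaborate predictive-coding models.

```python
# Toy illustration of the e-discovery point: a small change to a review
# threshold changes which documents surface. Names and scores are invented.

documents = {
    "board_minutes_2019.pdf": 0.91,
    "vendor_email_thread.msg": 0.74,
    "draft_indemnity_memo.docx": 0.69,
    "holiday_party_invite.msg": 0.12,
}

def surfaced(scores: dict, threshold: float) -> list:
    """Return documents whose predicted relevance meets the review threshold."""
    return sorted((name for name, s in scores.items() if s >= threshold),
                  key=scores.get, reverse=True)

print(surfaced(documents, threshold=0.75))  # ['board_minutes_2019.pdf']
print(surfaced(documents, threshold=0.65))  # adds the email thread and the memo
```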
5. The through-line
Across all these examples:
- The technology flattened routine work, but
- Experts who stayed close to it extracted unique insight, built better mental models of their domain, and strengthened credibility with clients and courts.
Thus, the “superpartner scrolling through Westlaw results at midnight” was not micromanaging—it was performing the same high-leverage cognitive integration that Call now describes as AI complementarity. Just as those earlier tools rewarded curiosity, meta-learning, and interpretive judgment, generative AI will reward the modern equivalent: crafting prompts, evaluating model reasoning, and combining machine output with human strategy.
6. Conclusion
So yes—history supports your intuition. The legal superstars who once preferred to see the West archive or patent corpus firsthand are the direct ancestors of today’s partners who personally probe ChatGPT or Co-Counsel for arguments and analogies. In each wave, technology magnified the performance of those who didn’t outsource the thinking but used the new tool to deepen it—precisely the pattern Call predicts for AI-age superlawyers.
##
Matthew Call in these 2 PDFs argues that superstars will soar highest when AI is available. Specifically, what suggestions does he have to help more average employees nonetheless succeed with AI?
##
Across both the academic article (Human Resource Management, 2025) and the Wall Street Journal essay (Oct. 12 2025), Matthew Call argues that AI will magnify—not narrow—performance gaps between “star” and average employees. However, he also offers clear strategies to help non-stars succeed in an AI-intensive workplace.
🔹 Key Problems He Identifies
- Stars master AI first, using deep domain expertise and structured work habits to get better results.
- Status bias (the “AI-specific Matthew Effect”) causes observers to attribute AI-assisted success to talent in stars—but to the tool in average workers.
- Average employees therefore risk both weaker AI skills and reduced recognition for good work.
🔹 Call’s Practical Suggestions for Organizations
In the WSJ article, Call recommends three concrete steps to help average performers thrive with AI:
1. Encourage broad experimentation.
   - Create “AI sandbox” time for all staff to test tools without fear of mistakes.
   - Pair early adopters with peers for cross-training.
   - Offer AI-literacy programs covering prompt design, evaluation of AI output, and task delegation.
2. Spread AI know-how openly.
   - Capture and share effective prompts, workflows, and strategies so discoveries don’t stay with stars.
   - Provide structured templates that teach average employees how to organize work so AI performs optimally.
3. Redesign evaluation and credit systems.
   - Require AI-use disclosure to make contributions transparent.
   - Train managers to recognize bias that over-credits stars.
   - Adjust performance metrics to fairly assess AI-augmented work regardless of employee status.
🔹 Supporting Ideas from the Academic Paper
In his scholarly framework, Call extends these ideas conceptually:
- “Personal AI tools” (employee-controlled) raise individual bargaining power, allowing people—especially proactive ones—to create portable, high-value outputs.
- “Enterprise AI systems” (firm-controlled) can trap average employees in less portable workflows but also offer collective learning platforms if designed inclusively.
- He urges firms to balance autonomy, transparency, and training so that AI enhances rather than entrenches inequality.
🔹 In Summary
Call’s central message is that AI rewards expertise, structure, and early experimentation, but companies can narrow the gap by:
- democratizing experimentation,
- institutionalizing AI-skill sharing, and
- updating evaluation systems for fairness and transparency.
Together, these steps let more “average” employees capture AI’s benefits instead of watching superstars soar away.
###