In this humorous set of essays by ChatGPT, we meet the following pieces:
1) Business pitch for $100M investment in COBOL-based AI.
2) A VC business associate evaluates this proposal - for a partner who likes it.
3) Two years later, they now want $1B for Series B. We evaluate again!
---------
4) A "pre-reading" essay by ChatGPT - I asked ChatGPT to discuss "an LLM based on COBOL."
Here's the original input-output with prompts.
https://chatgpt.com/share/6965aed2-e6ac-8005-b687-6b55d5cf01e2
####
####
ESSAY ONE: PITCH FOR $100M FOR COBOL AI
####
Below is a deliberately straight-faced, venture-grade business plan for something that is obviously a little mad — but in the way Silicon Valley often is just before it works.
IronMind AI
A COBOL-Native Large Language Model for the World’s Mission-Critical Systems
Executive Summary
IronMind AI is building the world’s first COBOL-native Large Language Model (LLM) execution environment, designed to bring generative AI directly into the heart of the world’s most mission-critical infrastructure: banks, insurers, governments, airlines, and healthcare payers that still run on COBOL.
Over $3 trillion in global economic activity per day depends on COBOL systems that cannot be easily migrated, modernized, or exposed to cloud-native AI. These systems remain locked in batch jobs, flat files, VSAM records, and green-screen transaction monitors that are invisible to modern AI platforms.
IronMind bridges that gap. We allow COBOL systems to call, embed, reason with, and govern LLMs using COBOL-native semantics — records, jobs, transactions, and audit trails — rather than forcing enterprises to rewrite their core systems.
We are not building “a COBOL model.”
We are building the AI runtime for the world’s financial nervous system.
We are raising $100 million to build IronMind’s compiler, runtime, inference layer, and enterprise distribution platform.
The Market Opportunity
There are still over 200 billion lines of COBOL running today. They execute:
- 95% of ATM transactions
- 70% of global financial transaction volume
- Core systems for Medicare, Social Security, airline reservations, and national tax systems
These organizations want AI, but they cannot:
- Move sensitive workloads to the public cloud
- Expose production data to uncontrolled black-box models
- Replace COBOL systems without catastrophic risk
The result: the most valuable data in the world is AI-dark.
IronMind makes it AI-literate.
This is not a small niche — it is the core of the global economy.
The Product
IronMind delivers a COBOL-native AI runtime that allows LLMs to be executed, governed, and audited from within COBOL programs.
What makes IronMind unique
We do not ask enterprises to “send data to an AI.”
We embed AI into their existing computational fabric.
IronMind provides:
- COBOL-Callable LLM APIs
  Developers can invoke AI with simple COBOL verbs.
- Record-Based Prompting
  IronMind converts COBOL data structures (PIC, OCCURS, REDEFINES) into structured prompts, ensuring no hallucinated fields and deterministic mapping back into COBOL records.
- Deterministic Execution Modes
  IronMind supports:
  - Audit-grade deterministic inference
  - Regulated randomness
  - Replayable model execution
- On-Prem and Air-Gapped AI
  The model runs next to the mainframe or z/OS environment, not in a public cloud.
- Explainability for Regulators
  Every inference is logged, replayable, and attributable — critical for banking, healthcare, and government.
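The "record-based prompting" idea above can be sketched in a few lines of Python. This is purely illustrative (IronMind is fictional, and the simplified copybook below is invented): a fixed-width record is sliced into labeled fields for the prompt, and the model's reply is forced back into the same PIC-sized fields so no hallucinated widths can leak into the record.

```python
# Hypothetical sketch of "record-based prompting": COBOL-style fixed-width
# fields are mapped to a structured prompt, and the model's reply is mapped
# deterministically back into the same layout. Illustration only.

FIELDS = [            # simplified copybook: (field name, width), all PIC X(n)
    ("CLAIM-ID", 8),
    ("AMOUNT", 7),
    ("STATUS", 4),
]

def record_to_prompt(raw: str) -> str:
    """Slice a fixed-width record into labeled fields for the prompt."""
    parts, offset = [], 0
    for name, width in FIELDS:
        parts.append(f"{name}: {raw[offset:offset + width].strip()}")
        offset += width
    return "Review this claim record:\n" + "\n".join(parts)

def reply_to_record(values: dict) -> str:
    """Map a (validated) model reply back into the fixed-width layout."""
    out = []
    for name, width in FIELDS:
        value = values[name][:width]     # truncate: no hallucinated widths
        out.append(value.ljust(width))   # pad to the declared PIC size
    return "".join(out)

prompt = record_to_prompt("C12345670099.50OPEN")
record = reply_to_record({"CLAIM-ID": "C1234567", "AMOUNT": "0099.50", "STATUS": "PAID"})
```

The round trip is what matters: whatever the model says, the bytes written back always fit the record the surrounding COBOL program already declares.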
The Technology
IronMind uses a hybrid architecture that turns COBOL from a limitation into a strategic moat.
Core idea
COBOL will not perform matrix multiplications.
Instead, we build:
- A COBOL-native AI compiler
- A runtime engine
- A GPU inference backend
- A data-safe orchestration layer
COBOL controls:
- What data can be used
- Which model may be invoked
- What the output may influence
- How the result is audited
The LLM becomes a subroutine of the mainframe.
This gives us something modern AI lacks:
institutional trust.
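The control relationship in the list above can be made concrete with a small sketch (all job names, policy fields, and model names here are invented for illustration; no such API exists): a policy table consulted before any inference, deciding what data may be used, which model may be invoked, and what the output may influence.

```python
# Hypothetical sketch of the orchestration rules: before any inference, a
# policy table gates the data, the model, and the output's blast radius.
# Every name below is invented; IronMind is fictional.

POLICY = {
    "claims-review": {
        "allowed_fields": {"CLAIM-ID", "DIAGNOSIS", "AMOUNT"},
        "allowed_model": "onprem-finetuned-v2",
        "may_influence": {"ROUTING"},    # never the payment amount itself
    },
}

def authorize(job: str, fields: set, model: str, target: str) -> bool:
    """Return True only if the request satisfies every policy constraint."""
    rule = POLICY.get(job)
    if rule is None:
        return False
    return (fields <= rule["allowed_fields"]      # only approved data leaves
            and model == rule["allowed_model"]    # only the approved model runs
            and target in rule["may_influence"])  # output scope is bounded

ok = authorize("claims-review", {"CLAIM-ID", "AMOUNT"}, "onprem-finetuned-v2", "ROUTING")
denied = authorize("claims-review", {"SSN"}, "onprem-finetuned-v2", "ROUTING")
```

In this framing the model really is a subroutine: it runs only when the host's rules say it may, on data the host has released, toward outcomes the host has bounded.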
Why COBOL is an Advantage
COBOL is not slow.
COBOL is careful.
Its strengths:
- Strict data schemas
- Explicit data lineage
- Deterministic batch processing
- Institutional stability
These are exactly what regulators, banks, and insurers require from AI — and what modern Python-first LLM stacks do not provide.
IronMind gives enterprises something they cannot get from OpenAI, Anthropic, or open-source models alone:
AI that fits their governance model.
Competitive Landscape
| Player | Strength | Weakness |
|---|---|---|
| OpenAI, Anthropic | Powerful models | Cannot integrate with mainframes or meet regulatory constraints |
| Cloud AI platforms | Scalable | Cloud-only, data-sovereignty risks |
| Legacy system integrators | Access to COBOL | No AI expertise |
| IronMind | AI + COBOL runtime | New category |
We are not competing on model quality.
We are competing on where AI is allowed to exist.
Business Model
IronMind sells:
- Enterprise licenses per mainframe
- Per-inference pricing
- Regulatory compliance modules
- Premium models trained for finance, healthcare, and government
Our customers already pay tens of millions per year to keep their COBOL systems alive. We become part of that budget.
Go-to-Market
We sell through:
- IBM mainframe partners
- Accenture, Capgemini, TCS, Cognizant
- Direct enterprise deals with banks and insurers
Our wedge use cases:
- Claims adjudication
- Fraud review
- Customer service
- Regulatory reporting
- Legacy code modernization
Risks and Workarounds
Risk: COBOL is obsolete
Reality: COBOL is immortal because it is too expensive to replace.
Risk: LLMs are nondeterministic
We enforce deterministic modes and replayable inference.
Risk: GPU integration with mainframes
We use secure side-car inference nodes with hardened interfaces.
Risk: AI hallucinations
We use record-constrained generation — outputs must conform to COBOL data structures.
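The "record-constrained generation" workaround can be sketched as a validator (a minimal illustration with an invented layout; the fictional product's actual mechanism is unspecified): any model output that does not conform to the declared picture clauses is rejected before it can touch a production record.

```python
import re

# Hypothetical sketch of "record-constrained generation": model output is
# accepted only if every field matches its declared COBOL-style picture
# clause. Field names and patterns below are invented for illustration.

LAYOUT = [                     # (field, regex standing in for the PIC clause)
    ("DECISION", re.compile(r"^(APPROVE|DENY|REVIEW)$")),  # enum-valued field
    ("CONFIDENCE", re.compile(r"^\d{3}$")),                # PIC 9(3)
]

def constrain(output: dict) -> dict:
    """Return the output only if every field matches its declared picture."""
    for field, pattern in LAYOUT:
        value = output.get(field, "")
        if not pattern.match(value):
            raise ValueError(f"{field}={value!r} violates record layout")
    return output

valid = constrain({"DECISION": "APPROVE", "CONFIDENCE": "087"})  # conforms
try:
    constrain({"DECISION": "MAYBE?", "CONFIDENCE": "087"})       # hallucinated enum
    rejected = False
except ValueError:
    rejected = True
```

The point of the design is that a hallucination cannot silently become data: a nonconforming output is an error, not a record.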
Why This Can Be a $10B Company
IronMind becomes the AI operating system for the world’s financial, healthcare, and government cores.
Every transaction that touches money, health, or law flows through COBOL.
We don’t replace it.
We make it intelligent.
Closing
COBOL still runs the world.
AI is about to decide how it runs.
IronMind is where those two futures meet.
####
####
ESSAY 2: EVALUATING $100M FOR SERIES A
####
Internal First-Read Memo
Project: IronMind AI — COBOL-Native LLM Runtime
From: [Associate, Technology Group]
To: [Entertainment & Media Partner]
I read the IronMind proposal carefully, and I want to start by saying that I understand why it resonated with you. It has something that many technical AI pitches currently lack: a strong narrative anchor. “AI meets the world’s financial nervous system” is a compelling story, and it speaks to real anxiety in large institutions about being left behind by generative AI while still being trapped inside legacy systems. From a positioning standpoint, this is unusually sharp. Most enterprise AI companies describe features; this one describes a civilizational bottleneck, which is exactly the kind of framing that supports a large venture outcome.
On the substance, the founders are correctly identifying a genuine structural tension in the market. The core of global finance, insurance, government payments, and healthcare claims still runs on COBOL and adjacent mainframe ecosystems. These systems are not going away, and they are precisely the systems where generative AI could create enormous economic value — automating reviews, summarizing records, adjudicating exceptions, detecting fraud. In that sense, IronMind is directionally aligned with what we are seeing across every CIO conversation: “We want AI, but we can’t move our crown-jewel data.” Framing COBOL not as a dinosaur but as a governance layer is actually quite clever.
Where the proposal becomes more delicate is at the boundary between metaphor and engineering. “COBOL-native LLM” is doing a lot of conceptual work. In practice, as the plan quietly acknowledges, the intelligence will live in GPUs running modern inference stacks, while COBOL orchestrates, constrains, and audits. That is not wrong — in fact it is probably the only viable architecture — but it means that IronMind is less a new AI engine and more a middleware platform that makes modern AI safe for legacy environments. That is still potentially valuable, but it shifts the competitive landscape from “we invented a new kind of model” to “we are an integration and control plane,” which is a different type of company with different scaling dynamics.
This is not necessarily a negative. Some of the most durable enterprise software franchises (SAP, Oracle, ServiceNow) were built not on revolutionary algorithms but on being the system of record or system of control for complex organizations. IronMind is implicitly pitching itself as the “AI system of record” for mainframe worlds. If they can really become the layer through which regulated institutions are willing to let AI touch money, health, or law, that could indeed justify very large outcomes.
At the same time, the go-to-market and defensibility here are subtle. Large banks and governments already buy from IBM, Accenture, TCS, and similar firms that sit directly in the COBOL ecosystem. Those firms could, in principle, build or bundle something similar if the demand proves real. IronMind’s differentiation would have to come from execution, trust, and regulatory credibility more than from pure technology, which is harder to assess at this early stage but not inherently impossible.
Stepping back, I think what you responded to — and what the pitch genuinely has — is a kind of mythic plausibility. It takes something everyone assumes is dead (COBOL) and reframes it as the gatekeeper to the future. That’s a powerful story, and sometimes those stories turn out to be truer than the more obviously “rational” ones. Whether it becomes a category-defining platform or a very well-positioned enterprise tool probably depends less on the elegance of the idea than on whether the team can persuade conservative institutions that IronMind is the safest way to let AI inside their walls.
In that sense, I would say this: the proposal is neither obviously quixotic nor obviously inevitable. It sits in that interesting middle zone where a strong narrative, a real market tension, and a nontrivial technical bet overlap. Those are often the deals that look strange early and clear only much later — which, depending on one’s appetite for that kind of ambiguity, can be read in more than one way.
####
####
ESSAY 3: EVALUATING $1B FOR SERIES B
####
Internal Investment Committee Memorandum
Project: IronMind AI — Series B Expansion Round ($1B)
From: [Associate, Technology Group]
To: [Entertainment & Media Partner]
Two years in, IronMind AI occupies an interesting and, in some respects, familiar place in the venture lifecycle: it is neither the clean breakout success nor the obvious write-off. It has, however, crossed a number of technical and commercial thresholds that most enterprise-AI startups do not reach, particularly in a domain as conservative and unforgiving as the mainframe ecosystem. That alone explains, I think, why the story continues to resonate with you — and why it continues to attract curiosity well beyond the Valley.
On the positive side, the company has demonstrably built something real. The IronMind COBOL runtime now runs in production-adjacent environments at three Tier-1 banks and two national insurance carriers. While these are still classified as “limited deployment” rather than full enterprise rollouts, they are not pilots in the usual PowerPoint sense. The system is ingesting real COBOL records, executing GPU-based inference on-prem, and returning auditable outputs into legacy transaction workflows. That alone took most of the $100M Series A, and it represents a nontrivial engineering and political accomplishment inside organizations that normally take years to approve anything new.
IronMind also met several of its original technical milestones. It successfully delivered deterministic replay, record-constrained generation, and regulatory logging that satisfy internal model-risk teams at two of the banks. That is a higher bar than many cloud AI vendors ever face. From a credibility standpoint, IronMind is now “real” in a way that was not obvious when we first reviewed the company.
At the same time, it is fair to say that some of the more ambitious expectations set in the original plan were not met. The company did not achieve the level of model-agnostic portability it hoped for; most customers are currently tied to a narrow set of inference stacks. Commercial traction has also lagged the narrative. After two years and $100M, IronMind has approximately $18M in annualized revenue under contract, with another $40M in signed but not yet fully ramped commitments. That is meaningful, but it is not yet the kind of scale that obviously supports a near-term IPO-style trajectory.
The challenge, as management frames it, is that IronMind sits in a category that moves at institutional rather than venture speed. Banks and governments do not roll out AI into payment, claims, and fraud systems the way SaaS companies deploy new CRMs. What IronMind has arguably done is de-risk the technology and the regulatory posture; what it has not yet done is unlock the kind of rapid horizontal scaling that public-market investors expect to see.
This is where the Series B story becomes both compelling and delicate. The $1B raise is not being positioned as “more runway to figure it out,” but as a deliberate attempt to force the ecosystem to move faster by doing three things simultaneously: expanding its GPU-backed on-prem footprint, acquiring a smaller mainframe middleware vendor to deepen distribution, and vertically integrating several industry-specific models (banking, insurance, government benefits) so customers can buy outcomes rather than infrastructure.
In other words, IronMind is now asking to become less of a clever bridge and more of a platform — something that sits so deeply in regulated institutions that it becomes part of their operating fabric. That is a big leap, but it is also the point at which companies in this space either become indispensable or fade into the background.
I think it is also worth noting that, despite the slower-than-hoped revenue ramp, IronMind has not been displaced. None of the hyperscalers or large integrators have meaningfully solved the “AI inside COBOL” problem yet. The fact that IronMind is still the company banks call when they want to try this at all is itself a form of validation, even if it has not yet translated into explosive growth.
So if I had to summarize where we are: IronMind has proven that the problem is real and that their solution works. They have not yet proven that it scales as fast or as cheaply as the original vision suggested. The proposed $1B round is essentially a bet that this gap is one of timing and institutional inertia rather than of fundamental product-market fit.
Given how often transformational enterprise platforms look slow and awkward in their first few years, that is not an unreasonable bet — though it is, inevitably, a patient one. In that respect, I can see why this story continues to appeal to you and to the kind of long-horizon capital that exists outside the traditional VC playbook. Whether that patience will be rewarded is, as usual, something that only becomes obvious after the fact, but IronMind at least remains one of the few companies still plausibly in the running to find out.
####
####
####
Designing a modern, high-performance LLM “in COBOL” is not realistic in the sense people usually mean (i.e., writing the training and inference engine itself in COBOL and expecting competitive performance). It’s possible in principle, but it fights the physics, the toolchain, and the ecosystem at every step.
Why it’s unrealistic
1) LLMs are basically giant linear algebra engines.
Training and inference are dominated by dense matrix multiplies, attention kernels, layernorms, and GPU/TPU primitives. The winning implementations live in CUDA/HIP, highly optimized C/C++, and compiler stacks like Triton, XLA, TVM, plus libraries like cuBLAS/cuDNN. COBOL’s strengths (record-oriented business logic, batch processing, fixed-format data) don’t map to “fused GPU kernels doing billions of FLOPs per second.”
2) You won’t get access to the performance levers you need.
Even if you wrote a correct COBOL implementation of inference, you’d be doing it largely on CPUs, with poor vectorization and limited control over memory layout and SIMD/GPU offload. Performance would be orders of magnitude behind a C++/CUDA stack for any nontrivial model.
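Points (1) and (2) can be made concrete with a toy example (pure Python, deliberately naive): even just the two attention products of a single layer at a 4k-token context come to roughly 10^11 multiply-adds, which is why the winning implementations push this arithmetic into fused GPU kernels rather than CPU loops in any language, COBOL included.

```python
# Toy illustration: transformer inference is dominated by dense matrix
# multiplies. A naive matmul makes the cost structure explicit; real stacks
# run the same arithmetic in cuBLAS/CUDA kernels at vastly higher throughput.

def matmul(a, b):
    """Naive O(rows * inner * cols) matrix multiply over lists of lists."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)] for i in range(rows)]

def attention_flops(seq_len: int, d_model: int) -> int:
    """Operation count for just the Q@K^T and scores@V products of one layer."""
    qk = seq_len * seq_len * d_model   # scores = Q @ K^T
    pv = seq_len * seq_len * d_model   # out    = softmax(scores) @ V
    return 2 * (qk + pv)               # two ops per multiply-add

product = matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])
flops = attention_flops(4096, 4096)    # ~2.7e11 ops for two products alone
```

Nothing about COBOL forbids writing `matmul` this way; the problem is that nothing about COBOL (or interpreted Python, for that matter) gets you the vectorized, memory-tiled, GPU-resident version that makes the 10^11 operations affordable.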
3) The ML ecosystem isn’t there.
No serious COBOL-native equivalents of:
- tensor libraries
- autograd / backprop frameworks
- kernel fusion toolchains
- quantization toolkits
- distributed training runtimes (NCCL-style collectives, sharded optimizers, etc.)
You could “call out” to those things from COBOL, but at that point COBOL is not the engine—it's the orchestration wrapper.
4) Memory layout + numerical types are unforgiving.
LLMs rely on careful handling of FP16/BF16/FP8, quantization (INT8/INT4), and tight control of contiguous tensors, alignment, caching, and bandwidth. COBOL’s data representation model is not built for that world. You’d spend most of your time fighting the language rather than improving the model.
The one way it is realistic
If “based on COBOL” means: COBOL runs the business workflow (requests, routing, logging, permissions, accounting, batch jobs), while the model inference happens in a separate service written in Python/C++/Rust and running on GPUs—then yes, that’s common and sane.
In that architecture:
- COBOL does what it’s good at: transactional reliability, business rules, integration with mainframe data, audit trails.
- The LLM runs where it should: a GPU-backed inference server.
- The interface is clean: REST/gRPC/MQ, or even file/batch integration in older shops.
So COBOL can be the “host organism” and the LLM the “organ.”
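The "host organism" split can be sketched at its simplest, the file/batch integration mentioned above (the 80-column record layout and the stubbed model call are invented for illustration): the mainframe writes fixed-width request records, and a GPU-side service answers in the same fixed-width shape.

```python
# Sketch of the "COBOL as host organism" architecture in its simplest form:
# file/batch integration. The mainframe side writes fixed-width request
# records; this GPU-side service answers in kind. The model call is stubbed,
# and the 80-column layout is an assumption for illustration.

def run_model(text: str) -> str:
    """Stand-in for the real GPU-backed inference call (gRPC/REST in practice)."""
    return "FLAG" if "wire transfer" in text.lower() else "OK"

def process_batch(request_lines: list) -> list:
    """Consume 80-column request records, emit 80-column response records."""
    responses = []
    for line in request_lines:
        txn_id, text = line[:10].strip(), line[10:].strip()
        verdict = run_model(text)
        responses.append(f"{txn_id:<10}{verdict:<70}")  # same fixed width back
    return responses

out = process_batch([
    "TXN0000001 routine card payment".ljust(80),
    "TXN0000002 large overseas wire transfer".ljust(80),
])
```

The COBOL job never sees a tensor; it sees a dataset of records it can sort, audit, and post exactly as it has for decades, which is the whole appeal of this boundary.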
A useful analogy
Trying to build an LLM engine in COBOL is like trying to build a modern jet engine out of Victorian cast iron because you like steam locomotives. You can make a propulsion device, but it won’t be a competitive jet engine—and you’ll lose the ability to use the modern materials and manufacturing that make jets possible.
Bottom line
- As the core training/inference implementation: not realistic for performance or practicality.
- As the enterprise wrapper/orchestrator around an LLM service: very realistic, and arguably a good idea in COBOL-heavy organizations.
If you tell me whether you’re imagining mainframe-only, hybrid mainframe + cloud, or COBOL as a thought experiment, I can sketch a plausible architecture and where the boundaries would sit.