In my previous post, I argued that it is a strategic trap for the enterprise to wait for the emergence of an AI Utility before building an AI Factory to deploy its intelligent agents. The utility provides the raw power (tokens). However, the AI Factory requires a machine to process that power. That Proprietary Intelligence Engine is the enterprise’s machine. It is the infrastructure layer where its data and business logic live, both of which are required for realizing impactful and enduring value.
While planning the right AI move, the board and senior executive team often ask: “Why build an AI Factory or even a single AI application? We didn’t build the CRM application we use. We licensed Salesforce’s. Why couldn’t we apply our SaaS Playbook to address our AI needs?”
These are key questions every enterprise, regardless of industry, should be asking. The answer lies in understanding that AI is not just the next generation of enterprise software. It represents a fundamental shift in how the enterprise approaches its business processes.
The SaaS Playbook vs. The AI Reality
For the last 20 years, the “Buy vs. Build” enterprise software debate has been settled. The enterprise licensed cloud-based application and infrastructure software to address its needs. In doing so, it implicitly outsourced the business processes automated by the licensed applications. The customer record became a commodity. The parcel record became a commodity. The SKU record became a commodity.
The result of this transformation was a series of cloud-based standardized enterprise Systems of Record.
Now, software vendors are pitching the same logic for AI applications. “Don’t build your own AI application, or agent. Rent ours. It’s smarter, cheaper, etc.”
But here lies a dangerous trap. An agentic application isn’t just a System of Inference (predicting text) and Action (executing a script). It is a System of Agency.
The Difference Between Action and Agency
A state-of-the-art enterprise application today, such as a CRM application, is a System of Inference and Action. It can:
Infer: Predict a customer will churn.
Act: Execute a rule written by a human: “If churn risk > 80%, send a discount.”
In reality, a human previously decided on the treatment to remediate churn and coded it into the application.
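This pattern can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual code; the function names, the toy churn heuristic, and the 80% threshold are all assumptions standing in for a trained model and a human-authored rule:

```python
# Sketch of a System of Inference and Action: the model only predicts;
# the treatment is a rule a human hard-coded in advance.

CHURN_THRESHOLD = 0.80  # chosen by a human analyst, not the system

def predict_churn_risk(customer: dict) -> float:
    """Stand-in for a trained churn model; here, a toy heuristic."""
    return 0.9 if customer["days_since_last_login"] > 30 else 0.1

def act(customer: dict) -> str:
    # The "Action" is fixed: the system cannot choose a different treatment.
    if predict_churn_risk(customer) > CHURN_THRESHOLD:
        return "send_discount"
    return "do_nothing"

print(act({"days_since_last_login": 45}))  # send_discount
print(act({"days_since_last_login": 2}))   # do_nothing
```

Note that the only decision the machine makes is the prediction; the response to that prediction was decided long before any customer data arrived.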
A System of Agency is different. It accepts a Goal (e.g., “Maximize Lifetime Value”), and autonomously reasons and decides on the Strategy it will use to satisfy the goal. For example, it might reason: “This customer always consumes new content on the first day it becomes available. The customer has never used any of the discount offers we previously offered. Therefore, this customer responds better to new content than discounts. I will send a trailer for a new show instead of a discount coupon.”
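The contrast can be made concrete with a hypothetical sketch. The behavioral signals and treatment names below are assumptions chosen to mirror the example above; a real agent would reason over far richer evidence, but the structural difference is the same: the system receives a goal and selects its own strategy per customer:

```python
# Sketch of a System of Agency: the system accepts a goal and chooses
# its own strategy per customer, rather than executing a fixed rule.

def choose_treatment(customer: dict, goal: str = "maximize_ltv") -> str:
    """Pick the treatment most likely to serve the goal for THIS customer
    (a stand-in for the agent's autonomous reasoning)."""
    consumes_new_content_fast = customer["avg_days_to_watch_new_release"] <= 1
    ignores_discounts = customer["discount_offers_redeemed"] == 0
    if consumes_new_content_fast and ignores_discounts:
        return "send_new_show_trailer"  # responds to content, not price
    return "send_discount"

print(choose_treatment({"avg_days_to_watch_new_release": 1,
                        "discount_offers_redeemed": 0}))
# send_new_show_trailer
```

The rule on the last branch was not written by a human for this customer; it is the strategy the agent derived from this customer's behavior in service of the goal.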
A generic “Churn-Prevention Agent” offered by a vendor, even one adapted to the enterprise’s data, e.g., through fine-tuning or retrieval-augmented generation (RAG), can be strategically inappropriate for the enterprise. It locks the enterprise into the vendor’s model and relies on an agent that uses generalized probabilistic reasoning. While this may be acceptable to enterprises in some industries, it exposes the enterprise to a critical risk: the agent optimizes for the ‘average’ customer, not the enterprise’s customer. This misalignment can erode margins, create unexpected liabilities, and trap the enterprise’s data in a competitor’s moat.
The “Injection” Fallacy: Why You Can’t Just Rent the Brain
This reliance on RAG or fine-tuning falls victim to the Injection Fallacy: it conflates “Knowledge” with “Intelligence.”
Knowledge is your data (customer files, inventory logs, transaction history, and the relations among the data). Obviously, the enterprise can inject this into a third-party AI model.
Intelligence is the logic of how the enterprise uses that data to generate value. It expresses the enterprise’s risk tolerance, pricing strategy, and unique operational heuristics and recipes.
When the enterprise injects data into a vendor’s agent via vector embeddings (in the model’s latent space), it is relying on the agent’s generalized logic to process the enterprise’s specific facts. Essentially, the enterprise is renting the reasoning.
An AI-first enterprise must be capable of seamlessly moving from a prioritized list of candidate AI applications addressing important use cases, to prototypes of these applications, to scaled systems. For this to work, it needs to establish an AI Factory. The AI Stack comprises the technology this factory uses. The enterprise decides which of the stack’s layers it will own and which it will rent. However, we argue that it must own the Proprietary Knowledge Layer.
The enterprise may decide to use a hybrid model, as AI-first companies Intuit, Walmart, and Visa do. They combine a third-party foundation model, in each case OpenAI’s, with their proprietary models. Alternatively, it may opt to own the entire AI stack, relying exclusively on its own AI models, as Recursion Pharmaceuticals and Ginkgo Bioworks do.
The enterprise’s agentic applications are neurosymbolic systems that utilize the AI stack. These systems fuse structured business knowledge, whether encoded as deterministic rules or probabilistic causal models, with the capabilities of the best-suited neural models.
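A minimal sketch of this fusion, with hypothetical offers, scores, and a made-up margin rule: a neural model ranks the options, while a deterministic symbolic layer encodes the business rules that constrain what the agent may actually do:

```python
# Neurosymbolic sketch: a neural model scores options, while deterministic
# business rules (the symbolic layer) constrain the agent's choices.

def neural_score(offer: str, customer: dict) -> float:
    """Stand-in for a learned model ranking offers for this customer."""
    scores = {"deep_discount": 0.9, "trailer": 0.7, "bundle": 0.6}
    return scores.get(offer, 0.0)

def allowed_by_policy(offer: str, customer: dict) -> bool:
    # Symbolic layer: encode the enterprise's margin and risk rules.
    if offer == "deep_discount" and customer["margin"] < 0.15:
        return False  # protect thin-margin accounts from discounting
    return True

def decide(customer: dict) -> str:
    candidates = ["deep_discount", "trailer", "bundle"]
    feasible = [o for o in candidates if allowed_by_policy(o, customer)]
    return max(feasible, key=lambda o: neural_score(o, customer))

print(decide({"margin": 0.10}))  # trailer: the discount is blocked by policy
```

The point of the design is that the enterprise’s proprietary rules, not the model vendor’s generalized reasoning, have the final say over what the agent does.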
The Factory Definition: Selection, Production, and Integration
The AI Factory integrates people, process, and the AI Stack to perform three critical functions that the enterprise cannot rent:
Selection: A common enterprise misconception is that AI applications must always utilize one of the massive (and expensive) Foundation Models, e.g., GPT-5. This often proves to be overkill. In many use cases, a Small Language Model, or a task-specific Vision Language Action Model, is a better selection.
Production: The Factory categorizes agentic systems into two buckets: Commoditized (which can be licensed) and Strategic (which must be owned). For example, like many other enterprises, airlines license agentic applications that automate software development. However, they develop the AI applications used for flight scheduling. The flight scheduling agent of United Airlines reasons differently from Delta’s.
Integration: The AI Factory acts as the anti-silo engine. It must ensure that the appropriate agentic applications are logically interconnected. When a manufacturer’s marketing agent launches a campaign, it must signal the production agent to ramp up inventory, and the distribution agent to secure freight capacity. For the manufacturer, these are must-own agents. They share the constantly updated Proprietary Intelligence Engine. If these agents are licensed, they become siloed black boxes.
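The anti-silo idea can be sketched with a simple event bus; the bus, topic names, and agent handlers below are all hypothetical stand-ins for the shared Proprietary Intelligence Engine that lets must-own agents react to one another instead of operating as black boxes:

```python
# Sketch of anti-silo integration: agents publish events to a shared bus
# so downstream agents can react, instead of acting in silos.
from collections import defaultdict

class SharedBus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self.subscribers[topic]:
            handler(payload)

bus = SharedBus()
log = []

# Production and distribution agents react to the marketing agent's campaign.
bus.subscribe("campaign_launched",
              lambda p: log.append(f"production: ramp inventory for {p['sku']}"))
bus.subscribe("campaign_launched",
              lambda p: log.append(f"distribution: secure freight for {p['sku']}"))

bus.publish("campaign_launched", {"sku": "SKU-42"})
print(log)
```

A licensed, siloed agent cannot participate in this loop: it neither publishes its actions to the shared engine nor subscribes to its peers’ signals.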
The Pragmatist’s Guide: You Don’t Have to Build Everything
Creating the AI Factory, building the AI Stack, and implementing neurosymbolic agentic applications may appear daunting. AI-first early adopter enterprises, like Intuit and Walmart, may approach it as an existential imperative. Enterprises that belong to the late majority category may take a more measured approach. These enterprises may decide to only build the Orchestration and Proprietary Knowledge Layers, implement a few AI applications, and license everything else.
The Rule of Thumb: You can rent the muscles (Compute) and the general brain (Public Models), but you must own the memory (Knowledge and Intelligence) and the nervous system (Orchestration).
Conclusion
The “SaaS Playbook” enabled the enterprise to quickly adopt best‑of‑breed cloud applications, standardize on vendors’ “good enough” business processes, and use lightweight integrations and governance to gain speed, lower upfront costs, and reduce IT friction. It fostered faster experimentation, easier scaling, and access to continuously improved capabilities. However, it produced fragmented data silos, weakened the idea of a single customer system of record, and pushed enterprises to conform to vendor workflows rather than preserving distinctive process advantages.
The “AI Factory” is about owning agency to retain and enhance value.
If the enterprise treats AI as a utility, it becomes a tenant. Much as happened with SaaS applications, it pays rent to hyperscalers and over time loses its memory and nervous system. By treating AI as infrastructure (factory, stack, neurosymbolic systems), the enterprise becomes a landlord. It capitalizes on its memory and builds its nervous system, both of which define its competitive advantage.