Factory CEO Matan Grinberg's Operating Model Insight Explains Why UKTV and Aerospace Are Solving the Same AI Problem
Matan Grinberg, CEO of Factory, said something this week that sounds obvious until you realize almost nobody is doing it. Scaling AI in software engineering, he told McKinsey, "depends less on flashy demos and more on changes to the operating model and how teams work." Not a product insight. Not a research breakthrough. A management insight. And it kept surfacing all week, from completely different industries.
UKTV gave artificial intelligence to the CMO, not the CTO. Commercial aerospace suppliers who built 30-year moats on technical innovation are discovering they need salespeople. And behavioral researchers studying energy poverty in Cyprus found that reducing cognitive friction matters more than improving the underlying technology. Three stories that look unrelated. They're not.
The Demo Trap Is Costing Enterprises Real Money
Every major AI platform launch of the last three years has led with a demo. The demo shows the capability. The company buys the capability. The capability then sits underutilized inside an organization that hasn't changed how it decides, delegates, or coordinates.
Grinberg's point, spelled out in the McKinsey interview, is that this is the expected outcome - not a surprise. Factory builds AI agents for software engineering workflows, and they've watched enough enterprise deployments to know that the technical ceiling is rarely the binding constraint. The binding constraint is organizational: who owns the output, who reviews it, who is accountable when the agent is wrong, and how work gets decomposed so that agents can handle the right pieces.
This is what Grinberg means by operating model. It's not vague organizational theory. It's a specific set of questions - concrete enough to write down in code, as the sketch after this list suggests - that most companies haven't answered before buying the software:
- Which decisions can an agent make autonomously versus which require human sign-off?
- How does accountability flow when an agent produces an incorrect result?
- What does "reviewing AI work" actually mean, practically, for the humans still in the loop?
Companies that can't answer those questions spend money on demos that never become workflows. This is the kind of pattern STI's research tracks systematically - the gap between enterprise AI adoption announcements and actual productivity impact.
What UKTV Understood That Most Companies Don't
UKTV's Penny Brough explained to MarketingWeek why adding AI to the marketing remit is a "natural fit." She leads the broadcaster's AI strategy, not the technology team. The choice to house AI accountability in the CMO function rather than in IT or engineering is an operating model decision, and it's a consequential one.
Most organizations treat AI as a technology asset - meaning it lives in the CTO or CIO function. The CTO optimizes for technical correctness: model accuracy, latency, infrastructure costs. But if AI is touching consumer decisions, content recommendations, audience segmentation, or brand positioning, then optimizing for technical correctness while ignoring consumer impact is exactly backward. UKTV's logic is that the person accountable for audience outcomes should also be accountable for the tools that shape those outcomes.
This isn't just organizational tidiness. It changes what gets measured, what gets funded, and what gets iterated on. When AI sits in engineering, the feedback loop is technical: did it work? When AI sits in marketing, the feedback loop is commercial: did it move the audience? The second question is harder to answer and therefore more likely to surface the places where the operating model needs to change.
Accountability as Organizational Signal
There's a secondary effect worth noting. Where a company places accountability for AI signals what it actually believes AI does. If you believe AI is an infrastructure layer, you put it in infrastructure. If you believe it shapes consumer experience, you put it with the function that owns consumer experience. UKTV's choice suggests it believes the latter - and that belief tends to produce better outcomes than treating AI as plumbing.
We've written before about why coordination is the real work of strategy - the companies that win aren't necessarily communicating more effectively; they're coordinating expectations across functions more precisely. Giving AI to the CMO is a coordination move as much as it is a technology move.
Aerospace Suppliers and the Death of the Technical Moat
The McKinsey aerospace piece tells a quieter version of the same story. Aviation suppliers built competitive advantages over decades on technical innovation - proprietary materials, specialized manufacturing processes, engineering depth. With few new aircraft platforms on the horizon, those moats are eroding. The opportunity now lies in commercial execution: pricing discipline, customer relationships, aftermarket service quality, supply chain coordination.
This is a significant structural shift. Technical excellence was the product. Commercial execution was considered a secondary capability - important but not the source of differentiation. Now it's the reverse. The engineering is table stakes. The commercial skill is where value accumulates.
Sound familiar? This is precisely what's happening with AI in software. The model capability - the technical excellence - is increasingly commoditized. What differentiates outcomes is organizational: how teams are structured around the technology, what decisions get delegated to it, how accountability is designed. The aerospace suppliers who recognized this early are hiring commercial talent and redesigning their go-to-market. The ones who didn't are watching margins compress despite their engineering depth.
When Capability Becomes Commodity
There's a useful frame from competitive strategy here. Capability becomes a moat when it's rare and hard to replicate. When a capability becomes widely accessible - whether through commoditized suppliers in aerospace or through widely available AI APIs in software - the moat shifts to adjacent capabilities: customer relationships, organizational execution, commercial design.
The aerospace case is instructive precisely because it's happening in a capital-intensive, technically complex industry where you'd least expect it. If technical moats can erode in aviation parts manufacturing, they can erode anywhere. The organizations that saw the signal early are already restructuring around commercial execution, not just the technology they build.
Cognitive Scarcity and the Friction Cost of Change
The behavioral economics angle comes from an unlikely direction. A case study on energy poverty in Cyprus examined why households trapped in energy poverty don't access available assistance programs. The finding: it's rarely because they don't know the programs exist. It's because accessing them imposes cognitive load - forms, deadlines, documentation, eligibility verification. The burden of navigating the system exceeds the cognitive bandwidth these households have to spare.
Researchers found that simplifying access - reducing hassle factors, using social norms as nudges, reframing the eligibility criteria - changed outcomes more than improving the underlying technology. The technology was fine. The human experience of using it was broken.
Organizations adopting AI at scale face a version of this problem. The cognitive load on employees asked to work alongside AI systems - to review outputs, calibrate trust, escalate edge cases, learn new workflows - is frequently underestimated. Companies that deploy AI without redesigning the human experience of working with it are imposing cognitive scarcity on the people who are supposed to benefit. Grinberg's operating model point, viewed through this lens, is partly about cognitive load management: structuring work so that the humans in the system aren't overwhelmed by the overhead of AI collaboration.
What Reducing Friction Actually Looks Like
In software engineering, Factory has documented that the highest-friction moments in AI-assisted development aren't the hard technical problems - they're the ambiguous handoffs, the unclear accountability, the reviews where a developer doesn't know whether to trust the agent's output. Fix those friction points and productivity compounds. Leave them unaddressed and the AI becomes another tool people work around rather than with.
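What fixing the trust-ambiguity friction might look like, as a hypothetical sketch in the same spirit (not Factory's documented mechanism): replace each reviewer's private judgment call with an explicit, shared rule.

```python
def review_depth(tests_passed: bool, coverage_delta: float, files_changed: int) -> str:
    """Decide how much human attention an agent-authored change gets.
    The thresholds here are illustrative assumptions, not recommendations."""
    if not tests_passed:
        return "full review"        # failing tests: a human reads everything
    if coverage_delta < 0 or files_changed > 10:
        return "full review"        # coverage dropped, or the change is large
    if files_changed > 3:
        return "spot check"         # medium-sized change: sample a few files
    return "approve on green"       # small, tested, coverage-neutral: trust it

# A small, well-tested, coverage-neutral change routes to the lightest tier.
print(review_depth(tests_passed=True, coverage_delta=0.02, files_changed=2))
```

Whether those particular thresholds are right is a tuning question. What removes the friction is that a rule exists: the reviewer's job shrinks from "judge everything" to "confirm the policy held."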
The same dynamic applies to UKTV's CMO-led AI strategy and to aerospace suppliers building commercial execution capabilities. The friction points aren't primarily technical. They're procedural and organizational. If you're evaluating how AI adoption is creating or destroying value in your sector, our analysis tools can help surface what the vendor presentations won't.
The Pattern Across All Three Industries
Step back from the individual stories and the pattern is clear. Factory, operating in software engineering. UKTV, operating in broadcasting. Aerospace suppliers, operating in aviation manufacturing. Three completely different industries reaching the same conclusion: the technical capability is accessible. The organizational capability is not.
In each case, the early winners are not the companies with the best technology. They're the companies that asked the organizational question first. Who owns this? What changes when we use it? Where does accountability live? What does the human workflow look like after, not just before?
What This Means for Decision-Making Under Uncertainty
Mortgage rates dropped modestly this week, and the NerdWallet analysis notes that markets are focusing on the long-term structural outlook rather than short-term rate volatility. The same logic applies here: in an environment of genuine uncertainty, organizations making multi-year bets on operating model redesign alongside AI adoption are positioning for durable advantage. Not the next demo cycle. The next decade of organizational design.
The companies investing in operating model alongside AI are betting that organizational capability compounds in ways that model capability doesn't. The evidence from Factory, UKTV, and aerospace suggests they're right.
The Operating Model Is the Product
The conclusion Grinberg reaches in the McKinsey interview is worth sitting with: scaling AI in software engineering depends on changes to how teams work. Not changes to what tools they use. How teams work.
That distinction - tools versus workflows, capability versus operating model - is the variable most organizations optimize last. The ones that optimize it first are building something more durable than a technological advantage. They're building organizational intelligence: the capacity to continuously integrate new capabilities without losing the coordination and accountability structures that make organizations function.
We've covered how the AI agent trust gap is ultimately a human problem, not a technical one. What Grinberg, UKTV, and the aerospace data confirm is that solving it requires organizational work - not more demos.
If your team is working through what AI adoption actually requires at the operating model level, the research frameworks at STI are built to answer those questions with data rather than vendor decks.