Dena Fradette

Nov 14 · Updated: Nov 22

The real bottleneck isn’t AI – it’s us (humans)
Over the past year, two narratives about enterprise AI have been running in parallel. On one side, MIT-affiliated Project NANDA researchers warn of a widening “GenAI divide,” reporting that “just 5% of integrated AI pilots are extracting millions in value, while the vast majority remain stuck with no measurable P&L impact” (Challapally et al., 2025). When “95% of organizations are getting zero return” on their AI investments, this is not just a model problem; it’s a management problem (Challapally et al., 2025).
On the other side, Boston Consulting Group (BCG) has pointed to what many of us see in the field every day: the real bottleneck is people and processes. As BCG puts it, successful AI programs follow a 10-20-70 balance: 10% algorithms, 20% tech and data, and 70% change in the way the business actually works (Apotheker et al., 2025; Boston Consulting Group, 2024).
Taken together, these perspectives are not in conflict; they’re a map. The first tells us the impact gap is real and costly. The second tells us where to look to close this gap. In my experience, that “70% people and processes” is rarely cultural reluctance – most executives are under pressure to build an AI strategy. The roadblock is simpler: it’s hard to drive change around something opaque. The inherent black-box nature of LLMs makes AI strategy hard to own. If leaders can’t interrogate a system’s reasoning, trace a decision to policy, or prove compliance on demand, they will not sign, deploy, or scale – no matter how impressive the demo is.
BCG’s 10-20-70 and why it’s a call to action
Enterprises don’t fail to scale AI because their models are weak; they fail because their organizations can’t see themselves in the system’s decisions. The data bears this out. Project NANDA’s analysis argues the divide is “not…driven by model quality” so much as by method: enterprise fit, integration, and learning (Challapally et al., 2025). Generic LLM chat experiences may look implemented, but they “mask” weak impact when the underlying logic can’t be interrogated or audited. In other words, transformation is a function of approach, not model size.
If 70% of the challenge is change management, the solution isn’t merely to ship better models, but to make adoption intelligible, governable, and owned by the business. A glass-box neuro-symbolic layer that captures causal trails and embeds policy checks gives executives visibility and agency – the conditions for change.
The agency gap: why black-box AI makes change management harder
We have spent years talking about data sovereignty – where the data lives and how it is governed. We’ve spent far less time on human sovereignty: who inside the enterprise has the authority, tooling, and domain expertise to question, override, and redirect machine decisions. In most LLM-centric deployments, the answer is: no one. Reasoning is effectively opaque – an artifact of statistical associations rather than interpretable logic. Even with “human-in-the-loop,” the human is often reactive, not given the agency to interrogate an argument or a conclusion.
The majority of enterprise AI solutions are built on LLMs, which are inherently black boxes: decisions are routed through opaque statistical associations (the weights). Enterprise AI sales teams ask executives to trust outputs they cannot trace, align, or redline – and then wonder why buying committees hesitate. Sales leaders will call this “resistance to change,” but to be more precise (and fair), it’s a lack of agency – human agency – within the enterprise AI solution that creates the hesitation. People will not own what they cannot question.
Glass-box reasoning as a change-management catalyst
A glass-box approach restores the missing (human) agency and brings trust back into the loop. Latent-Sense Technologies’ (LST) neuro-symbolic reasoning ecosystem (ReX, rxMaps, and Orchestrator) makes every decision traceable, explainable, and policy-aware. Decisions flow through policy-checked steps that legal, risk, and operations can see and adjust in real time. Compliance isn’t an afterthought; it’s designed in. Every action leaves a transparent audit trail that shows what happened, why it happened, and how to refine it next time.
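To make the pattern concrete, here is a minimal, purely illustrative sketch of a policy-checked decision step that records an audit trail. The names (PolicyRule, AuditEntry, run_step) are hypothetical and are not LST’s API; the point is simply that every step carries its rationale and the policy checks it passed or failed.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable, List

# Hypothetical, illustrative types -- not LST's actual API.

@dataclass
class PolicyRule:
    name: str
    check: Callable[[dict], bool]   # returns True if the proposed action complies

@dataclass
class AuditEntry:
    timestamp: str
    action: str
    rationale: str                  # "why it happened"
    policies_passed: List[str]
    policies_failed: List[str]

def run_step(action: str, rationale: str, context: dict,
             policies: List[PolicyRule], trail: List[AuditEntry]) -> bool:
    """Execute one reasoning step only if every policy check passes,
    and record the outcome either way so the trail stays complete."""
    passed = [p.name for p in policies if p.check(context)]
    failed = [p.name for p in policies if not p.check(context)]
    trail.append(AuditEntry(
        timestamp=datetime.now(timezone.utc).isoformat(),
        action=action,
        rationale=rationale,
        policies_passed=passed,
        policies_failed=failed,
    ))
    return not failed   # the step proceeds only when no policy is violated

# Example: a credit-limit increase gated by a (hypothetical) KYC policy.
policies = [PolicyRule("kyc_complete", lambda ctx: ctx.get("kyc_verified", False))]
trail: List[AuditEntry] = []
approved = run_step(
    action="increase_credit_limit",
    rationale="12 months on-time payments; utilization below 30%",
    context={"kyc_verified": True},
    policies=policies,
    trail=trail,
)
```

In a sketch like this, legal, risk, and operations can read the trail directly and adjust the policy set without retraining anything – which is the agency the black-box pattern takes away.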
Most importantly, the glass-box design empowers subject matter experts instead of sidelining them. Business and technical teams share the same view of how decisions are made, giving everyone – executives, compliance officers, and operators alike – a real seat at the table.
This isn’t just about regulation. This is about visibility, accountability, responsible AI, and shared agency – the catalyst that turns resistance into trust and confidence.
Bridging machine cognition and human cognition
Neuro-symbolic AI is not just “more explainable AI.” It combines human reasoning (causal, policy-based, context-aware) with machine scale (speed, pattern recognition, data scope). When the reasoning behind decisions is explicit, auditable, and transparent, people stop fearing the machine and start embedding it into their evidence-informed decision making. That’s the moment AI stops being a pilot and, latently, becomes part of the organization’s decision fabric.
Why this matters for strategic partners
Why does this matter to partners focused on scaling transformative AI solutions? Because scaling transformation requires scaling trust and alignment. BCG’s 10-20-70 isn’t a slogan; it’s an operating reality (Apotheker et al., 2025). The majority of the work (and the risk) lives in the human layer.
A glass-box AI approach is a practical accelerator for that 70%: control points are explicit, so legal and risk can engage early. It brings frontline subject matter experts into co-design, because the system reflects their reasoning rather than replacing it. And it shortens sales cycles – not by adding flash to the demo, but by reducing ambiguity in procurement, security review, and executive sign-off. When decision makers can see how the system thinks, they can see themselves in it.
This doesn’t negate the importance of the “10” and the “20.” You still need capable models and a robust stack. MIT’s report is explicit about the implementation edge: “External partnerships see twice the success rate of internal builds” (Challapally et al., 2025). That isn’t a knock on internal teams; it’s a recognition that scaling change requires capabilities and patterns most organizations don’t keep on the shelf.
If you want durable ROI, you must solve the human-machine boundary. In a glass-box neuro-symbolic architecture, the reasoning layer functions like an internal audit trail and an external assurance artifact. It’s evidence that you can hand to a regulator, a board committee, or a customer. It’s evidence that the system made the decision it was supposed to make, for the reasons your policies and regulations allow.
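Continuing the illustration (still hypothetical names, not a real API), the same trail can be exported as an assurance artifact: a plain report stating, for each step, what was decided, why, and which policy checks it satisfied.

```python
from typing import List

def assurance_report(trail: List[dict]) -> str:
    """Render an audit trail (a list of step records) into a human-readable
    report that can be handed to a regulator, board committee, or customer."""
    lines = ["Decision assurance report", "=" * 25]
    for i, entry in enumerate(trail, start=1):
        lines.append(f"Step {i}: {entry['action']} at {entry['timestamp']}")
        lines.append(f"  Rationale: {entry['rationale']}")
        lines.append(f"  Policies passed: {', '.join(entry['policies_passed']) or 'none'}")
        lines.append(f"  Policies failed: {', '.join(entry['policies_failed']) or 'none'}")
    return "\n".join(lines)

# Example record shaped like the audit entries sketched earlier.
example_trail = [{
    "timestamp": "2025-11-14T15:04:05+00:00",
    "action": "increase_credit_limit",
    "rationale": "12 months on-time payments; utilization below 30%",
    "policies_passed": ["kyc_complete"],
    "policies_failed": [],
}]
print(assurance_report(example_trail))
```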
That’s the change-management unlock: people stop fearing the machine and start co-reasoning and co-authoring with it.
For strategic partners, this reframes the go-to-market. Endless proofs-of-concept (POCs) rarely convert when the objection is trust. The right move is to define the glass-box AI layer as part of the solution from day one and contract for transformation, not just experimentation.
From POC to MSA – the strategy for transformational AI
POCs don’t fail because they don’t work; they fail because they don’t fit. They don’t fit with how decisions are justified, how risk is governed, and how accountability is shared. A glass-box AI stack lets you start where the risk really lives and enables traceability, policy alignment, and human oversight.
The path is straightforward:
· Treat reasoning as product, not by-product.
· Elevate policy to a first-class dependency of your AI.
· Make human agency a design constraint, not a training afterthought.
If we do that, the “GenAI divide” won’t be a cautionary headline; it will be a closing gap. That’s the point of glass-box AI reasoning. It demystifies the system, turns leaders into editors instead of spectators, and makes the ledger of decisions part of day-to-day operations – not an after-action report. When executives can interrogate why and not just read what, they sign faster, deploy bolder, and scale further.
Transformation only happens when people feel agency in the change. Our job isn’t just to build AI that works – it’s to build AI people can interrogate, trust, and work with.
References
Apotheker, J., Duranton, S., Lukic, V., de Bellefonds, N., Iyer, S., Bouffault, O., & de Laubier, R. (2025, January 15). From potential to profit: Closing the AI impact gap. Boston Consulting Group.
Boston Consulting Group. (2024, December 12). The leader’s guide to transforming with AI. Boston Consulting Group.
Challapally, A., Pease, C., Raskar, R., & Chari, P. (2025, July). The GenAI divide: State of AI in business 2025. MIT Media Lab, Project NANDA.