
When we published “Scaling AI Requires Scaling Trust,” we expected pushback and debate. What we didn’t expect was how much convergence there would be across people who usually sit in very different corners of the AI world: data and AI leaders, cybersecurity and infra architects, strategists, change practitioners and ethicists.


Across more than 70 expert voices, from founders and architects to national-lab scientists and enterprise transformation leaders, a single theme surfaced again and again:


AI is not failing because models are weak. It’s failing because organizations cannot see, interrogate, or trust how AI makes decisions. The blocker is not technology – it’s opacity.


Even commenters who challenged the statistics or the BCG 10-20-70 framing largely agreed on the underlying pattern: most AI spend is not turning into sustained, explainable business impact. One practitioner reduced the whole debate to a simpler test: “Does it make sense?” If people can’t answer that about an AI-driven decision, adoption stalls, no matter how sophisticated the model is.


What follows is a synthesis of some of the core ideas raised in the comments and reposts – and what these ideas collectively suggest about where enterprise AI needs to go next.


1. AI Fails in the Dark: Ambiguity, Loss of Agency, and Organizational Resistance 


Many commenters described a familiar pattern. AI initiatives stall not because people dislike technology, but because they distrust opaque decision-making. The fear isn’t AI itself. It’s not knowing why AI produces the outputs that affect people’s jobs, workflows, and accountability.


Several commenters made the same point in different language: people are not ready to outsource consequential decisions to a mysterious process. When AI arrives as a black box, employees experience it as something being done to them, rather than with them. In that context:


  • Resistance is often rational self-protection.

  • “Change fatigue” is a symptom of unclear decision logic.

  • Adoption becomes an ongoing negotiation with uncertainty.


As one commenter put it, people don’t resist technology. They resist the ambiguity wrapped around it.

 

2. The Real Bottleneck Is Organizational Cognition, Not Data or Models


Several leaders reframed the problem not as “change management” but as something deeper. Enterprises are not cognitively equipped to adopt AI when its inner logic is unreadable, non-traceable, and non-interrogable.


It’s not enough for technical teams to understand the model.


  • Executives need to know what the system optimizes for, and where it might fail.

  • Risk and compliance teams need to see how decisions relate to policies and controls.

  • Frontline teams need a way to ask, “Why this recommendation instead of that one?” in their own language.


One AI architect in the thread called this a “70% epistemology [problem] – how a company reasons about a system smarter than its org chart.” Companies are wired for predictable, rule-based processes, but modern AI is probabilistic and constantly evolving. That gap makes AI harder to understand and govern. Without a way to surface machine reasoning in human-legible terms (policies, workflows, incentives, constraints), even systems their builders understand well behave like black boxes for everyone outside the AI team.


3. From “Adoption” to “Transplant”


One repost offered a striking analogy: enterprise AI is less like adopting a tool and more “like an organ transplant.”


Consumer technologies (like general-purpose chat interfaces) are easy to absorb. They slip into existing habits with minimal coordination. Impactful enterprise AI is different: it’s introduced into living systems of roles, incentives, controls, and legacy infrastructure. If it’s not clearly defined, well placed, and continuously monitored, the organization will reject it.


In that framing, two conditions determine success:


  1. Clarity of problem and purpose: precisely what is the system meant to improve, and how will we measure it?

  2. Clarity of process and people: where does AI sit in the decision loop, and who is accountable for its outputs and exceptions?


Trust is earned when people can see how the system works, understand its impact, and know who is accountable. With that trust, the organization is far more likely to accept the change rather than wall it off as a threat.


4. Opacity Creates a Fog That Kills ROI – in Models and in Workflows


Multiple practitioners stressed that ambiguity, not accuracy, is the hidden cost center. Without transparent reasoning, even accurate AI feels risky.


That ambiguity shows up as:


  • Hesitation to roll out beyond pilots because leaders don’t know how to explain logic to regulators, auditors, or customers.

  • Quiet workarounds where teams revert to spreadsheets and email because those tools, however manual, are at least understandable.


5. Leaders in Ethics and Governance Point to Accountability Gaps


Commenters working in ethics, risk, and governance pointed to a simple reality: AI must become auditable in the same way finance, security, or supply chains are. Without that auditability, scaling AI becomes either a permanent pilot or a compliance nightmare.


In the original essay, we emphasized auditability. The discussion in the comments added an important nuance: governance is socio-technical. Humans define the rules and responsibilities; AI systems can help apply, monitor, and enforce those rules, but only if their reasoning is transparent and their behavior can be inspected and corrected.

To scale AI responsibly, organizations need:


  • Clear corporate values and policies that define acceptable decisions and trade-offs.

  • Decision pathways where human accountability remains explicit, even when AI is in the loop, and where AI agents can surface policy conflicts, inconsistencies, and edge cases instead of silently routing around them.


Trust then becomes the outcome of aligning AI reasoning with the same disciplines we expect in finance, security, and supply chains: well-defined controls, observable behavior, and documented responsibility.


A glass-box, neuro-symbolic AI framework – with a swarm of reasoning agents, an orchestrator, and humans in the loop – doesn’t replace governance. It strengthens it, by turning policies and constraints into something the machine can follow, explain, and improve on, rather than treat as an afterthought.
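

To make that concrete, here is a minimal sketch – in Python, and purely illustrative rather than a description of any particular implementation – of what “turning a policy into something the machine can follow and explain” can look like: each check returns not only a pass/fail verdict but a human-readable reason that reviewers can inspect and contest.

```python
# Minimal, hypothetical sketch: encoding a governance policy as a machine-checkable
# constraint that returns a pass/fail verdict *and* a human-readable reason.
# Policy names and rules here are invented for illustration only.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Policy:
    name: str                         # e.g. "margin-floor"
    description: str                  # the policy as humans wrote it
    check: Callable[[Dict], bool]     # the machine-checkable version of the rule

@dataclass
class PolicyVerdict:
    policy: str
    passed: bool
    explanation: str                  # surfaced to reviewers, not hidden in weights

def evaluate(decision: Dict, policies: List[Policy]) -> List[PolicyVerdict]:
    """Run a proposed decision through every policy and keep the reasons."""
    verdicts = []
    for p in policies:
        passed = p.check(decision)
        verdicts.append(PolicyVerdict(
            policy=p.name,
            passed=passed,
            explanation=f"{p.description} -> {'satisfied' if passed else 'violated'} for {decision}",
        ))
    return verdicts

# Example: a discount recommendation checked against a margin-protection policy.
policies = [Policy(
    name="margin-floor",
    description="Discounts may not push gross margin below 20%",
    check=lambda d: d["margin_after_discount"] >= 0.20,
)]
for v in evaluate({"customer": "ACME", "margin_after_discount": 0.17}, policies):
    print(v.policy, "passed" if v.passed else "failed", "-", v.explanation)
```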


6. A Few Voices Challenged the Premise and Revealed Another Insight


Not everyone agreed. Some commenters questioned the statistics themselves or the way the problem was framed. Others argued that AI isn’t truly a black box, or that the real issue lies in brittle middleware and non-AI infrastructure.


That critique was useful because it surfaced a deeper problem: there is no shared definition of trust, transparency, or explainability in enterprise AI. The lack of shared language is itself part of the failure loop.


When stakeholders use the same words to mean different things, even well-intentioned initiatives fragment. Part of scaling trust is agreeing on what we are actually promising: not magic, not perfection, but systems whose behavior is visible, contestable, auditable, and aligned with human intent.


In Summary: The Industry Wants Glass-Box AI, Even If Not Everyone Calls It That


Across 70+ comments and reposts, five universal demands emerged, regardless of terminology or background:


  • Transparency: decisions and workflows that are no longer opaque.

  • Reasoning explainability: inspectable logic paths rather than inscrutable outputs.

  • Governability and auditability: decisions that can be tested, challenged and defended.

  • Causal traceability across both decisions and execution: the ability to follow “what caused what” across states and systems with clear evidence.

  • Interrogable decision pathways with clear human agency: people can question, override, and redirect AI, and know who owns the outcome.


These are precisely the design pillars of Latent-Sense Technologies.
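

As a rough illustration of how those five demands might translate into a concrete artifact, the sketch below (Python, with hypothetical field names rather than a published schema) shows a decision-trace record that captures the question, the reasoning steps and their evidence, the policies consulted, the accountable human, and room for an override when someone questions the outcome.

```python
# Hypothetical sketch of a decision-trace record reflecting the five demands above.
# Field names are illustrative assumptions, not a published schema.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ReasoningStep:
    step_id: str
    claim: str                    # what the system concluded at this step
    evidence: List[str]           # inputs or prior steps it relied on (causal traceability)
    policy_refs: List[str]        # policies consulted (governability and auditability)

@dataclass
class DecisionTrace:
    decision_id: str
    question: str                 # what was being decided (transparency)
    steps: List[ReasoningStep]    # the inspectable logic path (reasoning explainability)
    outcome: str
    accountable_owner: str        # the human who owns the outcome (agency)
    overridden_by: Optional[str] = None   # set when a person questions and redirects the AI
    override_reason: Optional[str] = None

trace = DecisionTrace(
    decision_id="2025-11-0042",
    question="Should claim #8731 be fast-tracked?",
    steps=[ReasoningStep(
        step_id="s1",
        claim="Claim matches the low-risk profile",
        evidence=["claim#8731.amount", "policy FT-12 thresholds"],
        policy_refs=["FT-12"],
    )],
    outcome="fast-track",
    accountable_owner="claims-ops-lead",
)
print(trace.decision_id, trace.outcome, trace.accountable_owner)
```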


Glass-box AI is not an enhancement to AI. It is the missing infrastructure that turns intelligence into something organizations can understand, govern, and build around. It is what allows AI to become safe, trustworthy, compliant, and economically more valuable.


And ultimately, ready for real enterprise adoption.


The real bottleneck isn’t AI – it’s us (humans)

Over the past year, two narratives about enterprise AI have been running in parallel. On one side, MIT-affiliated Project NANDA researchers warn of a widening “GenAI divide,” reporting that “just 5% of integrated AI pilots are extracting millions in value, while the vast majority remain stuck with no measurable P&L impact” (Challapally, Pease, Raskar, & Chari, 2025). When “95% of organizations are getting zero return” on their AI investments, this is not just a model problem; it’s a management problem (Challapally et al., 2025).


On the other side, Boston Consulting Group (BCG) has pointed to what many of us see in the field every day: the real bottleneck is people and processes. As BCG puts it, successful AI programs follow a 10-20-70 balance – 10% algorithms, 20% tech and data, and 70% change in the way the business actually works (Apotheker et al., 2025; Boston Consulting Group, 2024).


Taken together, these perspectives are not in conflict; they’re a map. The first tells us the impact gap is real and costly. The second tells us where to look to close this gap. In my experience, that “70% people and processes” is rarely cultural reluctance – most executives are under pressure to build an AI strategy. The roadblock is simpler: it’s hard to drive change around something opaque. The inherent black-box nature of LLMs makes AI strategy hard to own. If leaders can’t interrogate a system’s reasoning, trace a decision to policy, or prove compliance on demand, they will not sign, deploy, or scale – no matter how impressive the demo is.  


BCG’s 10-20-70 and why it’s a call to action

Enterprises don’t fail to scale AI because their models are weak; they fail because their organizations can’t see themselves in the system’s decisions. The data bears this out. Project NANDA’s analysis argues the divide is “not…driven by model quality” so much as by method: enterprise fit, integration, and learning (Challapally et al., 2025). Generic LLM chat experiences may look like successful implementations, but they “mask” weak impact when the underlying logic can’t be interrogated or audited. In other words, transformation is a function of approach, not model size.


If 70% of the challenge is change management, the solution isn’t merely to ship better models, but to make adoption intelligible, governable, and owned by the business. A glass-box neuro-symbolic layer that captures causal trails and embeds policy checks gives executives visibility and agency – the conditions for change.


The agency gap: why black-box AI makes change management harder

We have spent years talking about data sovereignty – where the data lives and how it is governed. We’ve spent far less time on human sovereignty: who inside the enterprise has the authority, tooling, and domain expertise to question, override, and redirect machine decisions. In most LLM-centric deployments, the answer is: no one. Reasoning is effectively opaque – an artifact of statistical associations rather than interpretable logic. Even with “human-in-the-loop,” the human is often reactive, not given the agency to interrogate an argument or a conclusion.


The majority of enterprise AI solutions are built on LLMs. These LLMs are inherently black boxes, routing decisions through opaque statistical associations (the weights). Enterprise AI sales teams ask executives to trust outputs they cannot trace, align, or redline – and then wonder why buying committees hesitate. Sales leaders will call this “resistance to change,” but to be more precise (and fair), it’s a lack of agency – human agency – within the enterprise AI solution that creates the hesitation. People will not own what they cannot question.


Glass-box reasoning as change management catalyst

A glass-box approach restores the missing (human) agency and brings trust back into the loop. Latent-Sense Technologies’ (LST) neuro-symbolic reasoning ecosystem (ReX, rxMaps, and Orchestrator) makes every decision traceable, transparent, explainable, and policy-aware. Decisions flow through policy-checked steps that legal, risk, and operations can see and adjust in real time. Compliance isn’t an afterthought; it’s designed in. Every action leaves a transparent audit trail that shows what happened, why it happened, and how to refine it next time.


Most importantly, the glass-box design empowers subject matter experts instead of sidelining them. Business and technical teams share the same view of how decisions are made, giving everyone – from executives and compliance officers to operators – a real seat at the table.
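

To illustrate the idea – in plain Python, not actual ReX, rxMaps, or Orchestrator code – here is a sketch of a policy-checked decision step with an explicit human review hook; the point is that the reviewer reads the same reasons the machine used.

```python
# Illustrative sketch in plain Python (not ReX, rxMaps, or Orchestrator code) of a
# decision flowing through policy-checked steps with an explicit human review hook.
from typing import Callable, Dict, List, Tuple

PolicyCheck = Callable[[Dict], Tuple[bool, str]]   # each check returns (passed, reason)

def run_step(name: str, proposal: Dict, checks: List[PolicyCheck],
             reviewer: Callable[[Dict, List[str]], bool]) -> Dict:
    """Apply every policy check; if any fails, hand the decision to a human reviewer."""
    reasons: List[str] = []
    approved = True
    for check in checks:
        passed, reason = check(proposal)
        reasons.append(reason)
        approved = approved and passed
    if not approved:
        # The reviewer sees the same reasons the machine used – shared view, real agency.
        approved = reviewer(proposal, reasons)
    return {"step": name, "approved": approved, "reasons": reasons}

# Example: risk reviews any pricing step that trips the (hypothetical) margin policy.
margin_check: PolicyCheck = lambda p: (p["margin"] >= 0.20,
                                       f"margin {p['margin']:.0%} vs 20% floor")
result = run_step("pricing", {"margin": 0.15}, [margin_check],
                  reviewer=lambda proposal, reasons: False)  # human declines after reading the reasons
print(result)
```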


This isn’t just about regulation. This is about visibility, accountability, responsible AI, and shared agency – the catalyst that turns resistance into trust and confidence.


Bridging machine cognition and human cognition

Neuro-symbolic AI is not just “more explainable AI.” It combines human reasoning (causal, policy-based, context-aware) with machine scale (speed, pattern recognition, data scope). When the reasoning behind decisions is explicit, auditable, and transparent, people stop fearing the machine and start embedding it into their evidence-informed decision making. That’s the moment AI stops being a pilot and, latently, becomes part of the organization’s decision fabric. 


Why this matters for strategic partners

Why does this matter to partners focused on scaling transformative AI solutions? Because scaling transformation requires scaling trust and alignment. BCG’s 10-20-70 isn’t a slogan; it’s an operating reality (Apotheker et al., 2025). The majority of the work (and the risk) lives in the human layer.


A glass-box AI approach is a practical accelerator for that 70%: control points are explicit, so legal and risk can engage early. It brings frontline subject matter experts into co-design, because the system reflects their reasoning rather than replacing it. And it shortens sales cycles – not by adding flash to the demo, but by reducing ambiguity in procurement, security review, and executive sign-off. When decision makers can see how the system thinks, they can see themselves in it.


This doesn’t negate the importance of the “10” and the “20.” You still need capable models with a robust stack. MIT’s report is explicit about the implementation edge: “External partnerships see twice the success rate of internal builds” (Challapally et al., 2025). That isn’t a burn on internal teams; it’s a recognition that scaling change requires capabilities and patterns most organizations don’t keep on the shelf.


If you want durable ROI, you must solve the human-machine boundary. In a glass-box neuro-symbolic architecture, the reasoning layer functions like an internal audit trail and an external assurance artifact. It’s evidence that you can hand to a regulator, a board committee, or a customer. It’s evidence that the system made the decision it was supposed to make, for the reasons your policies and regulations allow.
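

As a rough sketch of what such an assurance artifact could contain – the layout below is an assumption, not a regulatory standard or an LST export format – the snippet serializes a decision’s policy checks, reasoning steps, and accountable owner into a report that can be handed over and re-checked.

```python
# Hypothetical sketch: turning an internal reasoning trace into an external
# assurance artifact. The JSON layout is an illustrative assumption, not a standard.
import json
from datetime import datetime, timezone

def assurance_report(decision_id: str, policy_results: list, reasoning_steps: list,
                     owner: str) -> str:
    """Serialize what was decided, under which policies, and who is accountable."""
    report = {
        "decision_id": decision_id,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "accountable_owner": owner,
        "policies_evaluated": policy_results,   # e.g. [{"policy": "FT-12", "passed": True}]
        "reasoning_steps": reasoning_steps,     # the inspectable logic path
    }
    return json.dumps(report, indent=2)

print(assurance_report(
    decision_id="2025-11-0042",
    policy_results=[{"policy": "FT-12", "passed": True}],
    reasoning_steps=["Claim matched the low-risk profile per FT-12 thresholds"],
    owner="claims-ops-lead",
))
```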


That’s the change-management unlock: people stop fearing the machine and start co-reasoning and co-authoring with it.

For strategic partners, this reframes the go-to-market. Endless proofs-of-concept (POCs) rarely convert when the objection is trust. The right move is to define the glass-box AI layer as part of the solution from day one and contract for transformation, not just experimentation.


From POC to MSA – the strategy for transformational AI

POCs don’t fail because they don’t work; they fail because they don’t fit. They don’t fit with how decisions are justified, how risk is governed, and how accountability is shared. A glass-box AI stack lets you start where the risk really lives and enables traceability, policy alignment, and human oversight.


The path is straightforward:

  • Treat reasoning as product, not by-product.

  • Elevate policy to a first-class dependency of your AI.

  • Make human agency a design constraint, not a training afterthought.


If we do that, the “GenAI divide” won’t be a cautionary headline; it will be a closing gap. That’s the point of glass-box AI reasoning. It demystifies the system, turns leaders into editors instead of spectators, and makes the ledger of decisions part of day-to-day operations – not an after-action report. When executives can interrogate why and not just read what, they sign faster, deploy bolder, and scale further.


Transformation only happens when people feel agency in the change. Our job isn’t just to build AI that works – it’s to build AI people can interrogate, trust, and work with. 


References

Apotheker, J., Duranton, S., Lukic, V., de Bellefonds, N., Iyer, S., Bouffault, O., & de Laubier, R. (2025, January 15). From potential to profit: Closing the AI impact gap. Boston Consulting Group.

Boston Consulting Group. (2024, December 12). The leader’s guide to transforming with AI. Boston Consulting Group.

Challapally, A., Pease, C., Raskar, R., & Chari, P. (2025, July). The GenAI divide: State of AI in business 2025. MIT Media Lab, Project NANDA.


