The boardroom conversations have shifted. Eighteen months ago, executives competed to announce their GenAI initiatives: chatbots, copilots, and “AI-first” strategies dominated quarterly calls. Today, those same leaders are asking harder questions: Where’s the ROI? Why do our AI systems still hallucinate? How much are we actually spending on this? The honeymoon phase is over. In Gartner’s 2025 Hype Cycle, generative AI has entered the “Trough of Disillusionment.” As The Economist summed up the mood among Fortune 500 executives: “We’ve spent money on this, but it’s just not working.”
The organizations succeeding today share a common thread. They focused on building three critical foundations that most companies overlooked: data infrastructure that can reliably feed AI systems, targeted use cases with clear metrics and measurable impact, and comprehensive training programs paired with governance frameworks that ensure responsible scaling.
Foundations First: Data Quality and Infrastructure Matter More Than Models
Most companies got their GenAI strategy backwards; they rushed to build chatbots before fixing their basic data problems. Those ahead of the curve understand that reliable AI depends on clean data pipelines, retrieval systems, and governance frameworks. Without these foundations, even the smartest AI models produce inconsistent, unreliable results that frustrate users and waste money.
BNP Paribas exemplifies this infrastructure-first approach. Earlier this year, they launched an internal “LLM as a Service” platform operated by core IT teams, providing security, multi-model access, and governed rollouts across all business units. Rather than letting each department build isolated AI experiments, they created shared, compliant infrastructure that business teams can plug into safely. This platform approach enables the bank to scale AI capabilities quickly while maintaining security and compliance standards, proving that boring infrastructure work delivers better results than exciting pilot projects.
Targeted Use Cases
AI success comes not from scale but from focus: zeroing in on high-value problems with measurable impact. Unfocused AI adoption spreads resources too thin, increases the risk of hallucinations, and makes ROI difficult to demonstrate. Companies that succeed pick their battles carefully, targeting use cases with clear metrics and structured data.
JPMorgan demonstrates this focused approach perfectly. Instead of trying to revolutionize everything at once, they built an in-house coding assistant specifically for their engineering teams. The results are concrete: 20% productivity gains across tens of thousands of developers. This wasn’t about transforming the entire bank overnight; it was about solving one specific problem (developer efficiency) extremely well. The targeted approach allowed them to measure impact precisely, refine the system based on real usage, and build confidence for future AI investments. Across industries, by contrast, leaders rushed to experiment with GenAI, but excitement often replaced clarity, and pilots rarely delivered the expected impact.
In my own experience, I worked on a GenAI use case whose slow progress underscored a critical truth: the problem itself lacked clarity. This isn’t an isolated case. Across industries, many business cases are framed as vaguely as “Use GenAI to transform the platform.” Such positioning may sound ambitious, but it is shallow. One-shot, fix-all approaches fail. What works is clarity and precision. Without them, GenAI projects turn into costly experiments instead of sources of real business value.
Skills and Governance: Building Trust Through Training and Oversight
The AI skills gap and weak governance are the biggest roadblocks to scaling GenAI successfully. Most organizations underestimate how much training their workforce needs to use AI tools effectively, while simultaneously lacking the oversight frameworks to prevent costly mistakes. Without proper AI literacy, employees either avoid the technology entirely or use it incorrectly, leading to poor outcomes that damage confidence in future AI investments. Meanwhile, weak governance creates compliance risks and operational failures that can derail entire programs.
Barclays demonstrates how to address both challenges simultaneously through their GenAI Center of Excellence. This organizational structure combines three critical functions: enablement through systematic training for thousands of colleagues, governance through reusable components and enterprise standards, and innovation through structured hackathons that channel creativity within safe boundaries. The Center of Excellence model works because it treats AI literacy as a core business skill that requires formal development, not something employees can figure out on their own. This need for structure and maturity is not unique to AI; it echoes the trajectory of every major technological leap in history.
Fire enabled civilization but also caused devastating wildfires. The printing press spread knowledge and propaganda equally. Electricity promised miraculous cures before finding its true purpose. The dot-com boom crashed before the internet reshaped commerce. Each breakthrough demanded time, discipline, and hard-won wisdom to harness safely.
GenAI follows this same pattern. The initial excitement has given way to practical challenges: hallucinations, infrastructure gaps, and unclear returns. But the companies succeeding today aren’t abandoning the technology; they’re building it properly. They invest in data quality before deploying models. They focus on specific problems rather than platform transformation. They train their people and establish governance frameworks that ensure responsible scaling.
Like fire and electricity, GenAI is transformative but unruly. It demands guardrails, not blind optimism. The future is here, but it doesn’t come on a plate, cooked and served.