Imagine a virtual workforce that never sleeps. As agentic AI capabilities expand, enterprise leaders are tempted by the promise, but is the reality as seamless as the vision?
For all the promise, the technology remains entangled in the complexities of regulation, data quality, governance, and cost. As organizations move beyond the early hype of generative AI, a more sober question emerges: what does enterprise-grade autonomy actually look like in practice?
That question took center stage in a recent roundtable hosted by the AIM Leaders Council, where senior leaders from banking, manufacturing, marine logistics, and consumer technology gathered to share experiences deploying agentic systems in their own organizations.
The session was moderated by Preeti Suryaprakash, Director at GE Appliances. Participants included Vikrant Aglawe, Director, Shared Service Centres at DP World, Kulbhooshan Patil, Head of Data Science and Analytics at Tata AIG, Gopinath Chidambaram, Technical Director at Ford, Saurabh Pramanick, Data Governance Officer at Bank Muscat, and Paddy Padiyar, Executive Director, Transformation at OCBC.
The Reality Check on Agentic AI
According to an Infosys report, only 12% of surveyed enterprises had moved beyond pilot stages as of 2025.
Despite the excitement surrounding autonomous systems, practical deployment remains limited. Most implementations are still confined to early-stage pilots or internal productivity tools, far from the end-to-end autonomy often envisioned. Full-scale replacement of human workflows is neither feasible nor desirable at present.
Agentic AI has shown promise in automating repetitive tasks such as document generation, compliance checks, and multi-agent information synthesis. Some environments are already seeing early productivity gains in areas like code migration, asset review processes, and internal orchestration. Yet the prevailing approach remains augmentation, not replacement.
These systems are consistently designed with human-in-the-loop safeguards, such as audit trails, override controls, and layered access permissions, as a structural necessity. Autonomy is limited by trust: systems may initiate actions, but final decisions still rest with humans, and that is unlikely to change in the short term.
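The pattern described here, an agent that proposes actions but cannot execute them without sign-off, can be made concrete in a few lines. The sketch below is illustrative, not any participant's actual system: the `ApprovalGate` class, its method names, and the example actions are all hypothetical, but the structure (pending proposal, human decision, append-only audit trail) matches the safeguards the panel described.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditEntry:
    """One immutable record of a proposed action and its human decision."""
    action: str
    proposed_by: str
    decision: str
    timestamp: str

@dataclass
class ApprovalGate:
    """Hypothetical human-in-the-loop gate: the agent may propose,
    but only a named human reviewer can approve or reject."""
    audit_log: list = field(default_factory=list)

    def propose(self, action: str, agent: str) -> dict:
        # Agent-initiated actions start life as pending proposals.
        return {"action": action, "agent": agent, "status": "pending"}

    def review(self, proposal: dict, approver: str, approve: bool) -> dict:
        decision = "approved" if approve else "rejected"
        # Every decision is logged with who made it and when.
        self.audit_log.append(AuditEntry(
            action=proposal["action"],
            proposed_by=proposal["agent"],
            decision=f"{decision} by {approver}",
            timestamp=datetime.now(timezone.utc).isoformat(),
        ))
        proposal["status"] = decision
        return proposal

gate = ApprovalGate()
p = gate.propose("refund order #1234", agent="billing-agent")
p = gate.review(p, approver="ops-lead", approve=True)
print(p["status"], len(gate.audit_log))  # approved 1
```

The key design choice is that execution authority never lives in the agent: the gate is the only path from "pending" to "approved", and every transition leaves a trail.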
Governance First, Deployment Second
Implementation is governed by the friction of internal processes. Even well-scoped use cases are subject to extensive review, often involving multiple oversight bodies that evaluate systems for architectural fit, data privacy, regulatory compliance, and cybersecurity readiness.
Far from being bottlenecks, these layers of governance are seen as essential. In high-stakes environments where AI outcomes carry financial or reputational risk, trust in the system must be established upfront. That trust is rarely based on performance alone; it depends on demonstrating transparency, control, and accountability at every stage.
This cautious approach has become the norm. Autonomy is being pursued, but under the close watch of committees, checklists, and control gates. The goal is no longer speed of deployment; it is sustainability.
Reframing ROI in the Age of AI
Return on investment is no longer evaluated through the narrow lens of cost savings or full-time equivalent reductions. The economics of agentic AI demand a more nuanced model. Implementation costs now include GPU infrastructure, model orchestration, licensing, and ongoing maintenance. These systems are capital-intensive, and traditional productivity metrics rarely capture the full picture.
As a result, teams are building their own estimation frameworks, some as simple as Excel models, to evaluate feasibility before committing to large-scale initiatives. Variables like token throughput, response latency, compute cost, and potential business impact are now factored in from the outset.
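A spreadsheet-style estimation model of the kind described can be sketched in a few lines of Python. All of the figures and function names below are illustrative placeholders, not numbers cited at the roundtable; the point is the shape of the calculation, weighing token spend and fixed infrastructure against a rough value proxy for time saved.

```python
def monthly_agent_cost(requests_per_day: float,
                       tokens_per_request: float,
                       cost_per_1k_tokens: float,
                       infra_cost_per_month: float) -> float:
    """Back-of-envelope monthly cost: token spend plus fixed infrastructure."""
    token_cost = (requests_per_day * 30 * tokens_per_request / 1000
                  * cost_per_1k_tokens)
    return token_cost + infra_cost_per_month

def monthly_value(decisions_per_day: float,
                  minutes_saved_per_decision: float,
                  loaded_cost_per_hour: float) -> float:
    """Value proxy: staff hours freed, priced at a loaded hourly rate."""
    hours_saved = decisions_per_day * 30 * minutes_saved_per_decision / 60
    return hours_saved * loaded_cost_per_hour

# Illustrative inputs only: 2,000 requests/day, 6k tokens each,
# $0.01 per 1k tokens, $4,000/month infrastructure.
cost = monthly_agent_cost(2000, 6000, 0.01, 4000)
value = monthly_value(2000, 3, 60)
print(f"cost=${cost:,.0f} value=${value:,.0f} net=${value - cost:,.0f}")
```

Even a toy model like this forces the questions the panel raised: throughput, per-request token volume, compute cost, and a defensible value proxy all have to be stated before a large-scale commitment, not after.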
ROI is being reframed. In many cases, the value lies in acceleration of decision cycles, reduction in turnaround times, improved data quality, or the ability to unlock previously infeasible workflows. These benefits are real, but less easily quantified.
Infrastructure and Data as Foundations
Agentic AI does not exist in a vacuum: it depends on clean data, modular architectures, and seamless orchestration. Building autonomous systems means first solving foundational challenges around data pipelines, metadata labeling, privacy-preserving transformations, and system interoperability.
In some environments, retrofitting AI into legacy systems proves more costly than starting from scratch. Where possible, organizations are opting to rebuild with agent-first design principles, ensuring that orchestration, composability, and observability are baked in from the start.
Even simple use cases, like auto-approving low-risk internal requests, require careful modeling, clear decision boundaries, and fallback mechanisms. These are not one-shot automation scripts. They are complex, multi-agent workflows that must remain interpretable, correctable, and resilient.
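The auto-approval example can illustrate what "clear decision boundaries and fallback mechanisms" mean in code. This is a minimal sketch under assumed rules; the categories, threshold, and field names are invented for illustration. The essential property is that anything outside the explicit boundary, including malformed input, falls back to a human rather than being approved by default.

```python
def route_request(request: dict) -> str:
    """Auto-approve only inside a narrow, explicit decision boundary;
    everything else escalates to a human reviewer (the fallback path)."""
    LOW_RISK_CATEGORIES = {"software-license", "desk-equipment"}  # illustrative
    AUTO_APPROVE_LIMIT = 200.0  # illustrative spend threshold

    within_boundary = (
        request.get("category") in LOW_RISK_CATEGORIES
        # Missing amount defaults to infinity, i.e. never auto-approved.
        and request.get("amount", float("inf")) <= AUTO_APPROVE_LIMIT
        and request.get("requester_verified", False)
    )
    return "auto-approved" if within_boundary else "escalate-to-human"

print(route_request({"category": "software-license", "amount": 49.0,
                     "requester_verified": True}))  # auto-approved
print(route_request({"category": "travel", "amount": 49.0,
                     "requester_verified": True}))  # escalate-to-human
```

Note the fail-safe defaults: an unknown category, a missing amount, or an unverified requester all land on the human path, which is what keeps a workflow like this correctable rather than merely fast.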
A Future of Ambiguity and Agency
As the capabilities of agentic AI improve, the line between automation and autonomy will blur. The systems are designed to reason, act, and even debate. And while the current state still depends heavily on human oversight, the direction is clear. Systems will take on more responsibility for deciding what to do next.
This shift introduces new risks. One is the erosion of human judgment, especially as systems become more persuasive and less transparent. Another is the subtle deskilling of teams, as the agent becomes the expert and the human merely the approver. Over time, the role of the human may shift from decision-maker to escalation point: a passive participant in workflows they no longer fully understand.
There is also a cultural risk: that human capabilities are dismissed as inefficient or obsolete in comparison to AI. Even when framed as augmentation, the tools being built will force a redefinition of human value in organizations. The open question is how far responsibility should shift from humans to machines.
The Path Forward
Agentic AI is a complex capability that requires infrastructure, governance, and organizational clarity. The most promising deployments are those where the technology is matched with disciplined scoping, clear oversight, and a bias toward augmentation over automation.
This means redefining the relationship between people and machines: one where agency is shared, responsibility is distributed, and decision-making is redesigned from the ground up. The future of agentic AI belongs to organizations that blend discipline with imagination.