New Delhi: The roundtable on “Agentic AI” on the fifth day of India AI Impact Summit 2026 brought together global tech industry, policy, and legal leaders to examine a pivotal shift in Artificial Intelligence (AI) from systems that support human decision-making to autonomous agents capable of executing complex tasks across enterprises. Structured across two high-level panels on Business & Industry and Policy Perspectives, the discussion focused on how this transition is redefining safety, accountability, cybersecurity, and public trust, even as it unlocks new productivity and innovation gains.
The first panel presented real-world deployment scenarios and insights from sectors including payments, cloud infrastructure, cybersecurity and intelligent product design. Speakers highlighted that as AI agents begin to operate in interconnected environments, the scale and impact of both value creation and risk increase significantly, making verification, data governance, system security and human oversight fundamental to adoption.
United States Patent and Trademark Office (USPTO) Director Austin Mayron highlighted the role of standards-led collaboration in enabling responsible innovation. Emphasising the government’s function as an enabler rather than a distant regulator, he said the government’s approach should be to learn directly from real-world deployment challenges so that standards and guidance can unlock that potential responsibly.
Synopsys Innovations Group Senior Vice President and Member Prith Banerjee drew attention to the shift from digital AI to safety-critical physical systems, noting that responsibility now extends to software-defined cars, aircraft, and other real-world infrastructure. Stressing rigorous engineering validation, he cautioned that in such environments the consequences can be dangerous, making ‘responsible and safe agentic engineering’ essential before deployment.
Mastercard’s Chief Privacy Officer Caroline Louveaux positioned trust as the foundation for scaling autonomous systems in financial services. Outlining key guardrails, she said, “Autonomy can only scale if there is trust,” which requires verified agent identity, security by design, clear consumer intent, and full traceability so that adoption becomes possible at scale.
NetApp’s Chief Product Officer Syam Nair highlighted the amplified impact of errors in interconnected agent ecosystems, noting that “the blast radius of mistakes becomes much larger” when agents operate across networks. He stressed that strong data governance, defined operating boundaries, and clear accountability are critical because “agents have no empathy or situational judgment,” leaving responsibility with the enterprise.
The second panel shifted to governance and regulatory questions, emphasising interoperable standards, adaptive regulatory models, global coordination, and operational clarity for industry. Across both discussions, participants converged on the central message that responsible scale will depend on embedding trust, security, and human-centric design into the architecture of agentic systems from the outset.

Adobe Inc’s Public Policy Director Jennifer Mulveny underscored that governance must remain anchored in human outcomes even as policy begins to regulate complex technological systems, ensuring that innovation continues to advance people-first goals.
Google Inc’s Manager for AI & Emerging Tech Policy Ellie Sakhaee pointed to the still-evolving risks of multi-agent interaction, noting that the risk surface changes significantly once agents begin operating together. She emphasised the need for joint work across academia, industry and governments to develop benchmarks and evaluation methods before large-scale deployment.
Palo Alto Networks’ Assistant General Counsel for Public Policy & Government Affairs Sam Kaplan positioned security as the foundational layer for adoption, observing that agentic AI transforms risk from a ‘two-dimensional’ to a ‘three-dimensional’ challenge with potential real-world consequences. He stressed that without securing models and agents, it is very difficult to scale AI safely.
Salesforce Director of Global Public Policy Danielle Gilliam-Moore expanded the definition of governance beyond regulation to include standards, global norms and internal assurance mechanisms. Highlighting the role of sectoral regulators and standards bodies, she noted that governance “is much broader than regulation” and is critical to enabling adoption while managing risk.
Cloudflare Director & Head Asia Pacific, Japan and China Public Policy Carly Ramsey highlighted accessibility, open standards and regulatory harmonisation as prerequisites for inclusive agentic AI. Stressing the cross-border nature of technology, she noted that technology does not stop at borders, making compatible national frameworks essential for trustworthy global scale.
ServiceNow’s Global Head of Government Affairs & Public Policy Combiz Richard Abdolrahimi emphasised the industry’s need for operational clarity as technology evolves. Calling for practical implementation tools rather than abstract principles, he said what organisations require are clear standards, practical playbooks, and model frameworks that can evolve alongside innovation.
The two round-table discussions brought industry and policy leaders together to discuss the parallel challenges of deploying and governing agentic AI, acknowledging that these systems are already moving into real-world use.