Kyndryl has launched a new policy-as-code framework designed to impose strict governance on agentic artificial intelligence systems operating in heavily regulated industries. The solution converts corporate policies and regulatory mandates into executable code, creating automated guardrails that monitor and restrict AI agent actions within financial services, public sector, and supply chain operations.
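In rough terms, "policy-as-code" means a written rule such as "payments above a threshold require human approval" is translated into an executable check that runs before an agent's action is carried out. The following is a minimal sketch of that idea, not Kyndryl's actual implementation; the action fields, the $10,000 threshold, and the policy function are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    kind: str                       # e.g. "payment", "query"
    amount: float = 0.0
    approved_by_human: bool = False

def payment_limit_policy(action: AgentAction) -> tuple[bool, str]:
    """Allow payments over the limit only with explicit human approval."""
    if (action.kind == "payment"
            and action.amount > 10_000
            and not action.approved_by_human):
        return False, "payment exceeds limit without human approval"
    return True, "ok"

# A guardrail layer would evaluate each proposed action against such rules,
# block it on a False result, and record the reason for the audit trail.
allowed, reason = payment_limit_policy(AgentAction("payment", amount=25_000))
```

Because the rule is ordinary code, it can be version-controlled, tested, and audited like any other software artifact, which is what distinguishes this approach from written policy documents.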
The IT infrastructure provider reports that nearly one-third of its clients identify regulatory compliance as a primary obstacle to expanding their technology investments. Kyndryl's system addresses this by logging every agent decision and preventing unauthorized steps, aiming to deliver what senior vice president Ismail Amla calls "the structure customers need" for responsible AI adoption.
This development arrives as enterprises increasingly deploy AI agents beyond basic assistance functions into core operational workflows where they initiate transactions and coordinate complex processes. Industry observers note that AI capabilities are advancing faster than corresponding management frameworks, creating potential accountability gaps when autonomous systems shift from supportive copilots to independent decision-makers.
Technology analysts emphasize the need for robust control mechanisms as agentic AI proliferates. Some experts advocate for dedicated "agent manager" roles to oversee AI systems, while others propose technical solutions like deterministic control planes that intercept agent outputs before they affect production environments. "Trust is not a feeling; it is a code module," noted one industry consultant.
Kyndryl's approach focuses on ensuring deterministic execution where AI agents operate strictly within predefined parameters. The company manages approximately 190 million automated processes monthly and positions its new capability as essential for moving AI deployments from experimental pilots to production systems in high-stakes scenarios.
The policy-as-code framework also aims to mitigate risks associated with AI hallucinations by blocking unpredictable or non-compliant actions within workflows. However, the company acknowledges that coded policies require careful design and ongoing maintenance to avoid either excessive restriction or inadequate oversight as AI models and business conditions evolve.
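One common way to block unpredictable steps inside a workflow is to model the permitted process as a small state machine and reject any agent step with no defined transition. The sketch below assumes a simple invoice-payment flow; the states and transitions are invented for illustration and are not taken from Kyndryl's product.

```python
# Map of (current state, proposed action) -> next state. Any pair not listed
# here is treated as non-compliant and refused, however plausible the agent's
# output looks.
TRANSITIONS = {
    ("start", "validate_invoice"):     "validated",
    ("validated", "schedule_payment"): "scheduled",
    ("scheduled", "confirm"):          "done",
}

def step(state: str, action: str) -> str:
    """Advance the workflow, or raise if the agent proposes an undefined step."""
    nxt = TRANSITIONS.get((state, action))
    if nxt is None:
        raise ValueError(f"action {action!r} not permitted in state {state!r}")
    return nxt

state = step("start", "validate_invoice")
state = step(state, "schedule_payment")
```

A hallucinated step such as `step("start", "confirm")` fails immediately, which is the containment behavior described above; the maintenance burden the company acknowledges shows up here as keeping the transition table in sync with the real business process.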