Enterprises in the Asia-Pacific (APAC) region are rapidly adopting AI agents, with IDC reporting that 70% expect these systems to disrupt business models within 18 months. As of 2025, two in five companies are already deploying agentic AI, and more than half plan to integrate it by 2026. While these autonomous tools accelerate software development and innovation, they also introduce new layers of risk, from security threats to compliance challenges.
Recent findings highlight the scale of concern. Lenovo research shows that only 48% of IT leaders feel confident managing AI-related risks. Meanwhile, more than 60% believe AI agents introduce new forms of insider threats. A third of APAC firms cite security and data privacy as top concerns, but the risks extend deeper into software supply chains, governance, and compliance.
Key risks include:
- Security vulnerabilities: Compromised AI agents can spread malicious activity across interconnected systems.
- Software supply chain threats: Open-source packages and pretrained models raise exposure to poisoned code or malware.
- Governance challenges: Black-box decision-making and embedded bias hinder explainability and accountability.
- Shadow AI: Unsanctioned agents operating without oversight create audit blind spots.
Policymakers are responding with stricter requirements. India, for instance, is pushing for mandatory AI bills of materials to ensure traceability of models and outputs. Enterprises must now prove not only what agents produced, but also how they reached their conclusions and whether the results meet compliance standards.
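To make the traceability idea concrete, here is a minimal sketch of what an AI bill of materials record might capture. The schema, field names, and `build_aibom` helper are illustrative assumptions, not a mandated or standardized format; the point is simply that each agent output is linked back to the model and data that produced it.

```python
import hashlib
import json


def build_aibom(model_name, model_version, training_data_sources, output_artifacts):
    """Assemble a minimal AI bill of materials (AIBOM) record.

    All field names are illustrative, not a published standard: an AIBOM
    lists the model, its data lineage, and the outputs it produced so that
    agent-generated artifacts can be traced back to their origins.
    """
    record = {
        "model": {"name": model_name, "version": model_version},
        "training_data": sorted(training_data_sources),
        "outputs": [
            {
                "artifact": path,
                # A content hash lets auditors later verify that the
                # artifact on disk matches what the agent produced.
                "sha256": hashlib.sha256(content.encode()).hexdigest(),
            }
            for path, content in sorted(output_artifacts.items())
        ],
    }
    return json.dumps(record, indent=2)


# Hypothetical agent run: one generated file, two data sources.
record = build_aibom(
    model_name="example-code-agent",
    model_version="1.2.0",
    training_data_sources=["internal-repo-snapshot", "oss-corpus-2024"],
    output_artifacts={"src/service.py": "def handler(): ..."},
)
print(record)
```

In practice such records would be signed and stored in a system of record, so that an auditor can answer "which model wrote this file, and from what data?" without relying on the agent itself.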
Experts suggest three strategies for sustainable adoption: establishing trusted systems of record for agents, enabling hybrid human–agent development oversight, and cultivating “agentic engineers” who merge coding, AI, and compliance skills.
As APAC enterprises navigate this shift, the focus is moving from speed of deployment to the secure, explainable, and compliant integration of AI agents across the software development lifecycle.