As AI agents shift from simple assistants to autonomous actors inside business systems, a critical governance question is emerging: who is responsible when an AI agent makes a decision that causes harm? Experts warn that enterprises are moving faster than the legal and security frameworks needed to govern this new class of digital actors.
Speaking with iTNews Asia, Tobin South — Head of AI Agents at WorkOS and Co-Chair of the OpenID Foundation’s AI Identity Management Community Group — said most AI agents today operate by impersonating human users, accessing systems in ways indistinguishable from actual employee actions. Without explicit identity separation, he warns, companies face “massive accountability blind spots.”
South draws parallels to the early internet, when poorly secured websites were routinely compromised and defaced. But unlike website defacement, compromised AI agents could autonomously initiate financial fraud, alter medical records, or spawn additional agents before detection.
Key Highlights:
- Impersonation-based access makes it impossible to tell whether an action was taken by a human or an agent (see the token sketch after this list).
- Fragmented identity systems across vendors multiply vulnerabilities.
- Lack of audit trails means malicious agent activity can be blamed on innocent users.
- Recursive delegation — agents creating or directing other agents — breaks traditional permission models.
- Legacy systems in finance and healthcare are unprepared for machine-speed decision-making.
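The sketch below illustrates the impersonation problem in miniature. It is an assumption-laden example, not anything from the article or WorkOS: the "act" (actor) claim follows the OAuth 2.0 Token Exchange pattern (RFC 8693), and the field names and identifiers are hypothetical. The point is that an audit record can only name the real actor when the token distinguishes the agent from the human it acts for.

```python
import json
from datetime import datetime, timezone

# Impersonation: the token is indistinguishable from the employee's own session.
impersonated_token = {"sub": "employee:alice"}

# Agent-native identity: the subject is still the authorising human, but the
# actor claim records which agent actually performed the call.
delegated_token = {
    "sub": "employee:alice",                 # on whose behalf
    "act": {"sub": "agent:invoice-bot-42"},  # who actually acted
    "scope": "payments:read",
}

def audit_entry(token: dict, action: str) -> str:
    """Build an audit record that names the real actor when one is present."""
    actor = token.get("act", {}).get("sub", token["sub"])
    return json.dumps({
        "time": datetime.now(timezone.utc).isoformat(),
        "subject": token["sub"],
        "actor": actor,
        "action": action,
    })

print(audit_entry(impersonated_token, "approve_invoice"))  # actor == subject: blind spot
print(audit_entry(delegated_token, "approve_invoice"))     # actor is the agent
```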
To close this widening accountability gap, South argues that identity must be rebuilt for the agent era. That includes agent-native identities, real-time permission delegation, and cross-industry trust standards that ensure every AI action is cryptographically tied to both the agent and its authorising human.
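One way to picture that binding is to sign every action record with a key tied to the agent, with the authorising human named inside the signed payload. The sketch below is an illustrative assumption, not South's or any vendor's design: it uses HMAC from the Python standard library for brevity, where a real deployment would more likely use asymmetric signatures so verifiers never hold the signing secret.

```python
import hashlib
import hmac
import json

AGENT_SECRET = b"demo-per-agent-signing-key"  # hypothetical; provisioned per agent

def signed_action(agent_id: str, human_id: str, action: dict) -> dict:
    """Produce an action record whose signature covers the agent, the human, and the payload."""
    record = {"agent": agent_id, "authorised_by": human_id, "action": action}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(AGENT_SECRET, payload, hashlib.sha256).hexdigest()
    return record

def verify(record: dict) -> bool:
    """Recompute the signature over the unsigned fields and compare."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(AGENT_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

record = signed_action("agent:invoice-bot-42", "employee:alice",
                       {"type": "approve_invoice", "id": "INV-7"})
assert verify(record)
print(record)
```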
He warns that without immediate action, organisations risk embedding untraceable, ungovernable agents into millions of workflows.
Source:
https://www.itnews.asia/news/when-ai-agents-act-who-islegallyresponsible-621910

