Part of our Loyal Agents collaboration with the Consumer Reports Innovation Lab

How do we authenticate AI agents, control what they can access, and ensure accountability for their actions?

AI agents are evolving rapidly and increasingly act on our behalf across a wide range of tasks, which makes limiting security risks and maintaining trust critical.

This project addresses the full spectrum of identity challenges for AI agents, from immediate solutions using existing standards to forward-looking problems that will define the next generation of autonomous systems. It brings together experts in cryptography, distributed systems, policy frameworks, and AI safety to develop standards for agents that reliably act in their users’ best interests.

These agents must also be able to operate at scale in enterprise, consumer, and ultimately cyber-physical contexts. By establishing robust authentication, authorization, and audit mechanisms now, we can enable the AI agent ecosystem to flourish without sacrificing privacy.
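To make those three mechanisms concrete, here is a minimal, hypothetical sketch in Python: a user issues a narrowly scoped, time-limited credential to an agent (authorization), a service verifies its signature before honoring a request (authentication), and every decision is appended to a log (audit). The `issue_agent_credential` and `authorize` helpers, the field names, and the shared signing key are illustrative assumptions, not part of any standard this project defines.

```python
# Illustrative only: scoped, signed, short-lived delegation plus an audit trail.
import hmac, hashlib, json, time, uuid

SIGNING_KEY = b"demo-key-held-by-the-identity-provider"  # placeholder secret

def issue_agent_credential(user_id: str, agent_id: str, scopes: list[str], ttl_s: int = 900) -> dict:
    """Mint a signed grant that delegates only `scopes` to the agent, briefly."""
    claims = {
        "sub": agent_id,           # the agent acting on the user's behalf
        "act_for": user_id,        # the human principal being represented
        "scopes": scopes,          # e.g. ["orders:create"], never blanket access
        "exp": int(time.time()) + ttl_s,
        "jti": str(uuid.uuid4()),  # unique id so individual grants can be revoked
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def authorize(credential: dict, requested_scope: str, audit_log: list) -> bool:
    """Verify the credential and record the decision before acting on it."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    ok = (
        hmac.compare_digest(expected, credential["sig"])       # authentic
        and credential["claims"]["exp"] > time.time()           # not expired
        and requested_scope in credential["claims"]["scopes"]   # within delegated scope
    )
    audit_log.append({  # who asked for what, on whose behalf, and the outcome
        "ts": time.time(),
        "agent": credential["claims"]["sub"],
        "user": credential["claims"]["act_for"],
        "scope": requested_scope,
        "allowed": ok,
    })
    return ok

audit_log: list[dict] = []
cred = issue_agent_credential("user-42", "shopping-agent-7", ["orders:create"])
print(authorize(cred, "orders:create", audit_log))    # True: within the delegated scope
print(authorize(cred, "payments:refund", audit_log))  # False: never delegated
```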

Visit our Loyal Agents page
Identity Management for Agentic AI

Lead Editor: Tobin South

The rapid rise of AI agents presents urgent challenges in authentication, authorization, and identity management. Current agent-centric protocols, such as the Model Context Protocol (MCP), highlight the demand for clarified best practices in authentication and authorization. Looking ahead, ambitions for highly autonomous agents raise complex long-term questions about scalable access control, agent-centric identities, AI workload differentiation, and delegated authority. This whitepaper is written for stakeholders at the intersection of AI agents and access management. It outlines the resources already available for securing today’s agents and presents a strategic agenda for addressing the foundational authentication, authorization, and identity problems that will be pivotal for tomorrow’s widespread autonomous systems.
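One of the long-term questions the whitepaper raises, delegated authority, can be illustrated with a small hypothetical sketch: authority flows from a user through a chain of agents, and each hop may only narrow the scopes it received, so no sub-agent ever holds more authority than the human principal originally granted. The `Delegation` shape and `verify_chain` helper below are illustrative assumptions, not a proposed standard.

```python
# Illustrative only: verifying a user -> agent -> sub-agent delegation chain
# in which scopes can only narrow at each hop.
from dataclasses import dataclass

@dataclass
class Delegation:
    issuer: str          # who hands authority down (the user or a parent agent)
    subject: str         # who receives it (an agent or sub-agent)
    scopes: frozenset    # authority granted at this hop

def verify_chain(chain: list[Delegation], root_user: str, requested_scope: str) -> bool:
    """Accept only if the chain starts at the user, is contiguous, and each hop narrows scope."""
    if not chain or chain[0].issuer != root_user:
        return False
    allowed = chain[0].scopes
    for prev, nxt in zip(chain, chain[1:]):
        if nxt.issuer != prev.subject:   # broken chain of custody
            return False
        if not nxt.scopes <= allowed:    # attempted scope escalation
            return False
        allowed = nxt.scopes
    return requested_scope in allowed

chain = [
    Delegation("user-42", "planner-agent", frozenset({"calendar:read", "email:send"})),
    Delegation("planner-agent", "scheduler-subagent", frozenset({"calendar:read"})),
]
print(verify_chain(chain, "user-42", "calendar:read"))  # True: contiguous, narrowed chain
print(verify_chain(chain, "user-42", "email:send"))     # False: dropped at the second hop
```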