Agentic artificial intelligence (AI) is the next frontier for companies and organizations adopting AI. Agentic AI can select and carry out actions on a user's behalf based on instructions, context, and the permissions it has been configured to use. As organizations integrate these systems and capabilities, they face an additional layer of legal risks and governance concerns.
As companies begin to use agentic AI, they should consider key risk management practices to ensure responsible adoption. This includes aligning with the emerging best practices and standards being studied and promoted by the National Institute of Standards and Technology (NIST) for agentic AI, including the Center for AI Standards and Innovation (CAISI) AI Agent Standards Initiative and the National Cybersecurity Center of Excellence project addressing Software and AI Agency Identity and Authorization. For example, organizations using agentic AI should look more closely at how the authority of AI agents is defined, constrained, and supervised, and how actions taken by AI agents are documented, traceable, and attributable. Organizations should also account for how agentic AI use cases may implicate existing obligations and internal controls in areas such as cybersecurity, privacy, recordkeeping, and third-party risk management.
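As a purely illustrative sketch, and not a prescribed implementation, the Python below uses hypothetical names (AgentPolicy, SupervisedAgentRunner, AuditRecord) to show one way an engineering team might translate these governance concepts into code: an explicit allow-list that defines and constrains an agent's authority, an escalation path for actions that require human approval, and an audit record that makes every attempted action documented, traceable, and attributable.

```python
# Hypothetical sketch: constraining an AI agent's authority and keeping an
# attributable audit trail. All names and structures here are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, Callable
import json
import uuid


@dataclass
class AgentPolicy:
    """Defines and constrains what a given agent is authorized to do."""
    agent_id: str
    allowed_actions: set[str]                      # explicit allow-list, not a deny-list
    requires_human_approval: set[str] = field(default_factory=set)


@dataclass
class AuditRecord:
    """One traceable, attributable record of an attempted agent action."""
    record_id: str
    agent_id: str
    action: str
    params: dict[str, Any]
    outcome: str
    timestamp: str


class SupervisedAgentRunner:
    def __init__(self, policy: AgentPolicy, audit_sink: Callable[[str], None] = print):
        self.policy = policy
        self.audit_sink = audit_sink

    def execute(self, action: str, params: dict[str, Any], handler: Callable[..., Any]) -> Any:
        record = AuditRecord(
            record_id=str(uuid.uuid4()),
            agent_id=self.policy.agent_id,
            action=action,
            params=params,
            outcome="pending",
            timestamp=datetime.now(timezone.utc).isoformat(),
        )
        try:
            if action not in self.policy.allowed_actions:
                record.outcome = "denied: outside defined authority"
                raise PermissionError(record.outcome)
            if action in self.policy.requires_human_approval:
                record.outcome = "escalated: human approval required"
                return record.outcome
            result = handler(**params)
            record.outcome = "completed"
            return result
        finally:
            # Every attempt is logged, whether it completed, was denied, or was escalated.
            self.audit_sink(json.dumps(record.__dict__))


# Example usage: the agent may draft emails on its own, but payments are escalated.
policy = AgentPolicy(
    agent_id="procurement-agent-01",
    allowed_actions={"draft_email", "send_payment"},
    requires_human_approval={"send_payment"},
)
runner = SupervisedAgentRunner(policy)
runner.execute("draft_email", {"to": "vendor@example.com"}, handler=lambda to: f"draft for {to}")
runner.execute("send_payment", {"amount": 500}, handler=lambda amount: "paid")
```

In a sketch like this, the allow-list and approval set make an agent's scope of authority an explicit, reviewable artifact, and the structured audit records give compliance and legal teams something they can map to recordkeeping, cybersecurity, and third-party risk obligations.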
This post highlights practical steps organizations can take proactively to address these considerations in deploying agentic AI.