Practices for governing agentic AI systems


Agentic AI systems—artificial intelligence systems that can pursue complex goals with limited direct supervision—are likely to be broadly useful if we can integrate them responsibly into our society. While such systems have substantial potential to help people achieve their own goals more effectively, they also create risks of harm. In this white paper, we propose a definition of agentic AI systems, identify the parties in the agentic AI system life cycle, and highlight the importance of agreeing on a set of baseline responsibilities and safety best practices for each of these parties. As our primary contribution, we offer an initial set of practices for keeping agents' operations safe and accountable, which we hope can serve as building blocks in the development of agreed-upon baseline best practices. We enumerate the questions and uncertainties around operationalizing each of these practices that must be addressed before such practices can be codified. We then highlight categories of indirect impacts from the wide-scale adoption of agentic AI systems, which are likely to require additional governance frameworks.