Agentic AI systems—AI systems that can pursue complex goals with limited direct supervision—are likely to be broadly beneficial if we can integrate them responsibly into our society. While such systems have substantial potential to help people more efficiently and effectively achieve their own goals, they also create risks of harm. In this white paper, we propose a definition of agentic AI systems and of the parties in the agentic AI system life-cycle, and highlight the importance of agreeing on a set of baseline responsibilities and safety best practices for each of these parties. As our primary contribution, we offer an initial set of practices for keeping agents' operations safe and accountable, which we hope can serve as building blocks in the development of agreed baseline best practices. We enumerate the questions and uncertainties around operationalizing each of these practices that must be addressed before such practices can be codified. We then highlight categories of indirect impacts from the wide-scale adoption of agentic AI systems, which are likely to necessitate additional governance frameworks.