Adoption of Safety by Design principles


OpenAI, along with industry leaders including Amazon, Anthropic, Civitai, Google, Meta, Metaphysic, Microsoft, Mistral AI, and Stability AI, has committed to implementing robust child safety measures in the development, deployment, and maintenance of generative AI technologies, as articulated in the Safety by Design principles. The initiative is led by Thorn, a nonprofit organization dedicated to defending children from sexual abuse, and All Tech Is Human, an organization dedicated to solving the complex problems of technology and society, and it aims to mitigate the risks that generative AI poses to children. By adopting the overarching Safety by Design principles, OpenAI and our peers are ensuring that children's safety is a priority at every stage of AI development. To date, we have made significant efforts to reduce the potential for our models to generate content that is harmful to children, have set age restrictions for ChatGPT, and have actively engaged with the National Center for Missing and Exploited Children (NCMEC), the Tech Coalition, and other government and industry stakeholders on child protection issues and improvements to reporting mechanisms.

As part of this Safety by Design effort, we commit to:

  1. Develop: Develop, build and train generative AI models that proactively address child safety risks.

    • Responsibly source our training data sets, detect and remove child sexual abuse material (CSAM) and child sexual exploitation material (CSEM) from training data, and report any confirmed CSAM to the relevant authorities.

    • Incorporate feedback loops and iterative stress testing strategies into our development process.

    • Implement solutions to address adversarial misuse.
  2. Deploy: Release and distribute generative AI models after they have been trained and evaluated for child safety, providing protections throughout the process.

    • Combat and respond to abusive content and conduct, and incorporate prevention efforts.

    • Encourage developer ownership of safety by design.
  3. Maintain: Maintain model and platform safety by continuing to actively understand and respond to child safety risks.

    • Remove new AI-generated child sexual abuse material (AIG-CSAM) created by bad actors from our platforms.

    • Invest in research and future technological solutions.

    • Fight CSAM, AIG-CSAM, and CSEM on our platforms.

This commitment marks an important step in preventing the misuse of AI technologies to create or disseminate AI-generated child sexual abuse material (AIG-CSAM) and other forms of sexual harm against children. As part of the working group, we have also agreed to publish annual progress updates.


