As always, you're in control of your data with ChatGPT. Your conversations with GPTs are not shared with builders. If a GPT uses third-party APIs, you choose whether data can be sent to those APIs. When builders customize their own GPT with actions or knowledge, the builder can choose whether user conversations with that GPT can be used to improve and train our models. These choices build on the existing privacy controls users have, including the option to opt your entire account out of model training.
We have set up new systems to help review GPTs against our usage policies. These systems build on our existing mitigations and aim to prevent users from sharing harmful GPTs, including those that involve fraudulent activity, hateful content, or adult themes. We've also taken steps to build user trust by allowing builders to verify their identity. We will continue to monitor and learn how people use GPTs, and to update and strengthen our safety mitigations. If you have concerns about a specific GPT, you can also use our reporting feature on the GPT share page to let our team know.
GPTs will continue to get more useful and smarter, and eventually you'll be able to let them take on real tasks in the real world. In the field of artificial intelligence, these systems are often referred to as “agents”. We think it's important to move toward this future gradually, because it will require careful technical and safety work, and time for society to adapt. We've thought deeply about the societal implications and will have more analysis to share soon.