We recognize that generating speech that resembles people's voices carries serious risks, which are especially salient in an election year. We are engaging with U.S. and international partners from across government, media, entertainment, education, civil society, and beyond to ensure we are incorporating their feedback as we build.
The partners testing Voice Engine today have agreed to our usage policies, which prohibit the impersonation of another individual or organization without consent or legal right. In addition, our terms with these partners require explicit and informed consent from the original speaker, and we do not allow developers to build ways for individual users to create their own voices. Partners must also clearly disclose to their audience that the voices they are hearing are AI-generated. Finally, we have implemented a set of safety measures, including watermarking to trace the origin of any audio generated by Voice Engine, as well as proactive monitoring of how it is being used.
We believe that any broad deployment of synthetic voice technology should be accompanied by voice authentication experiences that verify the original speaker is knowingly adding their voice to the service, and a voice ban list that detects and prevents the creation of voices that are too similar to prominent figures.
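As a rough illustration of the ban-list idea, a system could represent each voice as a fixed-length speaker embedding and reject any new voice whose embedding is too close to one on the list. The sketch below is a minimal assumption-laden example: the embedding vectors, the cosine-similarity metric, the threshold value, and the function names are all illustrative, not a description of any actual implementation.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors: 1.0 means identical
    # direction, 0.0 means orthogonal (unrelated) voices.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def violates_ban_list(candidate, banned_embeddings, threshold=0.85):
    # Reject the candidate voice if it is too similar to any banned voice.
    # The 0.85 threshold is an arbitrary placeholder for illustration.
    return any(
        cosine_similarity(candidate, banned) >= threshold
        for banned in banned_embeddings
    )

# Toy 3-dimensional embeddings standing in for real speaker embeddings.
banned = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
print(violates_ban_list([0.99, 0.05, 0.0], banned))  # near-match: True
print(violates_ban_list([0.0, 0.0, 1.0], banned))    # dissimilar: False
```

In practice the embeddings would come from a speaker-verification model and the threshold would be tuned to balance false rejections against missed matches; this snippet only shows the shape of the check.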