When the latest version of ChatGPT was released in May, it came with emotive voices that made the chatbot sound more human than ever.
Listeners called the voices “flirty,” “convincingly human,” and “sexy.” Social media users said they were “falling in love” with it.
But on Thursday, ChatGPT maker OpenAI released a report acknowledging that the chatbot’s human-like enhancements could lead to emotional reliance.
“Users can form social relationships with AI, reducing their need for human interaction – potentially benefiting lonely individuals but possibly affecting healthy relationships,” the report said.
Related: Only 3 of OpenAI's original 11 co-founders are still with the company as another leader leaves
ChatGPT can now answer questions voice-to-voice, with the ability to remember key details and use them to personalize the conversation, OpenAI noted. The effect? Talking to ChatGPT now feels remarkably close to talking to a human being, if that human never judged you, never interrupted you, and never held you accountable for what you said.
This style of interaction with an AI could change the way human beings interact with one another and “affect social norms,” according to the report.
Say hello to GPT-4o, our new flagship model which can reason across audio, vision, and text in real time: https://t.co/MYHZB79UqN
Text and image input rolling out today in API and ChatGPT with voice and video in the coming weeks. pic.twitter.com/uuthKZyzYx
— OpenAI (@OpenAI) May 13, 2024
OpenAI stated that early testers spoke to the new ChatGPT in ways suggesting they could form an emotional connection with it. Testers said things like, “This is our last day together,” which OpenAI said expressed “shared bonds.”
Experts, meanwhile, are asking whether it’s time to reassess just how realistic these voices should be.
“Is it time to stop and consider how this technology affects human interaction and relationships?” Alon Yamin, co-founder and CEO of AI plagiarism checker Copyleaks, told Entrepreneur.
“[AI] should never be a replacement for actual human interaction,” Yamin added.
To better understand this risk, OpenAI said more testing over longer periods and independent research could help.
Another risk highlighted in the report was AI hallucinations, or inaccuracies. A human-like voice can inspire more confidence in listeners, leading to less fact-checking and more misinformation.
Related: Google's new AI search results are already hallucinating
OpenAI is not the first company to comment on AI’s effect on social interactions. Last week, Meta CEO Mark Zuckerberg said Meta has seen many users turn to AI for emotional support. The company is also reportedly offering celebrities millions of dollars to clone their voices for its AI products.
The release of OpenAI's GPT-4o reignited conversations about AI safety, following the high-profile resignations of leading researchers, including former chief scientist Ilya Sutskever.
It also led Scarlett Johansson to call out the company for creating an AI voice that, she said, sounded “eerily similar” to her own.