OpenAI Resignations: How to prevent AI from going rogue?



OpenAI, the $80 billion AI company behind ChatGPT, just disbanded the team working on that question, after the two executives responsible for the effort left the company.

The AI safety controversy comes less than a week after OpenAI announced its new AI model, GPT-4o, which has more functionality and a voice strikingly similar to Scarlett Johansson's. The company stopped offering that particular voice on Monday.

Related: Scarlett Johansson 'Shocked' OpenAI Used a Voice 'So Similar' to Her Own After Already Telling the Company 'No'

Sahil Agarwal, a Yale Ph.D. in applied mathematics who co-founded and now leads Enkrypt AI, a startup focused on making AI less risky for businesses, told Entrepreneur that innovation and safety are not separate goals that need to be balanced, but rather two things that go hand in hand as a company grows.

“You're not stopping innovation from happening when you're trying to make these systems safer and more secure for society,” Agarwal said.

OpenAI execs raise safety concerns

Last week, OpenAI co-founder and chief scientist Ilya Sutskever and OpenAI researcher Jan Leike resigned from the AI giant. The two led the superalignment team, which was tasked with ensuring that AI remains under human control even as its abilities grow.

Related: OpenAI's Chief Scientist, Co-Founder Ilya Sutskever Resigns

While Sutskever said in his departure statement that he was “confident” OpenAI would build AI that is “safe and beneficial” under the leadership of CEO Sam Altman, Leike said he left because he felt OpenAI was not prioritizing AI safety.

“Over the past few months my team has been sailing against the wind,” Leike wrote. “Building smarter-than-human machines is an inherently risky endeavor.”

Leike also said that “safety culture and processes have taken a backseat to shiny products” at OpenAI over the past few years, and he called on the ChatGPT maker to put safety first.

OpenAI has disbanded the superalignment team led by Leike and Sutskever, the company confirmed to Wired on Friday.

Sam Altman, CEO of OpenAI. Photo: Dustin Chambers/Bloomberg via Getty Images

Altman and OpenAI president and co-founder Greg Brockman released a statement in response to Leike on Saturday, noting that OpenAI has raised awareness of AI's risks so that the world can prepare for them, and that the company has been deploying AI systems safely.

How to prevent AI from going rogue?

Agarwal says that even as OpenAI works to make ChatGPT more human-like, the risk isn't necessarily a superintelligent being.

“Even systems like ChatGPT, they're not implicitly reasoning in any way,” Agarwal told Entrepreneur. “So I don't see the danger from the perspective of a super-intelligent artificial being.”

The problem, he explained, is that as AI becomes more powerful and multifaceted, the potential for implicit bias and toxic content increases, making the technology more dangerous to deploy. By adding more ways to interact with ChatGPT, from images to video, OpenAI has to think about safety from more angles.

Related: OpenAI Releases a New AI-Powered Chatbot, GPT-4o

Agarwal's company released a safety dashboard earlier this month that ranks the safety and security of AI models from Google, Anthropic, Cohere, OpenAI and more.

They found that the new GPT-4o model potentially contains more bias and may produce more toxic content than its predecessor.

“What ChatGPT did is it made AI real for everyone,” Agarwal said.





