New OpenAI committee pledges robust safety oversight for future AI projects as GPT-5 launch nears

OpenAI has announced the launch of a Safety and Security Committee to offer the artificial intelligence (AI) company expert guidance on critical safety and security decisions for its upcoming products.

In a statement, OpenAI said the new committee will be the primary body responsible for making recommendations to the full board on safety and security decisions following the shuttering of the Superalignment team. The committee, comprising company directors Bret Taylor, Adam D’Angelo, Nicole Seligman, and CEO Sam Altman, is tasked with crafting a safety blueprint for the company.

The newly minted committee is expected to evaluate and report on existing company safety processes within 90 days, sharing its recommendations with the board. Rather than taking a closed-door approach, the committee will share its recommendations publicly for input from consumers and other interested parties.

OpenAI’s Chief Scientist Jakub Pachocki, Head of Security Matt Knight, Head of Safety Systems Lilian Weng, Head of Preparedness Aleksander Madry, and Head of Alignment Science John Schulman will also serve on the committee. The company plans to retain additional outside technical experts as it looks beyond in-house options.

While the safety committee laces up for the uphill task ahead, the technical arm of the company is inching toward the launch of a new AI model to replace GPT-4. The new iteration, widely touted as the fifth installment in the series, is expected to feature a voice mode for digital assistant functionality.

OpenAI has yet to release a timeline for the launch of a successor to GPT-4, but recent releases by Google (NASDAQ: GOOGL), Meta (NASDAQ: META), and Anthropic could force the company’s hand on a commercial release in the coming months.

“OpenAI has recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI,” read the statement. “While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment.”

Although OpenAI says it will be transparent about the development of the new frontier model, the company has yet to disclose any information on its timeline, training data, or integration with other emerging technologies.

Jumping ship in the eye of the storm

Talk of a new safety committee comes on the heels of former OpenAI researcher Jan Leike’s resignation over disagreements about the company’s safety policy. Leike argued that the company had relegated safety to the backseat in favor of “shiny products,” with Altman conceding that OpenAI has “a lot more to do.”

The company has been hit by a string of high-profile resignations in recent weeks, including the exits of several core team members such as ex-Chief Scientist Ilya Sutskever. Barely a year after floating the Superalignment team under Sutskever and Leike, OpenAI shuttered the project, a move that casts doubt on the company’s stance on safety.

For artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Artificial intelligence needs blockchain




Author: Rayne Chancer