
OpenAI, Maker of ChatGPT, Announces Safety and Security Committee: All the Details You Need to Know


OpenAI, the maker of ChatGPT, has begun training its next-generation AI model, GPT-5. As training gets underway, the company on Tuesday announced the formation of a new Safety and Security Committee comprised of key board members. OpenAI recently dissolved the Superalignment team, which had been established to address long-term AI risks. The new committee will take on a similar role, reviewing safety and security decisions for new projects and activities.

Introducing the Safety and Security Committee and its members

On Tuesday, OpenAI published a blog post announcing the formation of a new Safety and Security Committee led by directors Bret Taylor (Chairman), Adam D’Angelo, Nicole Seligman and Sam Altman (CEO). OpenAI said the committee is responsible for making recommendations to the company’s board of directors on “key safety and security decisions for all OpenAI projects.”

Additionally, the committee will include OpenAI technical and policy experts such as Aleksander Madry, John Schulman (Head of Security Systems), Matt Knight (Head of Security) and Jakub Pachocki (Chief Scientist). Members will closely review and test the company’s plans and will develop procedures and safeguards over the next 90 days.

Why is there a Safety and Security Committee?

OpenAI’s new safety committee will scrutinize the company’s new projects and activities and establish safety protocols for the ethical use of its tools and technology. The company also emphasized that it is aiming for the next level of AGI capabilities and wants to focus on both safety and technological advancement. OpenAI said, “While we are proud to build and release models that lead the industry in both capability and safety, we welcome a robust debate at this critical time.”

Within 90 days, OpenAI’s Safety and Security Committee will present recommendations and processes for managing safety and security across its projects. This is an important step for OpenAI, as a Wired report highlights that safety and security measures have taken a backseat since the Superalignment team dissolved. Meanwhile, AI researchers have raised major concerns about upcoming AI capabilities, which will require great care to protect the technology and ensure its ethical use.
