OpenAI Forms Safety Committee Amid Internal Turmoil

OpenAI has formed a new Safety and Security Committee to guide critical decisions on the safety and security of its projects and operations. The move comes as the company begins training its next flagship artificial intelligence model, a step toward its stated goal of artificial general intelligence (AGI). However, the committee’s insider-heavy composition and a series of recent high-profile departures have sparked debate about OpenAI’s commitment to AI safety and governance.

The Safety and Security Committee, chaired by Bret Taylor and comprising Adam D’Angelo, Nicole Seligman, Sam Altman, and several of OpenAI’s own technical leaders, is tasked with evaluating and strengthening the company’s safety protocols over the next 90 days. At the end of that period, the committee will present its findings and recommendations to the full board, after which OpenAI will publicly share an update on the measures it adopts.

The committee includes key figures such as Aleksander Madry (Head of Preparedness), Lilian Weng (Head of Safety Systems), John Schulman (Head of Alignment Science), Matt Knight (Head of Security), and Jakub Pachocki (Chief Scientist). OpenAI has also enlisted external experts like former National Security Agency cybersecurity director Rob Joyce and former Department of Justice official John Carlin to support the committee’s efforts.

The committee’s formation follows the departure of several prominent figures from OpenAI’s safety ranks, including co-founder Ilya Sutskever and Jan Leike, who led the now-dissolved Superalignment team. These exits have raised concerns about how the organization prioritizes safety. Critics, including former employees Daniel Kokotajlo and Gretchen Krueger, have questioned OpenAI’s commitment to responsible AI development, citing what they describe as a shift toward profit-driven goals at the expense of safety.

Sutskever’s and Leike’s departures are particularly significant because both were central to OpenAI’s efforts to keep AI systems aligned with human values. Their resignations, together with the dissolution of the Superalignment team, have intensified scrutiny of OpenAI’s safety practices and internal governance.

Despite these internal challenges, OpenAI remains focused on advancing its AI technology. The new committee’s primary task is to scrutinize and improve existing safety measures while preparing for the risks that more advanced AI models may pose. That work includes addressing ethical concerns around deployment, such as bias, job displacement, and the broader societal impacts of AI.

OpenAI’s situation mirrors a broader pattern in the AI industry, where rapid technological advances are often accompanied by ethical and safety dilemmas. The formation of the Safety and Security Committee reflects a growing recognition across the industry that robust safety measures and ethical guidelines are essential to the sustainable development of AI.