OpenAI Forms Safety Oversight Committee Led by Sam Altman

As a researcher who follows developments in artificial intelligence closely, I'm watching OpenAI's recent moves with great interest. The board's formation of a Safety and Security Committee is a positive step toward addressing concerns about the safety and security of the company's AI models, particularly after the departures of key figures such as chief scientist Ilya Sutskever and superalignment co-lead Jan Leike.

A few weeks after the departure of the executives who led its AI safety and security work, OpenAI has reshaped its governance and created a new board committee to evaluate the safety and security of its advanced AI models.

The OpenAI Board has formed a Safety and Security Committee, tasked with making recommendations on critical safety and security matters for all OpenAI projects.

— OpenAI (@OpenAI) May 28, 2024

Two former board members, Helen Toner and Tasha McCauley, published an op-ed critical of OpenAI's leadership in The Economist last weekend.

The newly formed committee will spend three months evaluating OpenAI's safety and security protocols before reporting its findings. According to a blog post OpenAI published on Tuesday, the company intends to publicly share its response to the committee's recommendations in a way that is consistent with both safety and transparency.

The rapid pace of AI progress at the privately held company has raised concerns about how it manages the technology's potential risks.

Last autumn, tensions between CEO Sam Altman and co-founder and chief scientist Ilya Sutskever over the pace of AI product development relative to safety work culminated in a brief boardroom upheaval, during which Altman was temporarily ousted from his position.

Industry concerns were renewed by the recent departures of two pivotal figures at OpenAI, Ilya Sutskever and Jan Leike, who led the superalignment team focused on addressing potential threats posed by superintelligent AI. In announcing his resignation, Leike cited resource constraints as a reason for his decision, a sentiment shared by other departing colleagues.

Following Sutskever's departure, OpenAI folded his team into its broader research efforts rather than maintaining it as a distinct unit. Co-founder John Schulman took on the expanded role of "Head of Alignment Science."

The startup has at times struggled to manage employee departures. Last week, OpenAI removed a policy provision under which ex-employees could lose their vested stock options if they publicly criticized the company.


2024-05-28 21:40