What the Heck Is Going On At OpenAI?

As a seasoned gamer who has witnessed the rise and fall of countless virtual kingdoms, I can’t help but feel a sense of déjà vu reading about OpenAI’s transformation from a noble nonprofit to a for-profit entity. Tales of ambitious leaders pushing boundaries, and the consequences that inevitably follow, are familiar in both Silicon Valley and the gaming world.


In Silicon Valley, nothing is less remarkable than continued success, but a steady stream of departures definitely turns heads.

The departure of Mira Murati, OpenAI’s Chief Technology Officer, on September 25 has sparked speculation in Silicon Valley that things may not be running smoothly at Altmanland, amid reports that she left after growing frustrated with her attempts to reform or moderate the company from within. Murati was not the only high-profile figure to exit the successful tech firm: researcher Barret Zoph and chief research officer Bob McGrew also departed. Where they are headed next remains unclear.

The drama is both deeply personal and broadly philosophical, and it hints at how the era of artificial intelligence may unfold.

The drama began in November, when concerns about Sam Altman’s reportedly unconventional leadership style and safety worries surrounding a secretive project known as Q* (later renamed Strawberry and publicly launched as o1 last month) led several board members to attempt to oust the co-founder. Their effort was short-lived: Altman reclaimed control of his popular AI company within a few days. A significant factor in the reversal was the reluctance of Satya Nadella’s Microsoft, which holds a 49% stake in OpenAI, to see Altman go.

The board was shuffled to be more Altman-friendly, and several directors who opposed him were forced out. A top executive wary of his motives, OpenAI co-founder and chief scientist Ilya Sutskever, would also eventually leave. Sutskever was concerned about Altman’s “accelerationism,” the push to forge ahead on AI development at any cost. He formally exited in May, though a person who knows him tells The Hollywood Reporter he had effectively stopped being involved with the firm after the failed November coup. (Sutskever more than landed on his feet: he just raised $1 billion for a new AI safety company.)

Sutskever and Jan Leike led a “superalignment” team at OpenAI tasked with anticipating potential hazards and keeping the technology safe. Leike departed at the same time as Sutskever, and the team was disbanded. Like several other former team members, Leike has since joined Anthropic, an OpenAI rival widely perceived as more vigilant about safety.

Murati, McGrew and Zoph are just the latest to depart amid concerns about safety in AI development. Safety in this context covers both immediate problems, such as unintended biases, and long-term dangers, up to and including Skynet-style scenarios, all of which demand thorough examination before products are deployed. Those concerns carry extra weight given that some in the field believe artificial general intelligence (AGI), the capacity of machines to solve problems at a human level, could be achieved within one to two years.

Unlike Sutskever, however, Murati chose to remain at the company after the November drama, in part to try to counterbalance the rapid-advancement push of Altman and president Greg Brockman from within, according to a person close to OpenAI who asked not to be named because they were not authorized to discuss the matter publicly.

It’s unclear what exactly pushed Murati to make her decision, but the launch of o1 last month may have played a role. The product is designed not just to synthesize information the way many existing large language models do (say, rewriting the Gettysburg Address as a Taylor Swift song), but to work through math and coding problems in a more human-like way. AI safety advocates have emphasized the need for thorough testing and safeguards before such products are made available to the general public.

The glitzy product launch coincides with, and to some extent follows from, OpenAI’s transformation into a profit-driven corporation. That change, which removes nonprofit oversight and gives Altman equity in the company like any other founder, is precisely what raised concerns among departing executives such as Murati.

Murati said in an X post that “this moment feels right” to step away.

The concerns have grown serious enough that some former employees are raising alarms in high-profile public forums. Last month, William Saunders, a former member of OpenAI’s technical staff, testified before the Senate Judiciary Committee that he left the company because he foresaw catastrophic consequences if OpenAI continues on its current trajectory.

He told lawmakers that AGI could bring profound shifts in society, including dramatic disruption to the economy and employment, along with the risk of catastrophic harm from AGI systems autonomously carrying out cyberattacks or assisting in the creation of biological weapons. No one, he added, has figured out how to guarantee that these advanced AI systems will be safe and controllable; OpenAI may say it is making progress, but those who have resigned doubt it will be ready in time.

OpenAI was founded as a nonprofit in 2015, pledging to collaborate freely with other institutions and to work with companies as it researched and deployed new technologies. Until recently, the company was governed by the board of that nonprofit. Shedding nonprofit oversight gives the company more freedom, and more incentive, to ship new products quickly, while also making it more attractive to investors.

And investment is crucial: a New York Times report found that OpenAI could lose $5 billion this year. (The cost of both chips and the power needed to run them is extremely high.) On Wednesday the company announced a fresh round of capital from parties including Microsoft and chipmaker Nvidia totaling some $6.6 billion.

OpenAI also needs to strike expensive agreements with publishers, since ongoing litigation, including a suit from the Times, limits its ability to train its models on those publishers’ content without restriction.

Veteran industry observers are watching OpenAI’s moves warily. The conversion to a profit-driven entity confirmed what many had long suspected: much of the company’s talk about safety may have been mere rhetoric. Gary Marcus, a veteran AI expert and author of the recently published book Taming Silicon Valley: How We Can Ensure That AI Works for Us, puts it bluntly to THR: OpenAI’s primary focus is on generating revenue, with no regulatory oversight to guarantee that its products are safe.

OpenAI also has a history of releasing products before the industry thinks they are ready. The launch of ChatGPT in November 2022 took the tech world by surprise; rivals at Google working on a similar product had concluded that the latest large language models (LLMs) weren’t yet ready for the main stage.

Whether OpenAI can keep innovating at that pace after the recent talent drain remains an open question.

Perhaps to divert attention from the ongoing turmoil and convince skeptics, Altman recently published a personal blog post suggesting that “superintelligence,” the idea that machines could surpass human abilities to an extreme degree, might arrive as early as the 2030s. Triumphs like solving climate change, establishing a space colony and unveiling all of physics, he wrote, will eventually become routine. It was possibly this very kind of talk that helped drive Sutskever and Murati to resign.

