What is OpenAI’s Q-Star?
Hey Everyone,
It’s the evening of November 22nd, 2023, American Thanksgiving eve. This article has been updated since first going live on A.I. Supremacy as a web-only article. I think it’s important to point out a few things about OpenAI’s most tumultuous week of 2023.
It’s gotten confusing trying to track the difference between the VC-backed AI accelerationists and the safetyists (including the two female OpenAI board members who were removed).
And everyone has been asking: why was Sam Altman fired in the first place? What could have led to such debate, conflict, and an outcome like this?
According to two people familiar with the matter who spoke to Reuters, several staff researchers sent the board of directors a letter warning of a powerful artificial intelligence discovery that they said could threaten humanity.
The battle between open-source advocates and closed-source commercialists is already bad enough, but what if GPT-5 had elements of Q* that are actually dangerous?
According to one of Reuters’ sources, long-time executive Mira Murati told employees on Wednesday that a letter about the AI breakthrough called Q* (pronounced Q-Star) precipitated the board’s actions.
The maker of ChatGPT had made progress on Q*, which some internally believe could be a breakthrough in the startup’s search for superintelligence, also known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as AI systems that are smarter than humans.
Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because they were not authorized to speak on behalf of the company. Though the model only performs math at the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.
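To be clear, none of this is confirmed, and what follows is speculation based only on the name. In reinforcement learning, Q* is the standard notation for the optimal action-value function, the quantity that Q-learning estimates, while A* is a classic heuristic search algorithm, which is why many observers guessed the name hints at some combination of search and reinforcement learning. As a purely illustrative refresher on the textbook meaning of Q*, and not a claim about OpenAI’s system, here is a minimal tabular Q-learning sketch on a toy problem:

```python
import random

# Toy chain MDP: states 0..4, actions 0 (left) and 1 (right).
# Reaching state 4 yields reward 1.0 and ends the episode.
N_STATES, ACTIONS = 5, [0, 1]
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    done = (s2 == N_STATES - 1)
    return s2, (1.0 if done else 0.0), done

for _ in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the current estimate, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < EPS else max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r, done = step(s, a)
        # Bellman optimality backup: Q converges toward Q* over many episodes.
        best_next = max(Q[(s2, x)] for x in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# Greedy policy implied by the learned Q: it should prefer moving right, toward the goal.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```

If OpenAI’s Q* is anything like the rumors suggest, it is operating far beyond toy examples like this one, but the notation itself is ordinary reinforcement-learning vocabulary.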
Of course, we don’t know exactly what this Q* refers to, and it may only be a rumor. But the hype over AGI seems to be creating trepidation among some researchers. The scarcity of women on Silicon Valley boards is nothing new. Still, if you are genuinely aiming for AGI, it may be wise, for reasons of both inclusion and risk aversion, to have women with direct AI expertise on such a board, so it’s perplexing that the two female OpenAI board members with AI research backgrounds were removed.
Unfortunately, it seems OpenAI’s investors would have to green-light any future female board members, considering what occurred with Adam D’Angelo and the two who were removed. It’s not clear how much appreciation of existential risk (x-risk) D’Angelo has, but it remains fairly certain that Poe, his company’s chatbot, is a competitor to ChatGPT.
Sam Altman’s Failed Relationship with a Board Member
I think we can characterize Sam Altman’s relationship with Helen Toner as a significant failure on his part. He appears to have grown dismissive of the board, and the firing was the culmination of months of tension in which Altman seemed more interested in OpenAI’s image than in the real trust and safety of the systems the company is building.
After all, most of these people were picked by Altman himself to serve on the board. Toner has been vilified on X and across the internet since Altman’s firing, but that’s only one side of the story.
“She comes across as extremely well informed, extremely sharp, very trustworthy, reliable, and with a good sense of the responsibilities that those of us within AI must necessarily uphold,” said Michael Osborne, an Oxford University machine-learning professor who knew Toner and had collaborated on a paper with her.
According to multiple outlets, a paper Toner co-wrote on AI safety at Georgetown’s Center for Security and Emerging Technology (CSET) might have been the catalyst for the past week of turmoil at OpenAI. Altman had said at the time of her appointment that Toner “brings an understanding of the global AI landscape with an emphasis on safety, which is critical for our efforts and mission.”
She joined the board after a career studying AI and the relationship between the United States and China. That paper led to a bizarre dispute.