Hey Everyone,
Is AI becoming too centralized? Even some people working in the industry are sounding warnings.
“People should be concerned that only a small number of entities can build biases into AI systems. If we have open-source base models that anyone or any group can fine tune, then we will have a wide diversity of biases for people to choose from.” - Yann LeCun - VP & Chief AI Scientist at Meta
Some of us in the community, and among those who curate AI news, are also getting worried about the centralization occurring in Big Tech, and in the Generative AI movement in particular.
I’m speaking of OpenAI’s reorganization here especially. Friday's announcement that OpenAI CEO Sam Altman will return to the nonprofit's board locks Silicon Valley's billionaire class into control of the destiny of society-transforming artificial intelligence. How in the world is that for the betterment of humanity?
In an age where Generative AI could have been about the democratization of AI and the proliferation of open-source models, it is instead about the most powerful companies becoming even more powerful, richer, and more influential over the future of technology worldwide. I'm speaking mostly of U.S.-based tech companies and their powerful advertising, cloud, and software services monopolies.
It also depicts a United States that is becoming more monolithic and unipolar while shaping a technology whose historical consequences it may not fully understand. But will consumers and global citizens begin to rebel against these monopoly corporations and their subsidiaries?
Who gets to decide the future of AI?
Where did it all go wrong?
Altman will be rejoining the company’s board of directors. The members of the ‘transitionary board’ — the board formed after Altman’s firing in November — won’t be stepping down with the appointment of Desmond-Hellmann, Seligman and Simo. In fact, the board’s response to the law firm’s review of Sam Altman’s firing is surprisingly short (vetted by OpenAI’s sprawling PR/comms team, I am sure):
“We have unanimously concluded that Sam and [OpenAI president Greg Brockman] are the right leaders for OpenAI,” Taylor said in a statement. “We recognize the magnitude of our role in stewarding transformative technologies for the global good.”
Not everyone at OpenAI would likely agree. But most do, because Sam Altman is making them very wealthy.
The independent review, conducted by the law firm WilmerHale, investigated the circumstances that led to Altman's abrupt removal from the board and his termination as CEO on November 17, 2023. After reportedly interviewing dozens of people and reviewing over 30,000 documents, WilmerHale found that while the prior board acted within its purview, Altman's termination was unwarranted. "WilmerHale found that the prior Board acted within its broad discretion to terminate Mr. Altman," OpenAI wrote, "but also found that his conduct did not mandate removal."
So if both Ilya and Mira had (have?) concerns about Sam, who might lead OpenAI as a better-aligned CEO? Apparently the answer is, of course, Sam Altman.
After a lawsuit by Elon Musk, OpenAI then decided to publish private company emails exchanged with him. It’s all a bit disturbing what OpenAI chooses to be “open” about.
The Not-for-Profit Fraud
OpenAI recently publicly aired several emails with Musk in its post, including one in which the company's chief scientist Ilya Sutskever states that “it's totally OK to not share the science” behind their AI, since open-sourcing the technology could allow it to fall into unscrupulous hands. Musk replies in an email, “Yup.”
It sounds like Ilya, Sam and Elon were more aligned in some ways. Elon Musk in those emails sounds paranoid about Google DeepMind. Now he is paranoid about OpenAI, the company he himself helped create? A world where OpenAI is clearly not actually a non-profit firm. What could possibly go wrong?
But what does GPT-5 become and who stands the most to gain?
Is AGI just a way to manipulate the public? What is really going on?
OpenAI’s role needs to be more fully explored in this centralized commercialization of LLMs currently unfolding.
Here’s a video that got me thinking.
Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI | Lex Fridman Podcast
The quote above was taken from Yann’s recent conversation with Lex Fridman. Worth listening to.
“Future systems will *have* to use a different architecture capable of understanding the world, capable of reasoning, and capable of planning so as to satisfy a set of objectives and guardrails.” - Yann LeCun - source.
Follow Yann on X.
Follow Yann on LinkedIn.
For access to deep dives, and to support my coverage across ten publications related to emerging tech, consider subscribing.