OpenAI's CTO Says 'Some Creative Jobs Shouldn't Have Existed' in the First Place
Mira Murati's claims, the 'Where's Ilya' meme solved, and OpenAI's unethical employee exit practices.
Source: YouTube screenshot. Mira Murati, making some pretty bewildering statements.
Hey Everyone,
I wanted to circle back to Ilya's new company and OpenAI's conduct in the news.
Mira Murati has been quoted as saying something fairly insulting to anyone in a creative profession: writers, photographers, designers, artists, YouTube creators, and videographers.
During a conversation about AI with Dartmouth University Trustee Jeffrey Blackburn, hosted at the university's engineering department, OpenAI CTO Mira Murati said the following:
"Maybe Some Creative Jobs Shouldn't Exist"
"Some creative jobs maybe will go away, but maybe they shouldn't have been there in the first place — if the content that comes out of it is not high quality. I really believe that using it as a tool for education, (and) creativity, will expand our intelligence." - Mira Murati
It's just so disturbing. Nearly as bad as OpenAI comparing its models to people at various developmental stages or ages.
By the way, she is entirely clueless about the future of work or the impact of AI. So why is this a problem? GPT, that is, the Generative Pre-trained Transformer, isn't remotely turning out to be a general purpose technology (a legit GPT).
OpenAI isn't actually replacing jobs. There's no sign that this technology will disrupt jobs; at best it saves time on very specific tasks.
Creative types aren't empowered by the ability to create poor-quality images, robotic-sounding text, or half-baked videos with bad physics. That's not what creativity is!
Thanks to Generative AI, content creators and the internet are instead moving in the opposite direction: suddenly and increasingly filled with spam and synthetic nonsense, as most of us and the general public can attest.
Content from ChatGPT and other chatbots is now more prone to be used for phishing, for malicious content generation related to cybersecurity, for foreign interference and misinformation, and even for dangerous ideological behavior-modification campaigns, with these hallucinating frontier models and tools at play.
OpenAI Uses Anthropomorphism Bias to Hype Up the Capabilities of Its Frontier Models
OpenAI comparing its models to various human developmental stages?
In that same Dartmouth conversation with Trustee Jeffrey Blackburn, Mira Murati, OpenAI's CTO, makes some rather revealing statements about how people inside OpenAI actually think, and how little they know.
Watch Mira Murati's Latest Interview (around 2 days ago)
Ilya's New AI Safety Company
The mystery of the "Where's Ilya" meme has finally been solved, in the summer of 2024.
Where's Ilya?
Ilya Sutskever is an acclaimed Israeli-Canadian machine learning researcher and scientist. After his Superalignment team was disbanded at OpenAI, he has gone and started his own startup.
OpenAI co-founder Ilya Sutskever, who left the company in May, announced his new company on Wednesday June 19th, 2024.
It's called Safe Superintelligence Inc., or SSI. His co-founders are former Y Combinator partner Daniel Gross and ex-OpenAI engineer Daniel Levy.
While OpenAI hired Paul M. Nakasone, a retired U.S. Army general and former director of the National Security Agency, to its board to bolster its cybersecurity, Ilya is building what amounts to a competitor to his former boss, Sam Altman.
At OpenAI, Sutskever was integral to the company's efforts to improve AI safety ahead of the rise of "superintelligent" AI systems, but nothing sustainable seems to have come of his time there.
The website of SSI just has a weird announcement page. You can follow the antics of SSI on their X page here.
Sutskever was OpenAI's chief scientist and co-led the company's Superalignment team with Jan Leike, who also left in May to join rival AI firm Anthropic, which actually has a real track record and research on AI alignment.
We go into more detail below, where we:
- Cover SSI in more depth
- Look at OpenAI's product strategy: acquisition
- Examine OpenAI's pressure tactics on departing employees
- Compare its trajectory with that of Anthropic
- Analyze OpenAI and Apple's partnership on Apple Intelligence
- Question Sam Altman's ethical code of conduct, and other tidbits.