AI Supremacy

OpenAI's CTO Says 'Some Creative Jobs Shouldn't have Existed' in the first place

Mira Murati's claims, Where's Ilya meme solved, and OpenAI's unethical employee exit practices. Open under the 🔎

Michael Spencer
Jun 22, 2024
∙ Paid
Source: YouTube screenshot. Mira Murati, making some pretty bewildering statements.

Hey Everyone,

I wanted to circle back to Ilya’s new company, OpenAI’s conduct, and OpenAI in the news.

Mira Murati has been quoted saying something fairly insulting to anyone in a creative profession: writers, photographers, designers, artists, YouTube creators, and videographers.

During a conversation about AI with Dartmouth Trustee Jeffrey Blackburn, hosted at the university’s engineering department, OpenAI CTO Mira Murati said the following:

“Maybe Some Creative Jobs Shouldn’t Exist”

“Some creative jobs maybe will go away, but maybe they shouldn’t have been there in the first place — if the content that comes out of it is not high quality. I really believe that using it as a tool for education, (and) creativity, will expand our intelligence.” - Mira Murati 


It’s just so disturbing. Nearly as bad as OpenAI comparing its models to people at various developmental stages or ages.

Watch the entire video on YouTube


By the way, she seems entirely clueless about the future of work and the impact of AI. So why is this a problem? GPT, that is, the Generative Pre-trained Transformer, isn’t remotely turning out to be a general-purpose technology (the other kind of GPT).

  • OpenAI isn’t actually replacing jobs. There’s no sign this technology will disrupt jobs; at best it saves time on very specific tasks.

  • Creative types aren’t empowered by the ability to create poor-quality images, robotic-sounding text, or half-baked videos with bad physics. That’s not what creativity is!

  • Thanks to generative AI, content creators and the internet at large are suddenly and increasingly flooded with spam and synthetic nonsense, as most of us and the general public can attest.

  • Content generated with ChatGPT and other chatbots is now more prone to be used for phishing, malicious content generation, foreign interference, misinformation, and even dangerous ideological behavior-modification campaigns, with these hallucinating frontier models and tools at play.

OpenAI uses anthropomorphism bias to try to hype up the capabilities of its frontier models

  • OpenAI comparing its models to various human developmental stages?

In that same conversation at Dartmouth, Murati makes some rather revealing statements about how people inside OpenAI actually think, and how little they know.

Watch Mira Murati’s Latest Interview (around 2 days ago)

Ilya’s new AI Safety Company

Ilya Sutskever | Stanford HAI

The mystery of the meme “where’s Ilya” has finally been solved in the summer of 2024.

Where’s Ilya?

Ilya Sutskever is an acclaimed Israeli-Canadian ML researcher and scientist. After his Superalignment team was disbanded at OpenAI, he has gone and started his own company.

OpenAI co-founder Ilya Sutskever, who left the company in May, announced his new company on Wednesday June 19th, 2024.

It’s called Safe Superintelligence Inc., or SSI. His cofounders are former Y Combinator partner Daniel Gross and ex-OpenAI engineer Daniel Levy.

While OpenAI hired Paul M. Nakasone, a retired U.S. Army general and former director of the National Security Agency, to its board to bolster its cybersecurity, Ilya is building what amounts to a competitor to his former boss, Sam Altman.

At OpenAI, Sutskever was integral to the company’s efforts to improve AI safety ahead of the rise of “superintelligent” AI systems, but nothing sustainable seems to have come of his time there.

SSI’s website is just a bare announcement page. You can follow the antics of SSI on their X page here.

Sutskever was OpenAI’s chief scientist and co-led the company’s Superalignment team with Jan Leike, who also left in May to join rival AI firm Anthropic, which actually has a real track record and research on AI alignment.

  • We go into more detail about SSI

  • We look at OpenAI’s product strategy: acquisition

  • We examine OpenAI’s pressure tactics toward departing employees

  • We compare its trajectory with that of Anthropic

  • We analyze OpenAI and Apple’s partnership on Apple Intelligence

  • We question Sam Altman’s ethical code of conduct, and cover other tidbits.
