AI Supremacy

OpenAI's CTO Says 'Some Creative Jobs Shouldn't have Existed' in the first place

Mira Murati's claims, Where's Ilya meme solved, and OpenAI's unethical employee exit practices. Open under the 🔎

Michael Spencer
Jun 22, 2024
Source: YouTube screenshot. Mira Murati, making some pretty bewildering statements.

Hey Everyone,

I wanted to circle back to Ilya's new company, OpenAI's conduct, and OpenAI in the news.

Mira Murati has been quoted as saying something fairly insulting to anyone in a creative profession: writers, photographers, designers, artists, YouTube creators, and videographers.

During a conversation about AI with Dartmouth Trustee Jeffrey Blackburn, hosted at the college's engineering school, OpenAI CTO Mira Murati said the following:

"Maybe Some Creative Jobs Shouldn't Exist"

"Some creative jobs maybe will go away, but maybe they shouldn't have been there in the first place — if the content that comes out of it is not high quality. I really believe that using it as a tool for education, (and) creativity, will expand our intelligence." - Mira Murati


It's just so disturbing. Nearly as bad as OpenAI comparing its models to people at various developmental stages or ages.

Watch the entire video on YouTube


By the way, she is entirely clueless about the future of work or the impact of AI. So why is this a problem? GPT, that is, Generative Pre-trained Transformer, isn't remotely turning out to be a general-purpose technology (a legit GPT).

  • OpenAI isn't actually replacing jobs; there's no sign that this technology will disrupt jobs, only that it may help save time on very specific tasks.

  • Creative types aren't empowered by the ability to create poor-quality images, robotic-sounding text, or half-baked videos with bad physics. That's not what creativity is!

  • Thanks to generative AI, content creators and the internet are headed in the opposite direction: suddenly and increasingly filled with spam and synthetic nonsense, as most of us and the general public can attest.

  • Content made with ChatGPT and other chatbots is now more prone to be used for phishing, malicious content generation, foreign interference, misinformation, and even dangerous ideological behavior-modification campaigns, with these hallucinating frontier models and tools in play.

OpenAI uses anthropomorphism bias to try to hype up the capabilities of its frontier models

  • OpenAI comparing its models to various human developmental stages?

In the same conversation at Dartmouth's engineering school, Mira Murati makes some rather revealing statements about how people inside OpenAI actually think, and how little they know.

Watch Mira Murati's Latest Interview (around 2 days ago)

Ilya's new AI Safety Company

Ilya Sutskever | Stanford HAI

The mystery of the "Where's Ilya?" meme has finally been solved, in the summer of 2024.

Where's Ilya?

Ilya Sutskever is an acclaimed Israeli-Canadian ML researcher and scientist; after his superalignment team was disbanded at OpenAI, he has gone and started his own company.

OpenAI co-founder Ilya Sutskever, who left the company in May, announced his new company on Wednesday June 19th, 2024.

It's called Safe Superintelligence Inc., or SSI. His co-founders are former Y Combinator partner Daniel Gross and ex-OpenAI engineer Daniel Levy.

While OpenAI hired Paul M. Nakasone, a retired U.S. Army general and former director of the National Security Agency to its board to bolster its cybersecurity, Ilya is building what amounts to a competitor of his former boss, Sam Altman.

At OpenAI, Sutskever was integral to the company's efforts to improve AI safety ahead of the rise of "superintelligent" AI systems, but nothing sustainable seems to have come of his time there.

The website of SSI just has a weird announcement page. You can follow the antics of SSI on their X page here.

Sutskever was OpenAI's chief scientist and co-led the company's Superalignment team with Jan Leike, who also left in May to join rival AI firm Anthropic, which actually has a real track record of research on AI alignment.

  • We go into more detail about SSI

  • We cover OpenAI's product strategy: acquisition

  • We look at OpenAI's pressure tactics toward departing employees

  • We compare its trajectory with that of Anthropic

  • We analyze OpenAI's and Apple's partnership in Apple Intelligence

  • We question Sam Altman's ethical code of conduct, and other tidbits.

Keep reading with a 7-day free trial

© 2025 Michael Spencer