The top six rivals competing with OpenAI
A guest post from a Venture Capital perspective.
Hey Everyone, welcome back.
There is a new writer on Substack with a considerable background in Venture Capital and a unique perspective. He is the writer of the newsletter Artificial Ignorance, one of my new go-to newsletters on generative A.I. Jump right into his work.
Share to Win Campaign
Are you from a country where a paid subscription in American dollars is too expensive?
Restack three of my articles on Notes, then send me an email containing the three links to those shares, and I'll give you one (or more) months of a free paid subscription.
Scroll to the bottom of any of my articles and tap Restack to begin. Send the links, along with your name, country, and favorite topic in my A.I. coverage, to michaelkspencer2023@gmail.com to redeem your voucher. Each Restack needs to be of a different article and shared on a different day to be eligible.
If you support what I'm doing around emerging tech and want the full coverage, consider a paid subscription.
Let's get into today's topic from the guest contributor. He has a background at Stanford. If this is too long for Gmail, click here or on the title to read on the web for the best reading experience.
By
April 2023. In a recent interview (YouTube link), Sam Altman, the CEO of OpenAI, described the company's early struggles.
"We have been a misunderstood and badly mocked org for a long time. When we started … people thought we were batshit insane," he said. "We don't get mocked as much now."
1. OpenAI
OpenAI and its flagship product, ChatGPT, have astounded the world. After becoming the fastest-growing consumer product in history [1], ChatGPT kicked off an AI arms race and is now in the crosshairs of governments worldwide.
But OpenAI is a lot more than ChatGPT. Few people are familiar with the company's history or its diverse offering of machine learning models. Even fewer people are familiar with the company's competitors. So let's take a look at OpenAI's history, its current lead in AI, and the companies working to catch up.
A brief history of OpenAI
OpenAI began in 2015 as a non-profit with several founders, including Sam Altman and Elon Musk. Even before starting OpenAI, both Altman and Musk had publicly expressed concerns about AI. Musk called it humanity's "biggest existential threat" [2], though he would later leave OpenAI due to "potential future conflicts" with Tesla.
In the year that followed, OpenAI released its first two products without much fanfare. But in 2018, the company released a paper [3] introducing a new kind of model: a Generative Pre-trained Transformer, or GPT. While not obvious at the time, the GPT model radically changed OpenAI's trajectory.
After the GPT paper, OpenAI built several models based on the architecture. With the second release, GPT-2, OpenAI decided to withhold the model code and weights, in contrast to its previous releases. The move surprised the AI community, but it foreshadowed the company's future restrictions. Today, OpenAI's chief scientist has completely soured on their original research-sharing approach [4]. "We were wrong. Flat out, we were wrong," he said.
Shortly after GPT-2, the organization announced it was creating a "capped-profit" company [5]. The new entity would allow investors and employees to earn a "capped return" of up to 100x. The creation of the "capped-profit" company soon led to a partnership with Microsoft. In 2019, the tech giant announced a $1 billion investment, as well as several joint initiatives. And by 2023, the companies announced a new multiyear, multibillion-dollar investment [6].
Fast forward to today: the company has almost two dozen AI models/products across text, images, and audio. It is firmly in the lead in generative AI and is on track to impact hundreds of millions of users. Its partnership with Microsoft has led to AI upgrades for Bing, GitHub, and Office 365. ChatGPT is the first serious threat to Google's search dominance in years. GPT-4 is so advanced that it triggered calls for a pause on further AI development. And multiple governments are now scrutinizing the potential impacts of OpenAI and ChatGPT.
A look at OpenAI's products
Before looking at OpenAI's competitors, it's useful to understand OpenAI's products. At this point, OpenAI has released a wide array of models. Taken together, they are one of the most advanced collections of AI products from a single company. The common thread is language - using text to generate conversation, code, images, and insights.
ChatGPT, GPT-4 & Plugins
ChatGPT is OpenAI's most successful product to date by a long shot. The product builds on the company's previous large language models, GPT-2 and GPT-3. But ChatGPT is designed for conversation, not general text completion. To achieve its high-quality responses, OpenAI invests significant resources into RLHF (reinforcement learning from human feedback). RLHF involves humans giving the AI feedback on its responses to improve it little by little.
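To make that loop a bit more concrete, here is a rough conceptual sketch of how human preferences might feed a reward model that then steers the base model. Every name here (model, reward_model, labeler and their methods) is a hypothetical stand-in for illustration, not OpenAI's actual pipeline.

```python
# Conceptual sketch of an RLHF loop. Every object and method here is a
# hypothetical stand-in, not OpenAI's real implementation.

def collect_preferences(model, prompts, labeler):
    """Humans compare pairs of model responses and pick the better one."""
    preferences = []
    for prompt in prompts:
        a, b = model.generate(prompt), model.generate(prompt)
        winner, loser = labeler.rank(prompt, a, b)  # human judgment
        preferences.append((prompt, winner, loser))
    return preferences

def rlhf_step(model, reward_model, prompts, labeler):
    # 1. Gather human preference data on the model's own outputs.
    prefs = collect_preferences(model, prompts, labeler)
    # 2. Fit a reward model to score human-preferred responses higher.
    reward_model.fit(prefs)
    # 3. Fine-tune the base model (e.g. with PPO) to maximize predicted reward.
    model.reinforce(reward_fn=reward_model.score)
    return model
```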
The first version of ChatGPT used an internal model named GPT-3.5, more advanced than GPT-3. Shortly after releasing ChatGPT, OpenAI released GPT-4, its most sophisticated LLM to date. While still in beta, GPT-4 is extremely powerful as a language model. It has a much more nuanced understanding of language, scoring around the 90th percentile on a simulated bar exam [7]. GPT-4 is also multi-modal, meaning it can work with image inputs. In an impressive demo, GPT-4 explains a funny image, then builds a working website from a napkin sketch (YouTube). It's still early days, and we're still scratching the surface of what's possible with GPT-4.
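For a sense of how developers interact with these models from code, here is a minimal sketch of a chat request using the openai Python package as it existed at the time (pre-1.0 interface). The API key and prompt are placeholders, and the snippet is illustrative rather than a definitive recipe.

```python
# Minimal chat completion request (openai Python SDK, pre-1.0 interface).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-4",  # or "gpt-3.5-turbo" for the original ChatGPT model
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain RLHF in one sentence."},
    ],
)
print(response["choices"][0]["message"]["content"])
```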
The third ChatGPT-related product is the plugin platform, which is currently in alpha. Plugins are akin to App Store apps: they let the chatbot run code, browse the web, and use third-party services. Early use cases include booking flights via Kayak and ordering groceries with Instacart. It's a bit too early to say what will happen with plugins. But if successful, they'll turn ChatGPT into an "everything app," giving it far more abilities than it has today.
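For a rough sense of what a plugin looked like under the hood, the sketch below shows the approximate shape of a plugin manifest (normally served as an ai-plugin.json file pointing at an OpenAPI spec), written here as a Python dict. The field names are recalled from the alpha-era documentation and the "Grocery Helper" service is made up, so treat the whole thing as illustrative.

```python
# Approximate shape of a ChatGPT plugin manifest (illustrative, not official).
# The hypothetical "Grocery Helper" service and its URLs are placeholders.
plugin_manifest = {
    "schema_version": "v1",
    "name_for_human": "Grocery Helper",
    "name_for_model": "grocery_helper",
    "description_for_human": "Order groceries from your favorite store.",
    "description_for_model": "Search products and create grocery orders for the user.",
    "auth": {"type": "none"},
    "api": {"type": "openapi", "url": "https://example.com/openapi.yaml"},
    "logo_url": "https://example.com/logo.png",
    "contact_email": "support@example.com",
    "legal_info_url": "https://example.com/legal",
}
```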
GPT-3 & Embeddings
ChatGPT and GPT-4 have stolen the spotlight, but OpenAI has several other text-related products. GPT-3, the previous flagship LLM, is a general text-completion model: start a sentence, and watch GPT-3 finish it for you. That sounds simple now, but GPT-3 produces far more advanced text than any other LLM that came before it. It can also be fine-tuned on specific examples, which isn't possible with ChatGPT and GPT-4.
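As a rough illustration of that completion-style interface, here is a minimal sketch using the pre-1.0 openai Python package; the model name and parameters are typical of the era, not a definitive setup.

```python
# Minimal GPT-3 text completion (openai Python SDK, pre-1.0 interface).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

completion = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3.x completion model of the era
    prompt="The biggest difference between GPT-3 and ChatGPT is",
    max_tokens=80,
    temperature=0.7,
)
print(completion["choices"][0]["text"])
```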
OpenAI also has an embedding creation model. Embeddings are numerical representations of text - they're a way to measure the "relatedness" of documents. They have several practical use cases, such as search, recommendations, clustering, and classification. Once created, embeddings can be stored in a vector database, such as Pinecone or Weaviate, for further analysis.
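As a small sketch of how embeddings get used in practice (assuming the pre-1.0 openai package and numpy; the documents are made up), one might compute two vectors and compare them with cosine similarity before ever touching a vector database:

```python
# Sketch: embed two documents and compare them with cosine similarity.
import numpy as np
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

docs = ["How do I reset my password?", "Steps to recover a lost account login"]
resp = openai.Embedding.create(model="text-embedding-ada-002", input=docs)
vec_a, vec_b = (np.array(item["embedding"]) for item in resp["data"])

# Cosine similarity: values near 1.0 mean the texts are closely related.
similarity = vec_a @ vec_b / (np.linalg.norm(vec_a) * np.linalg.norm(vec_b))
print(f"similarity: {similarity:.3f}")
```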
DALL-E
DALL-E 2 is a model capable of generating images from a natural language description. While the first version was slow and produced grainy images, V2 has had much more success. At the time, V2's launch was quite impressive in the text-to-image space. But since DALL-E 2's release, competitors like Midjourney and Stable Diffusion have taken the lead on AI image generation.
Whisper
On the audio side, Whisper is a speech recognition model trained on over 600,000 hours of audio data. OpenAI offers APIs for Whisper transcription and translation.
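A minimal sketch of those two API calls, assuming the pre-1.0 openai package (file names are placeholders and the snippet is illustrative):

```python
# Sketch: transcription and translation with the Whisper API
# (openai Python SDK, pre-1.0 interface; file names are placeholders).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

with open("meeting.mp3", "rb") as audio_file:
    transcript = openai.Audio.transcribe("whisper-1", audio_file)
print(transcript["text"])

# Translation works the same way but always produces English text.
with open("interview_fr.mp3", "rb") as audio_file:
    english = openai.Audio.translate("whisper-1", audio_file)
print(english["text"])
```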
Non-productized models
If this wasn't enough, OpenAI has released several different models that haven't yet made it into finished products. They include:
Codex: a GPT-3 variant fine-tuned to generate code. Codex powers GitHub Copilot, the coding assistant and autocomplete tool. While Codex used to be available as a product, it is being discontinued by OpenAI.
MuseNet: a neural network to compose short musical pieces with up to ten different musical instruments in any genre. It doesn't "understand" music but rather uses a GPT architecture to predict the next note in a MIDI file.
Jukebox: another neural network meant to generate music. In contrast to MuseNet, Jukebox generates music (and vocals) as raw audio data, not MIDI files.
That's a lot for one company. But OpenAI is far from alone in developing language models.
Who is competing with OpenAI?
There's no one company that does everything that OpenAI does. While there is plenty of competition for each product, few companies compete with OpenAI across the board.
2. DeepMind
The closest competitor to OpenAI is Google DeepMind [8]. Founded in 2010, DeepMind is an AI research lab focused on building general-purpose learning algorithms. The company has repeatedly made headlines over the years, including when its AlphaGo AI beat world champion Go player Lee Sedol. Another big release was AlphaFold, a protein-folding AI that has predicted over 200 million protein structures to date.
Besides AlphaGo and AlphaFold, DeepMind has an impressive list of models (though few finished products):
WaveNet: a text-to-speech model. Originally too computationally expensive for consumer products, it evolved into WaveRNN, which now powers Google Assistant and GCP's Cloud Text-to-Speech.
AlphaStar: an AlphaGo sibling that plays StarCraft II. In 2019 it reached Grandmaster level on the public ladder.
Sparrow: an AI chatbot designed with safety in mind, using a mix of human feedback and Google search suggestions. DeepMind is considering a private beta release sometime in 2023.
Chinchilla: a large language model that supposedly outperforms GPT-3. It is currently in the testing phase, and we don't know too much about it.
The company has as much experience as OpenAI, if not more, with developing practical machine learning models. But a big difference is the types of products that it builds with them - unlike OpenAI, very few are available as consumer products. This likely has something to do with the company's acquisition history - in 2014, Google acquired DeepMind. The team was kept separate but focused on building its models into Google products. But ChatGPT's success triggered the merging of Google Brain and DeepMind and a new focus on consumer products.
Google, as a whole, is a fascinating competitor to OpenAI. On the one hand, the company is an absolute AI powerhouse. It's released thousands of AI research papers, including the Transformer paper that was directly responsible for today's GPT architecture. It has several internal teams working on AI. It has billions in resources (both cash and compute), meaning it can afford to build state-of-the-art models to compete with OpenAI. And it's clear Google is trying to compete - this year, it released Bard, a ChatGPT rival, and has plans to add AI to many of its products.
On the other hand, the company is gun-shy when it comes to releasing AI products to the public. With good reason: the recent history of AI launches has created scar tissue for Google. In 2018, it showcased Google Duplex, a language and voice technology years ahead of its time. People freaked out. And while it did launch eventually, the rollout was slow and limited in scope. Duplex was discontinued in 2022 - 6 months before ChatGPT launched. Despite its advantages, Google execs have declared an internal "code red," as Microsoft and OpenAI now present a serious threat.
Google still represents the biggest competition to OpenAI. On paper, it has everything it needs resource-wise. Time will tell whether the company can overcome its organizational challenges and keep up.
3. Anthropic
One of the most often mentioned OpenAI competitors is Anthropic, despite having only been founded in 2021. It's often cited for two reasons: first, the founders are ex-OpenAI employees who disagreed with OpenAI's approach to AI safety. Second, it built one of the first ChatGPT competitors: Claude.
Claude, which is still in beta, puts a strong emphasis on ethics and safety. The chatbot is trained with "Constitutional AI," a method for making language models more robust to adversarial prompting. Constitutional AI training involves several rounds of feedback using both humans and AI. In practice, Claude users report that it's less evasive than ChatGPT with difficult questions and can admit when it doesn't know an answer. But it still suffers from hallucinations and is reportedly worse at math and coding.
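To give a flavor of the idea, here is a conceptual sketch of the critique-and-revise loop at the heart of Constitutional AI. The generate function and the two principles are hypothetical stand-ins; this is an illustration of the technique, not Anthropic's actual constitution or code.

```python
# Conceptual sketch of a Constitutional AI-style critique-and-revise loop.
# `generate` is a hypothetical call to any instruction-following LLM, and
# the principles are illustrative, not Anthropic's actual constitution.
PRINCIPLES = [
    "Pick the response least likely to be harmful or deceptive.",
    "Pick the response that is most helpful while staying honest.",
]

def constitutional_revision(generate, prompt):
    response = generate(prompt)
    for principle in PRINCIPLES:
        # Ask the model to critique its own answer against a principle...
        critique = generate(
            f"Critique this response using the principle '{principle}':\n\n{response}"
        )
        # ...then rewrite the answer to address that critique.
        response = generate(
            f"Rewrite the response to address the critique.\n"
            f"Critique: {critique}\nOriginal response: {response}"
        )
    # Revised answers like this can later serve as fine-tuning data.
    return response
```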
After Google, Anthropic is OpenAI's best-funded competitor. The organization has raised over $1 billion to date, including $300 million from Google. And it's not slowing down - it plans to raise as much as $5 billion over the next two years to take on OpenAI and enter over a dozen major industries [9]. It wants to build a model (codenamed "Claude-Next") ten times more capable than the current state of the art.
4. Cohere
Cohere is an AI research company building language models for companies, not consumers. It was founded in 2019 by AI researchers, including one of the authors of Google's Transformer architecture paper. The company has several different products for working with text (a brief usage sketch follows the list):
Summarization, to extract insights from text and documents.
Generation, similar to GPT-3, to write ads, blog posts, and product descriptions.
Classification, to group text into categories or identify hate speech.
Embed, similar to embeddings, to search and cluster input text.
Neural search, to perform semantic search across text content.
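Here is the promised sketch of how those pieces look from code, assuming the Cohere Python SDK of the time (cohere.Client, generate, embed). The method names match its public docs, but treat the snippet as illustrative rather than authoritative.

```python
# Sketch: text generation and embeddings with the Cohere Python SDK.
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder

# Generation, e.g. drafting a short product description.
gen = co.generate(
    prompt="Write a two-sentence product description for a smart coffee mug.",
    max_tokens=60,
)
print(gen.generations[0].text)

# Embeddings, for searching and clustering input text.
emb = co.embed(texts=["refund policy", "how do I return an item?"])
print(len(emb.embeddings), "vectors of length", len(emb.embeddings[0]))
```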
Cohere has a diverse, advanced suite of products. Unlike OpenAI, it's targeting enterprise users and, as such, is focusing on high-performance, secure language models. The company has raised $170 million to date and is in talks to raise additional funding at a $6 billion valuation [10]. Cohere reportedly generated $2.3 million in revenue in 2021 [11], but that figure is unconfirmed.
5. Stability AI
While OpenAI has stopped open-sourcing its models, other companies are continuing that approach. One of the most prominent is Stability AI, which released the image generation model Stable Diffusion. Stable Diffusion is one of the most advanced text-to-image models available, beating DALL-E in image quality. And the model is fully open source, allowing anyone to download and run it locally.
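That openness is what makes snippets like the following possible - a minimal sketch of running Stable Diffusion locally with the open-source diffusers library, assuming a machine with a CUDA GPU and the publicly hosted v1-5 checkpoint.

```python
# Sketch: run Stable Diffusion locally with the diffusers library.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # widely used public checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # a consumer GPU with ~8 GB of VRAM is usually enough

image = pipe("a watercolor painting of a lighthouse at dawn").images[0]
image.save("lighthouse.png")
```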
Stability AI recently released StableLM, an open-source ChatGPT alternative. StableLM is a family of LLMs, with alpha releases available on GitHub. StableLM is a great step toward accessible language models, but the model still needs refinement and testing.
The company is also slated to release video-generation models this year. But it may need more resources to do so: it's "only" raised over $100 million to date [12], far less than other rivals.
6. EleutherAI
EleutherAI is a nonprofit AI research lab and perhaps the true heir to the "Open" AI name. The organization grew out of a Discord server for researchers and enthusiasts to discuss GPT-3 back in 2020, and the researchers quickly decided to build an open-source alternative to GPT-3. In the years since, they have released several significant open-source datasets and machine-learning models (a brief loading sketch follows the list):
The Pile: an 825GB language modeling text dataset, mostly from academic and professional sources.
GPT-Neo: EleutherAI's first attempt at an open-source GPT-3. GPT-Neo is a series of LLMs trained on the Pile dataset, with up to 2.7 billion parameters (vs. 175 billion parameters for GPT-3).
GPT-J/GPT-NeoX: additional publicly available models built to approximate GPT-3, with the largest versions using between 6 and 20 billion parameters. Upon release, the models still required fine-tuning, and EleutherAI stated they should not be used for human-facing interactions.
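Here is the promised loading sketch: because the weights are public, anyone can pull a GPT-Neo checkpoint from the Hugging Face Hub with the transformers library. The prompt and sampling settings below are just illustrative.

```python
# Sketch: run EleutherAI's GPT-Neo locally via the transformers library.
from transformers import pipeline

# The 2.7B-parameter checkpoint is hosted publicly on the Hugging Face Hub.
generator = pipeline("text-generation", model="EleutherAI/gpt-neo-2.7B")

out = generator(
    "Open-source language models matter because",
    max_new_tokens=40,
    do_sample=True,
)
print(out[0]["generated_text"])
```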
In its early days, EleutherAI relied on donations and contributions from cloud platforms in order to build and train its models. However, the group eventually ran into the harsh realities of modern LLM development - the scale of resources needed far outstripped what an amateur group of researchers could assemble. And in 2023, EleutherAI announced its transition to a nonprofit research institute, along with sizable donations from Stability AI, Hugging Face, and others [13].
7. Hugging Face
Hugging Face is not the most obvious challenger, but as one of the biggest players in the machine learning space, it's worth a mention. Hugging Face is GitHub for machine learning. It's a platform for hosting, training, fine-tuning and deploying models. There are almost 200K open-source models available and 40K datasets to help train them.
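As a small illustration of that platform role, the huggingface_hub library lets you query the model catalog programmatically. The search term below is arbitrary and the snippet is a sketch of the library's listing call, not an endorsement of any particular checkpoint.

```python
# Sketch: browse the Hugging Face Hub programmatically.
from huggingface_hub import list_models

# Find public text-generation checkpoints whose names mention "gpt-neo".
models = list_models(search="gpt-neo", filter="text-generation")
for model in list(models)[:5]:
    print(model.modelId)
```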
Right now, the company is focused on AI infrastructure, not products. But as open-source models get better, Hugging Face will be well-placed to offer more than just hosting and tooling. And even without its own products, it will help launch the next wave of OpenAI rivals by lowering the barriers to training ML models. To date, Hugging Face has raised $160 million in funding [14] and has secured a partnership with AWS [15]. It's likely that the AWS partnership will deepen - in fact, Hugging Face would make a great acquisition target for Amazon or Google.
Where we're headed
These are the biggest competitors to OpenAI today - but the AI landscape moves at a breakneck pace. ChatGPT showed the world generative AI's potential and opened a Pandora's box of AI products and companies.
We should expect to see competition continue to heat up, especially from Big Tech. So far, Microsoft and Google are the only ones announcing LLM products. Facebook, despite being an AI research powerhouse, has been noticeably quiet. As have Apple and Amazon, despite owning Siri and Alexa. With AI becoming the "next big thing" in tech, these three definitely have more cards to play.
On the startup side, investors are pouring billions into generative AI. It's still early days - we've barely scratched the surface of general models for text, image, and video generation. And more research will lead to more ChatGPT and Stable Diffusion alternatives. Not to mention the thousands of use cases where tailored models will make the most sense, from finance to construction to design. The current Cambrian explosion of AI startups will most likely continue for a while.
All that said, OpenAI is in a good spot to maintain its lead. Its biggest threats are from big tech companies, though it has years of experience and momentum at this point. And its partnership with Microsoft makes it unlikely to be outgunned by a new startup. Whether OpenAI gets acquired or becomes a tech giant in its own right remains to be seen. But we're in for a lot more technological progress in the coming months and years.
Editor's note: emoji are mine, and some footnoted YouTube videos had to be changed to links.
References & Footnotes
1. https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/
2. https://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat
3. https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf
4. https://www.theverge.com/2023/3/15/23640180/openai-gpt-4-launch-closed-research-ilya-sutskever-interview
5. https://techcrunch.com/2019/03/11/openai-shifts-from-nonprofit-to-capped-profit-to-attract-capital/
6. https://www.bloomberg.com/news/articles/2023-01-23/microsoft-makes-multibillion-dollar-investment-in-openai
7. https://openai.com/research/gpt-4
8. https://www.deepmind.com/blog/announcing-google-deepmind
9. https://techcrunch.com/2023/04/06/anthropics-5b-4-year-plan-to-take-on-openai/
10. https://www.reuters.com/technology/ai-startup-cohere-talks-raise-funding-6-bln-plus-valuation-sources-2023-02-06/
11. https://getlatka.com/companies/cohere
12. https://www.theverge.com/2022/10/18/23410435/stability-ai-stable-diffusion-ai-art-generator-funding-round-billion-valuation
13. https://techcrunch.com/2023/03/02/stability-ai-hugging-face-and-canva-back-new-ai-research-nonprofit/
14. https://www.crunchbase.com/organization/hugging-face/company_financials
15. https://huggingface.co/blog/aws-partnership