Emergent Abilities of Language Models
AI scientists are studying the “emergent” abilities of large language models
Hey Guys,
This is AI Supremacy Premium.
Lately we’ve seen hype around the performance of large fine-tuned language models. In recent years, scaling up the size of language models has proven a reliable way to improve performance on a range of natural language processing (NLP) tasks.
I’ve been going crazy for Stable Diffusion, Midjourney and DALL-E 2 and how easy they make text-to-image art, landscape and creative generation. Indeed, large language models (LLMs) have become the center of attention and hype because of their seemingly magical ability to produce long stretches of coherent text, do things they weren’t trained to do, and engage (to some extent) with topics of conversation that were thought to be off-limits for computers.
A LinkedIn News story ran the ridiculous headline “AI Is Getting Good and Fast.” I think what the editor means is the emergent abilities of language models. He points to a vague NYT article.
But what are we actually talking about? Even as OpenAI’s DALL-E 2 becomes nearly obsolete with the rise of the open-source Stable Diffusion, the world of LLMs moves forward quickly.
The topic is fairly interesting, and quite a few papers have been written on it.
You are reading AI Supremacy, one of the fastest-growing AI newsletters born on Substack in 2022. You can consider upgrading for access to more articles per month and to locked archive posts.