16 Comments
Mar 18 · Liked by Michael Spencer, Valentino Zocca 🇮🇹 🇪🇺

This is a very nuanced dissection of current large language models. Indeed, an LLM lacks world models, and does not know what it is talking about.

The interesting question is how much better an LLM can get if an agent, using the LLM and perhaps other kinds of approaches, can look up supporting evidence for its assertions in existing documents, run tools for verification, run what-if scenarios in simulators, and reflect on the results.

It would still be a mechanical process of imitation and distillation, but there would be grounding. There would be a closed loop, with the effects being passed back to the chatbot, which could then decide what to do next if it missed the mark.
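To make that closed loop concrete, here is a minimal Python sketch of the kind of agent being described. The `draft`, `verify`, and `revise` callables and the `VerificationReport` type are hypothetical placeholders for the LLM call, the evidence/tool/simulator checks, and the reflection step; nothing here is a real API.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class VerificationReport:
    """Hypothetical result of checking an answer against evidence, tools, or a simulator."""
    consistent: bool
    feedback: str


def grounded_answer(
    question: str,
    draft: Callable[[str], str],                            # LLM produces an initial answer
    verify: Callable[[str, str], VerificationReport],       # look up evidence / run tools / simulate
    revise: Callable[[str, str, VerificationReport], str],  # feed the observed effects back to the model
    max_rounds: int = 3,
) -> str:
    answer = draft(question)
    for _ in range(max_rounds):
        report = verify(question, answer)
        if report.consistent:
            return answer                      # grounded: the claim checked out
        answer = revise(question, answer, report)  # the model decides what to do next
    return answer                              # best effort after max_rounds
```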

author

I agree, and I reckon the 2025 to 2030 period will see a lot more agency along those lines. Whatever the technological singularity is supposed to mean for commercial tools, we are at least making some progress.


My job is to provide very hard answers to appallingly simple questions.

A nice synopsis of the state of the LLM art. AGI does not worry me. SkyNet will not be what we think it will be. And I should know, having spent time with one of the Terminators.


I wonder how well some of our animal cousins would fare against some of these definitions of intelligence. Since we have no idea of their inner workings, we can only evaluate them based on outward observed behavior. In that regard, I agree that intelligence appears to be a continuum. Is an LLM "more intelligent" than a rock, but less intelligent than an octopus or a bird? I guess only time will tell.

author

Not sure they can be compared as we do not really have a good working and measurable definition of intelligence. And no animal understands or can speak our language.


Exactly my point - If we don't have a good working and measurable definition of intelligence, how can we define, identify and measure AGI for intelligent machines?

author

Not having a measurable definition of intelligence does not mean we lack a general understanding; they are separate concepts. We understand that humans are more intelligent than dogs based on our greater ability to solve problems, but we cannot say by how much. In addition, LLMs have the ability to "understand" language but not the world, while animals possibly have a better understanding of the world but no understanding of language, making a comparison even harder. Humans understand both and can be broadly compared to either, but again, it is more a qualitative comparison than a quantitative one.

Aug 15, 2023 · Liked by Valentino Zocca 🇮🇹 🇪🇺

Impressive article. But whenever anyone predicts something in a time range of over 50 years, at that point anyone's guess is just as accurate.

And I've heard wilder guesstimates than a humble half century.


โœ”๏ธโœ”๏ธ๐Ÿ•—We've reasonably already accomplished a degree of AGI in ChatGPT.

Despite its flaws, reasonably no single human is as smart as ChatGPT across a wide variety of tasks. (Granted, there are, for example, several PhD-level humans smarter than GPT in isolated fields or groups of fields, but still, none reasonably exceed GPT across such a wide variety.)

As for when these systems will demonstrate human autonomy, that is reasonably unknown.


Valentino (or Mike), how do you think about the concept that LLMs mean that many improvements in one field are now also improvements in another field? This is a bit of a tangent from the main message, but not by much, and it's fun to consider. Programming language breakthroughs matter a lot more now. By how much does this shorten the potential time line?

I know this is pretty speculative, but maybe someone can pick up the ball and run with it.

author

I do think there is a bit of a compounding effect, as LLMs improve things like coding productivity and enable new developments in entertainment such as gaming. And as LLMs become hyper-specialized on proprietary data in fields like finance or medicine and their sub-fields, there is definitely, in my mind, some spillover effect: as generative A.I. evolves, things like science improve, along with things like the efficacy of physicians.

We are only beginning to get an idea of how these overlaps, compounding effects and accelerations might take place - A.I. in drug development, for instance. I wouldn't call it exponential tech per se, but in tandem with better compute, better model efficiency and newer models we could see some surprising emergent developments imho. In terms of multi-modal LLMs we are just at the very beginning.


Well said. That's pretty much how I see it.

The spillover effects are obvious, but their magnitude is not.

author

That is a good way of summarising it, but the magnitude may be very small until hallucinations and other problems are solved. We are all aware of this, for example: https://apnews.com/article/artificial-intelligence-chatgpt-fake-case-lawyers-d6ae9fa79d0542db9e1455397aef381c


Yeah, making up stuff is a nonstarter for legal arguments.

Still, I like the idea that one innovation travels further across more fields, and I'm encouraged by this concept. I don't know how much it speeds up our time line, but it almost certainly speeds it up some.

author

If you want to support the author and this topic, please upvote and give a comment on this Reddit thread: https://www.reddit.com/r/singularity/comments/15qqz0r/how_far_are_we_from_agi/

Aug 14, 2023 · Liked by Michael Spencer, Valentino Zocca 🇮🇹 🇪🇺

Really well written article, covering a lot of ground and research.

I'd like to add this paper into the mix - https://thegradient.pub/othello/

It's an example of an LLM (Othello-GPT) creating an internal world model, which I think is a true sign of intelligence. This also challenges the view that LLMs can't exhibit intelligence by learning just from text.

If you think of LLMs as being trained on language instead of just text, then this (at least for me!) starts to make sense. Language (as opposed to text) is how we describe the world and communicate our thoughts and feelings. If LLMs are trained on our view of the world (i.e. through language), I believe that they can develop intelligence and approach AGI.
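To make the probing idea concrete: the evidence for an internal world model in Othello-GPT comes from training a small probe to read the board state out of the network's hidden activations. Below is a minimal sketch, with random arrays standing in for real activations and board labels; the paper's actual probe setup differs in detail.

```python
# Sketch of a board-state probe: if a simple classifier can recover a square's
# occupancy from a model's hidden activations, the model has encoded the board
# internally. The data here is random stand-in data, not real Othello-GPT output.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
activations = rng.normal(size=(1000, 512))    # stand-in for hidden activations at 1000 positions
square_state = rng.integers(0, 3, size=1000)  # stand-in labels: 0 = empty, 1 = mine, 2 = yours

probe = LogisticRegression(max_iter=1000).fit(activations, square_state)
print("probe accuracy:", probe.score(activations, square_state))
# On real Othello-GPT activations, high probe accuracy on held-out games is the
# evidence that a board representation exists inside the network.
```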
