7 Comments
Aug 31, 2023 · Liked by Jurgen Gravestein

Some of this seems to echo points made by Noam Chomsky, who highlights the elegance and efficiency of the human mind compared with the brute-force inaccuracy of LLMs. One issue I have is that so many people seem to ignore their own experience of using these systems. I have found ChatGPT to be scarily inaccurate when it comes to code, yet it presents its answers with such confidence that I expect many people are taken in by the ultra-confident 'personality' of the bot.

author

Thank you for your thoughtful reply! I'm definitely sympathetic to Noam Chomsky's views; we have a tendency to underestimate just how intelligent we really are. The amount of information our brains are able to process and learn from is truly astonishing and, frankly, poorly understood.

Aug 31, 2023 · Liked by Michael Spencer, Jurgen Gravestein

AGI is a very human hallucination

author

I like this quote very much.

AGI is a bit like a case of anthropomorphism in AI.

Some have even equated hallucinations in LLMs with the creativity of adolescence. If you consider how much culture, environment, emotion, and belief are mixed in with human ideas, it would seem that hallucinating is pretty normal.

author

Thanks for your reply! We humans are definitely not immune to hallucinations ourselves ;)

Aug 31, 2023 · Liked by Jurgen Gravestein

Great points. I believe AGI is inevitable. We have to embrace the good as much as brace for the bad.

author
Aug 31, 2023 · edited Aug 31, 2023

Thanks for your reply! Everything is possible on an infinite timescale. I think the hard thing about predicting the future is that it's incredibly easy to think of all the possible futures, but incredibly hard to predict the probable ones.
