Some of this seems to echo points made by Noam Chomsky, who points to the elegance and efficiency of the human mind compared with the brute-force inaccuracy of LLMs. One issue I have is that so many people seem to ignore their own experience of using these systems. I have found ChatGPT to be scarily inaccurate when it comes to code, yet it presents its answers with such confidence that I expect many people are taken in by the ultra-confident "personality" of the bot.
Thank you for your thoughtful reply! I'm definitely sympathetic towards Noam Chomsky's views; we have a tendency to underestimate exactly how intelligent we really are. The amount of information our brains are able to process and learn from is truly astonishing and, frankly, poorly understood.
AGI is a very human hallucination
I like this quote very much.
AGI is a bit like a case of anthropomorphism in AI.
Some have even equated hallucinations in LLMs with the creativity of adolescence. If you consider how much culture, environment, emotion and belief are mixed in with human ideas, it would seem that hallucinating is pretty normal.
Thanks for your reply! We humans are definitely not immune to hallucinations ourselves ;)
Great points. I believe AGI is inevitable. We have to embrace it for the good as much as for the bad.
Thanks for your reply! Everything is possible on an infinite timescale. I think the hard thing about predicting the future is that it's incredibly easy to think of all the possible futures, but incredibly hard to predict the probable ones.