Why You Should be Skeptical of AGI
Hope vs. Hype in the Generative A.I. Race. Check out the poll at the end.
This is the third installment in our series on AGI. Today I invite Jurgen Gravestein, a writer, consultant, and conversation designer. He was employee no. 1 at the Conversation Design Institute and now works for its strategy and delivery branch, CDI Services, helping companies drive more business value with conversational AI. His newsletter Teaching computers how to talk is read across 38 US states and 82 countries.
Articles in the AGI Series:
Why you should be Skeptical of AGI by Jurgen Gravestein
This is the last day to get A.I. Supremacy at $8 a month or $75 a year; it will go up to $10/$100 as of September 2023.
You can also get a discount for your team, and write it off with your learning and development budget.
There are plenty of people skeptical of Generative A.I. claims that AGI is imminent. Let's try to understand why:
WHY YOU SHOULD BE SKEPTICAL OF AGI CLAIMS, TOO
By Jurgen Gravestein
August 2023, the Randstad, Netherlands

Many tests have been proposed to see if a machine has reached human-level intelligence. The most (in)famous is probably the Turing test, but I bet you've never heard of the Coffee test: a machine is required to enter an average American home and figure out how to make coffee.
It sounds deceptively simple. A machine would have to figure out where the coffee is, find a filter, a mug, maybe grind the coffee if it's just beans, boil some water and then proceed to brew. I think we can all agree this task requires more intelligence than we're currently able to conjure up in our machines, yet for humans it doesn't get more mundane than this. The test was introduced by Steve Wozniak, co-founder of Apple, and makes an interesting case about how we should think and talk about artificial general intelligence (AGI).
AGI is Silicon Valley's favorite three-letter acronym nowadays and it's all anyone can talk about. But what is it really, and are we as close to achieving it as we're led to believe? I decided to dive in to separate fact from fiction and hope from hype. Buckle up!
Playing fast and loose with AGI definitions
AGI is a hypothetical type of artificial intelligence that doesn't exist yet, hence the word "hypothetical". It's said that, if it is ever invented, such a machine could learn to accomplish any task that human beings can perform, as well as or better than us. Achieving AGI has been named as the primary goal of AI companies like OpenAI, Google DeepMind, Inflection, and Anthropic.
As of 2023, AGI remains entirely speculative. It hasn't been demonstrated that such a thing can be built, nor is there agreement on what exactly would constitute an AGI system. There is no broadly accepted definition, and because of that people are more than happy to play fast and loose with the term, depending on whether they want to invoke fear, inspire, or secure their next venture capital injection.
Expert opinions vary on if and when AGI will arrive. Some say we're terribly close. Dario Amodei, CEO of Anthropic, for example, said in a recent podcast that he thinks we will achieve AGI in 2-3 years.
AI researcher Geoffrey Hinton, who recently left Google, has stated:
"(…) I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that."

Many others say we are much further away, and yet others say we will never achieve it.
The opinions of experts who are employed by some of the aforementioned AI companies should be taken with a grain of salt. Not because they are lesser experts, but because from a business standpoint it's always better to slightly overstate your AI capabilities than to downplay them, as downplaying gives off the impression of being "behind".
Progress has been undeniably impressive over the past few years, though. GPT-1 was released in 2018 and had 117 million parameters. Its fourth generation, GPT-4, released in March this year, is rumored to consist of eight models with 220 billion parameters each, and it is capable of a lot. It's safe to say that the technology has ushered in the first new UI paradigm shift in 60 years, but whether it has brought us any closer to AGI remains up for debate.
Sam Altman, CEO of OpenAI, is optimistic but has acknowledged publicly that further progress will not necessarily come from making models bigger.
Yann LeCun, Chief AI Scientist at Meta, is less optimistic and has called LLMs an "off-ramp" to achieving AGI.
Gary Marcus has even suggested generative AI could turn out to be a dud. He argues that the technology we have today is built on completion and not on factuality, and there's a good chance the hallucination problem can't be solved.
How do we know if they're smart?
Since we don't really agree on a definition, there are also no objective measures to tell whether a system has reached or surpassed human-level intelligence, but that doesn't keep us from trying.
We love to subject current AI systems to all sorts of benchmarks and tests originally designed for humans. OpenAI's GPT-4 reportedly scored in the top percentile on the Uniform Bar Exam, the Graduate Record Exam, and the US medical licensing exam, and aced several high-school Advanced Placement tests. However, in a recent article in Science, Melanie Mitchell explains why we should be cautious in interpreting this as evidence for human-level intelligence.
One of the big problems is data contamination: when a system has already seen the answers to the questions before. On one of the coding benchmarks, GPT-4's performance on problems published before 2021 was significantly better than on problems published after 2021, the year of GPT-4's training cutoff. I recommend reading this article from AI Snake Oil if you'd like to know more about this particular case.
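To make that before/after-cutoff comparison concrete, here is a minimal sketch in Python of the kind of check described above. The file name, field names, and cutoff date are assumptions made purely for illustration; they are not taken from the actual benchmark or the AI Snake Oil analysis.

```python
import json
from datetime import date

# Assumed training-data cutoff, used purely for illustration.
CUTOFF = date(2021, 9, 1)

def accuracy(records):
    """Fraction of benchmark problems the model solved."""
    return sum(r["solved"] for r in records) / len(records) if records else 0.0

# Hypothetical results file: one record per problem, with a publication
# date and whether the model solved it.
with open("benchmark_results.json") as f:
    results = json.load(f)

before = [r for r in results if date.fromisoformat(r["published"]) < CUTOFF]
after = [r for r in results if date.fromisoformat(r["published"]) >= CUTOFF]

# A large gap between these two numbers is the red flag described above:
# the model may simply have memorized the pre-cutoff problems.
print(f"Pre-cutoff accuracy:  {accuracy(before):.1%} (n={len(before)})")
print(f"Post-cutoff accuracy: {accuracy(after):.1%} (n={len(after)})")
```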
Because of the lack of transparency from AI companies like OpenAI, it's impossible to prove contamination with certainty. Transparency in general is not in their best interest, not just for competitive reasons, but also because they would much rather report their successes than publicly acknowledge their flaws.
Even if a language model hasn't literally seen an exact problem in its training set, it might have seen examples that are close (simply because of the ginormous size of its training corpus), allowing it to get away with a shallower level of reasoning. Whether current AI systems can really reason at all is a hot topic of discussion in its own right. A paper published last month went so far as to say that GPT-4 can't reason at all, concluding that "despite its occasional flashes of analytical brilliance, GPT-4 at present is utterly incapable of reasoning".
Some jumped to GPT-4's defense in an attempt to refute the paper, showing that with the right prompting they were able to get the right answers out of the system.
But all that shows is that the system produces different results under different circumstances, demonstrating precisely its inconsistency and unreliability. It suggests that something else is happening under the hood, something that may look like reasoning but is in fact not reasoning at all.
Machine intelligence vs. human intelligence
The problem is that we desperately want AGI to be like us. We're training AI systems to speak and act like us, but at the same time they couldn't be more different from us.
In many ways GPT-4 is already "smarter" than the average person, in the sense that it can access information and formulate answers more quickly than any of us on a vast array of topics. At the same time, it's much more "stupid". It can't plan ahead very well or reason reliably (even though it gives off the impression it can). It might be able to express words of empathy, but it doesn't feel anything (even though it gives off the impression it does). And it has no will or thoughts of its own (it only moves when prompted). Ergo: we have built a system that may appear to be smart or empathetic, but could just be posing as such, giving you the impression that it is.
And yes, these systems are able to perform tasks that typically require human intelligence, but they are not arriving at the results the same way we do. A calculator is superior to a human at performing math, just as a chess computer is superior to a human at playing chess, yet we wouldn't call these machines intelligent. So what's so different about GPT-4?
This machine speaks our language! It is more fluent than any machine that came before it, and that messes with our heads. As humans, we tend to project intelligence, agency, and intent onto systems that provide even the smallest hint of linguistic competence, which is commonly referred to as the ELIZA effect. We're tempted to see a ghost in the machine, but in reality chatbots like ChatGPT are not much more than glorified tape recorders and, according to futurologist and theoretical physicist Michio Kaku, the public anxiety over this technology is grossly misguided.
These AI systems don't learn from first principles and experience, like we do, but by crunching as much human-generated content as possible, a process that requires warehouses full of GPUs and can hardly be called efficient. Pre-trained models are then trained further through a process called reinforcement learning from human feedback (RLHF) to make them more accurate, more coherent, and more aligned with our values. In a way, we're trying to brute-force intelligence by throwing as much compute at it as possible and then tinkering with it to optimize for human preferences.
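To give a rough feel for what that tinkering involves, here is a deliberately toy sketch of the RLHF idea in Python. The "model", the "human labeler", and the "reward model" below are all placeholders invented for illustration; real systems train neural networks on the preference data and update the model's weights with algorithms such as PPO.

```python
import random

def pretrained_model(prompt):
    # Stand-in for a pretrained LLM sampling a completion for a prompt.
    return random.choice(["short answer", "a somewhat longer answer", "a very long rambling answer"])

def human_preference(a, b):
    # Stand-in for a human labeler choosing the completion they prefer
    # (here, arbitrarily, the shorter one).
    return a if len(a) <= len(b) else b

def fit_reward_model(preference_pairs):
    # Stand-in for training a reward model on human comparisons:
    # we simply count how often each completion was preferred.
    wins = {}
    for winner, loser in preference_pairs:
        wins[winner] = wins.get(winner, 0) + 1
        wins.setdefault(loser, 0)
    return lambda completion: wins.get(completion, 0)

# 1. Collect pairs of model outputs and ask "humans" which one they prefer.
pairs = []
for _ in range(200):
    a, b = pretrained_model("some prompt"), pretrained_model("some prompt")
    winner = human_preference(a, b)
    pairs.append((winner, b if winner == a else a))

# 2. Fit a reward model to those preferences.
reward = fit_reward_model(pairs)

# 3. Steer output toward completions the reward model scores highly
#    (real RLHF updates the model itself; here we only rerank samples).
candidates = [pretrained_model("some prompt") for _ in range(10)]
print("Preferred completion:", max(candidates, key=reward))
```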
What we end up with is not human-like intelligence, but a form of machine intelligence that appears human-like. If that's a concept of intelligence you're comfortable with, then AGI might indeed be near. Unfortunately, it also means you then must acknowledge that a machine is only as smart as the next person it fools into believing it's smart.
So what's next for AGI?
Despite all the ambiguity and the lack of good measurements, nothing will keep AI companies from claiming they've reached AGI in the future. I even suspect OpenAI will claim AGI when they finish training the next generation of their model, GPT-5, and that's alright. When the time comes, we'll argue about it online, and I'll continue to brew my own coffee in the morning for the time being.
Something we do have to take into account is the possibility of another breakthrough. Today's generative AI revolution was ignited by the invention of the transformer back in 2017. What if the next giant leap turns out to be the catalyst that gives us real artificial general intelligence?
We might find some answers in Mustafa Suleyman's upcoming book "The Coming Wave". For those who don't know, Suleyman is the co-founder and CEO of Inflection AI and previously co-founded DeepMind.
An excerpt:
As technology proliferates, more people can use it, adapt it, shape it however they like, in chains of causality beyond any individual's comprehension. One day someone is writing equations on a blackboard or fiddling with a prototype in the garage, work seemingly irrelevant to the wider world. Within decades, it has produced existential questions for humanity. As we have built systems of increasing power, this aspect of technology has felt more and more pressing to me.

Technology's problem here is a containment problem. If this aspect cannot be eliminated, it might be curtailed. Containment is the overarching ability to control, limit, and, if need be, close down technologies at any stage of their development or deployment. It means, in some circumstances, the ability to stop a technology from proliferating in the first place, checking the ripple of unintended consequences (both good and bad).

The more powerful a technology, the more ingrained it is in every facet of life and society. Thus, technology's problems have a tendency to escalate in parallel with its capabilities, and so the need for containment grows more acute over time.

Does any of this get technologists off the hook? Not at all; more than anyone else it is up to us to face it. We might not be able to control the final end points of our work or its long-term effects, but that is no reason to abdicate responsibility. Decisions technologists and societies make at the source can still shape outcomes. Just because consequences are difficult to predict doesn't mean we shouldn't try.
The book feels like it was written with a deep sense of urgency, and its general message is one I think we can all get behind.
When we will reach AGI may be up for debate, but what isn't up for debate is that in the coming decades humanity will be faced with a lot more uncertainty. The societal impact of increasingly advanced AI systems is going to be uniquely disruptive, and it's up to us to face those challenges head-on.
AGI Survey
Thanks for reading. To check out more guest posts by talented A.I. writers, go here.
Some of this seems to echo points made by Noam Chomsky, who points to the elegance and efficiency of the human mind compared with the brute-force inaccuracy of LLMs. One issue I have is that so many people seem to ignore their own experience of using these systems. I have found ChatGPT to be scarily inaccurate when it comes to code, but it presents its answers with such confidence that I expect many people are taken in by the ultra-confident "personality" of the bot.
AGI is a very human hallucination