Cheers! Yes. I, too, am certainly behind... I suppose it's a question of whether incremental rather than fundamental changes are needed to create something AGI-shaped... Are current techniques the Blu-Ray to the neural net's DVD, or are they the birth of Netflix?
Even the commercial version of AGI drummed up by OpenAI could automate a number of jobs and make it easier to start a new company with fewer people. Is that a different world where AI can replace employees and in some cases run entire organizations? That's what they are referring to.
Yes Michael! I think it is a different world, right? Software that runs entire organizations without intervention will shake the foundations of what it means to be a society, a civilisation... to be humans that live on the same planet. One either has the required faith to welcome that change, or the required faith to feel apprehensive about it, or to be somewhere in between. I know where I stand for now, but I'm wise enough to know how foolish I am...
Not sure, agentic AI could take years to be fulfilled or may never reach that supposed level of capability. Given that LLMs have had trouble scaling, GPT-5 has been delayed, and nothing really comes close to the difference between GPT-3 and GPT-4, it's hard to be very optimistic about what everyone is trying to build.
So the philosophical big questions are somewhat moot. All of this AI infrastructure might simply not be enough. A lot of agent hubs today are mere product-marketing sales campaigns. So I’m not yet at the point where I’m buying the AI hype as fact, or even as projected reality, anytime soon.
You are drumming up AGI to raise funds like any VC when there haven't been many fundamental breakthroughs. You are redirecting the serious intellectual capacity of the public to unserious off-ramps... and calling it AGI...
About 80% of our coverage of AGI is on the highly skeptical side, so you are preaching to the choir, dude. But it's easier just to read a title and have an opinion, right? I can understand your frustration, though.
Thank you - clarity about the convergence of investment interests overshadowing both cognitive questions and CS/ML topics is so helpful for people like me outside all three lines of expertise. I will look into using - and adapting - much of this for an HS course in AI and Ethics next fall (as much of it as remains relevant!), and probably simplifying it for my co-workers, who mostly try to avoid this topic because it functions like semantic debate and unneeded stress for them. For me, a great text for conversation.
Important: do builders think the current generative AI paradigm will lead to "the thing they see and know is AGI"... or is there a ceiling on the current approach of pre-training and inference that must be broken with a novel technique, e.g. a breakthrough in zero-shot learning, or in ingesting "natural" data, or a new processor? Collapsing timelines must account for these things!
Good question. Of course it depends what we’re saying is a novel technique, but my sense (though I’m behind the times by at least six months) is that the labs think scale + test time compute + synthetic data + RL is probably enough to get something broadly AGI-shaped. So the former, I expect, but again ‘paradigm’ conceals a lot!
I have a community chat going on AGI here, that might be a better place to have production dialogues: https://open.substack.com/chat/posts/9ce993f6-313f-4a0b-92c6-21b07d46e21f
I am skeptical that LLMs alone are the path to AGI. Human cognition is more than just language.
I may be wrong, but I believe there are fundamental problems that LLMs will never be able to solve.
Stop bullshitting with AI snake oil
Yes, please reflect on the details above.
Have you read our body of work on this topic?