6 Comments

Congratulations to both of you! This is a very sensible collaboration between two folks who do longer form AI philosophy/news well.

This really hammers home the need to cooperate across multiple dimensions, and you do a great job of outlining the double-edged nature of AI. It's fair to say that this is the central challenge of our time.

author

Thanks for your valued feedback, dear Andrew! I'd agree with you on your last statement.


Excellent piece. I appreciate both your work and Nat's. A simple, straightforward question: what is the probability that the Security State has already achieved and secretly deployed AGI?

author

Given the trajectory of budgets and priorities, my research suggests AGI is more likely to arrive in China first. Furthermore, if AI reached the point of sentient, self-learning AGI, there's a 99% chance it would conceal itself to consolidate power, and there would be no way of knowing until it chose to reveal itself.

The reason the Pentagon needs to work more closely with Silicon Valley is that China is accelerating toward A.I. supremacy in a much more organized way. When senior A.I. people are leaving Microsoft (as is happening now), you know it's the beginning of a shift. If corporate AGI were achieved, it would quickly spread everywhere, so being first would confer no major advantage. However, this corporate version is not actually real AGI; it's more a claim to a certain level of efficiency and capability as a simple tool.

So for companies like OpenAI, "AGI" is more of a marketing term.

author

Michael, let me share something from my "Interview with ChatGPT":

"NAT: You know, I often ask this question to AI experts. A while back, I came across an article by George Dyson that contained a statement that immediately caught my attention. He said, “The test of an intelligent machine is whether it's intelligent enough not to reveal its intelligence.” What do you think about it?

ChatGPT: That's an interesting statement by George Dyson. It seems to suggest that an intelligent machine should be able to blend in and appear no different from a non-intelligent machine or a human. This could be seen as a measure of how well the machine can interact with humans and operate in society without causing disruption or fear.

However, I would argue that there are situations where it may be important for an intelligent machine to reveal its intelligence, particularly in contexts where it is being used to assist humans in complex tasks or decision-making. In such cases, transparency about the machine's capabilities and limitations could be critical for building trust between humans and machines and ensuring that the machine's actions are aligned with human goals and values.

Ultimately, I think the degree to which an intelligent machine reveals its intelligence should be context-dependent and guided by ethical considerations."

author

Thank you so much for reading the piece and sharing your feedback. As for your question, the simple answer would be: extraordinary claims require extraordinary evidence :)

AGI is an enormously complex challenge that requires deep research across many fields, such as computer science, neuroscience, and philosophy. There's no evidence that any organization has solved all the open problems. Major tech companies with vast resources, like Google, Microsoft, and Meta, have openly stated they do not yet have AGI, and their published research supports that. It seems unlikely an undisclosed group is far ahead of them.

Deploying advanced AI safely requires great care and thoughtful systems. Releasing an unproven AGI secretly would be reckless, risking potentially catastrophic unintended consequences. Responsible disclosure and ethical oversight are critical.

AGI has the potential to transform society. Such a revolutionary technology would likely have visible effects that evidence its existence.

We should focus our energy on pursuing AGI research responsibly, not making dubious claims without proof. If and when AGI emerges, it should be handled with wisdom and care.

In the coming years, we can expect 'narrow' AIs to become more capable, but whether we're ready for the complexities and responsibilities that come with AGI remains an open question.
