Is Meta's AI CICERO Dangerous?
AI is being trained to play the games people play, with serious real-world implications.
Hey Everyone,
Is it just me, or do Meta’s AI creations have a tendency to go somewhat sideways? In this respect I’m a bit worried about Meta’s AI CICERO, which the company claims is the first AI to play the strategy game Diplomacy at a human level.
I’ve been concerned about what would happen if we really did invent AGI or superintelligent artificial intelligence. And I think we will invent Machiavellian machines that outsmart us.
I see the appeal of OpenAI training an AI to play Minecraft, the game Microsoft bought, by watching YouTube videos. The bot OpenAI developed is a near-perfect example of imitation learning, a form of supervised learning.
However, Meta AI’s CICERO is all about AI learning the art of deception. Is this what we want to be training AI to do?
So what is CICERO?
Meta AI claims that CICERO is a step forward in human-AI interaction: an AI that can engage and compete with people in gameplay using strategic reasoning and natural language.
Meta AI also recently announced Galactica, an AI model trained on scientific knowledge, which shockingly began spitting out alarmingly plausible nonsense! Meta AI doesn’t seem to do its due diligence before announcing things to the world, and what happens if they make a more serious mistake one day?