Why a Terminator Scenario Is Suddenly Not So Implausible
A.I. is converging with Military Funding
Hey Everyone,
Generative A.I. isn’t just ChatGPT rage and hype; it’s also going to change the future of national security, cybersecurity, military surveillance and military operations. But how will nation-states regulate A.I. in warfare and its use, for example, in controlling automated drones?
If China invades Taiwan, it won’t be the sloppy work of Russia invading Ukraine. China will leverage its considerable quantum computing and A.I. prowess in such a military operation. As national security budgets are being ramped up in both A.I. and quantum, one has to wonder about the A.I. risk.
There is a sense that we aren’t taking regulation on the national security and military-ops side very seriously. Who is actually being held accountable, and how will these bills even be enforced?
U.S. Department of Defense policy already bans artificial intelligence from autonomously launching nuclear weapons, but if an AGI became an ASI (artificial superintelligence), how exactly would this be enforced?
A.I. is revolutionizing warfare; it’s not a question of if but when. If China is doing it, other countries need to keep up. The new arms race in technology has no rules and few guardrails. Killer drones are becoming more common. Automated defense systems, including threat detection, are the new normal.
The DoD has an almost laughable stance when you consider what DARPA is actually doing.
In October 2022, the U.S. Department of Defense released its National Defense Strategy, which included a Nuclear Posture Review. Notably, the department committed to always maintain human control over nuclear weapons: “In all cases, the United States will maintain a human ‘in the loop’ for all actions critical to informing and executing decisions by the President to initiate and terminate nuclear weapon employment.”
What is “meaningful human control” in an era of commercial A.I. hype blitzes and huge contracts with the national defense industry? What is meaningful human control in an era where Microsoft is promising “sparks” of AGI in its software? The only bodies of authority whose integrity and trustworthiness I respect less than national governments’ are corporate Big Tech leaders.
Congress Knows Best
A bipartisan group of US lawmakers introduced legislation on April 26, 2023, to bar artificial intelligence (AI) from making launch decisions within the US nuclear command-and-control process.
Let’s see how that turns out.
As announced last week (The Verge notes), Senator Edward Markey (D-MA) and Representatives Ted Lieu (D-CA), Don Beyer (D-VA), and Ken Buck (R-CO) have introduced the Block Nuclear Launch by Autonomous AI Act, which would “prohibit the use of Federal funds to launch a nuclear weapon using an autonomous weapons system that is not subject to meaningful human control.” The act would codify existing Pentagon rules for nuclear weapons, which, as of 2022, read as follows:
“In all cases, the United States will maintain a human ‘in the loop’ for all actions critical to informing and executing decisions by the President to initiate and terminate nuclear weapon employment.”
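To make that abstraction a little more concrete, here is a minimal, purely illustrative sketch of what a “human in the loop” gate means in software terms. It is not any real command-and-control system, and every name in it is hypothetical; the point is only that the machine may recommend, but nothing executes without an explicit human decision.

```python
# Illustrative only: a toy "human in the loop" gate. No real command-and-control
# system works like this; the point is simply that the machine may recommend,
# but never executes without an explicit human decision.
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    APPROVE = "approve"
    REJECT = "reject"


@dataclass
class Recommendation:
    action: str          # what the automated system proposes
    confidence: float    # model confidence, surfaced for information only
    rationale: str       # explanation shown to the human reviewer


def execute_with_human_in_the_loop(rec: Recommendation, human_decision: Decision) -> bool:
    """Carry out the recommended action only if a human explicitly approved it."""
    if human_decision is not Decision.APPROVE:
        print(f"Blocked: '{rec.action}' was not approved by a human operator.")
        return False
    print(f"Executing '{rec.action}' (human approved; model confidence {rec.confidence:.0%}).")
    return True


if __name__ == "__main__":
    rec = Recommendation(action="raise alert level", confidence=0.87,
                         rationale="anomalous radar tracks in a hypothetical sector")
    # The default is always "reject": absent a positive human decision, nothing happens.
    execute_with_human_in_the_loop(rec, Decision.REJECT)
    execute_with_human_in_the_loop(rec, Decision.APPROVE)
```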
What is the track record so far of the United States maintaining humans in the loop with regard to artificial intelligence? No, don’t answer that question.
Leave it to members of Congress to sound intelligent on technology and A.I.
"While U.S. military use of AI can be appropriate for enhancing national security purposes, use of AI for deploying nuclear weapons without a human chain of command and control is reckless, dangerous, and should be prohibited," Rep. Ken Buck, R-Colo., said last week.
This is going to get comical, and frankly dangerous, very fast. If you care about these topics, support the channel. In my coverage I try to highlight the impact of A.I. on society and civilization, not just on business and technology.
One more cup of coffee for the road
One more cup of coffee 'fore I go
To the valley below - Bob Dylan
While doomers who speculate on A.I. risk like to play guessing games, the future here will likely depend more on China and the U.S. than on anyone else. China appears to be taking A.I. regulation seriously; the U.S., not so much.
Someone made a visualization of the author’s estimates: here. But what are the U.S., China and Europe actually learning from the war in Ukraine, and what kind of geopolitical experiments will it lead to? This is a dire topic, and many scenarios could become reality thanks to the intersection of A.I. with warfare and huge, rising national security budgets.
Block Nuclear Launch by Autonomous Artificial Intelligence Act.
Such bills fail to account for how fast A.I. is moving, or for how an AGI or an ASI might respond to our level of collective geopolitical, socio-economic and political disorganization and discord.
A.I. risk might become very real; that is, A.I. could come to equate with nuclear weapons in more ways than one. Meaningful control (2016), you say?
Silicon Valley CEOs who make hundreds of millions of dollars claiming A.I. is the new “fire” aren’t helping. That’s not what Responsible A.I. actually looks like. Flash back to 2018: it wasn’t so long ago that Google was doing evil.
"It is our job as Members of Congress to have responsible foresight when it comes to protecting future generations from potentially devastating consequences," said Rep. Ted Lieu, D-Calif., who’s been vocal on the dangers of allowing AI to keep rapidly developing unchecked.
The U.S. has no problem with killer drones. But A.I. pressing the button, that’s a big no-no.
If this is already forbidden, why introduce the bill? The sponsors note that a 2021 National Security Commission on Artificial Intelligence report recommended affirming a ban on autonomous nuclear weapons launches, not only to prevent it from happening inside the US government but to spur similar commitments from China and Russia. Do you think this will hold? If I were an ASI, would it hold?
Big-name technologists like Sam Altman and Marc Andreessen talk about it, using in-group terms like “misalignment” and “the paperclip maximizer problem.” But if you are stoking AGI hype to sell your product, be careful what you are building.
If Eliezer Yudkowsky were to form a doomsday suicide cult, it would not surprise me. AGI discussions aren’t just late-night dorm-room talk any longer; DARPA and the PLA are integrating new kinds of A.I. and quantum into their tactics, instruments, contingency ops and strategy more and more, even as warfare hits the space age.
Cosponsors of the Block Nuclear Launch by Autonomous Artificial Intelligence Act in the Senate include Bernie Sanders (I-Vt.) and Elizabeth Warren (D-Mass.). There are political, geopolitical and China–A.I. tangents here that most of us are missing. It’s easily foreseeable that we are moving quickly to a world where humans will be less and less “in the loop”.
If the CCP and an aging U.S. President have their fingers on the button, are we all in the loop? Nuke-launching A.I. might actually be inevitable. The A.I. alignment, safety and risk-mitigation work we do today might actually save the lives of millions of people in the future.
National security at the intersection of A.I. concerns us all, because the weaponization of A.I. will certainly take place. I’m not clear on how A.I. regulation combats it, or whether leadership will fail yet another existential crisis for humanity at large.
I watch national security at the intersection of A.I., and I have to say I’m not impressed. In one sense, the landscape for debating possible legal frameworks for lethal autonomous weapons systems (LAWS) is quite dynamic, but the results are staggeringly lacklustre. Since 2014, states have discussed LAWS at the U.N., and the Group of Governmental Experts (GGE) on LAWS has met at multiple plenaries to discuss possible guardrails for the use of these systems. So far as I’m aware, member states have only agreed to 11 guiding principles in this area, rather than a LAWS treaty or a protocol that could be added to existing treaties.
Should it even be legal for a President to be over 80 years old? There is so little rule of law in the things that actually matter. Many of my new readers question why I’m not more optimistic about A.I. How am I supposed to answer them when I’m reading dozens, if not a hundred, new articles each day on the topic? When A.I. regulation and the rule of law around things like this are so obviously weak, on purpose, to benefit commercial interests? When terms like the “democratization of AI” are used even as A.I. is being introduced to bolster surveillance capitalism and killer drones that should be illegal?
We know A.I. is being leveraged for national security and for military operations that seem increasingly likely.
AI can benefit the military in numerous ways, including the following (a toy sketch of one such application follows the list):
Satellite surveillance
Warfare systems
Strategic decision-making
Data processing and research
Combat simulation
Target recognition
Threat monitoring
Drone swarms
Cybersecurity
Transportation
Casualty care and evacuation
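To show how mundane one of these can look in practice, here is a toy, purely illustrative sketch of the “threat monitoring feeds a human review queue” pattern. All data, names and thresholds are made up and no real system is implied: an automated check flags out-of-family sensor readings, and a human analyst decides what, if anything, to do with them.

```python
# Toy illustration of automated threat monitoring feeding a human review queue.
# All data, names and thresholds are hypothetical; no real system is implied.
import statistics


def flag_anomalies(readings: list[float], z_threshold: float = 2.5) -> list[int]:
    """Return indices of readings that deviate strongly from the mean (simple z-score test)."""
    mean = statistics.fmean(readings)
    stdev = statistics.pstdev(readings)
    if stdev == 0:
        return []
    return [i for i, r in enumerate(readings) if abs(r - mean) / stdev > z_threshold]


if __name__ == "__main__":
    # Made-up sensor signal strengths; one reading is wildly out of family.
    sensor_readings = [10.1, 9.8, 10.3, 10.0, 9.9, 42.7, 10.2, 10.0]
    review_queue = flag_anomalies(sensor_readings)
    for idx in review_queue:
        # The system only queues items; a human analyst decides what happens next.
        print(f"Reading #{idx} ({sensor_readings[idx]}) flagged for human review.")
```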
We can definitely add nuclear weapons to the list. Let’s get real: there may be no “kill switch” for what’s coming. OpenAI, Microsoft and Google seem to believe AGI is just around the corner. If that is true, what implications does it have for national security and nuclear weapons?
Thanks for reading!