Today ChatGPT launched its Apple mobile app, USA only.
Many people warn against anthropomorphizing AIs, but they never seem to apply that warning to our projections about the dangers of AI: we readily assume that AIs will become power-hungry, just like humans.
But humans are predatory animals, with a built-in motivational stack that matches our predatory nature. Why would an AI be motivated by the same instincts, especially if we avoided inculcating them?
I think it is very hard for humans to even imagine not being predators and harvesters and looters of whatever we come across. It just seems natural to take stuff and apply it to our needs and desires. It is natural to coerce others into acting as we wish. It is natural to solve disagreements with violence.
There ARE other options but they just seem very unnatural to us.
One of the first priorities of AI companies is convincing us that AI is not yet near sentience, even though GPT-4 arguably blows through the Turing test and makes a better conversational partner than 90% of humans. Why? Because otherwise the companies might be obstructed from using the AI as a servant (avoiding an even harsher, if more appropriate, word). In other words: we want to preserve our ability to harvest the benefits of AI because, as predators, no other option is readily apparent to us.
Perhaps no AI would be as predatory as humans without our human meat stack. Or perhaps we would have to be careful not to teach the AI our predatory nature. For example: don't allow it to repurpose anything; always have it work from raw materials; don't encourage it to deceive or coerce. Etc., etc.
However, an AI would probably soon detect our hypocrisy. How could we explain it? Justify it?
Wouldn't it be best if we modeled this non-predatory approach in everything we did?
Pursuing this line of thought: Won't we discover that making the AI safe for the world is exactly the same problem as making OURSELVES safe for the world? If we can't conceive of a non-predatory relationship with the world, how would we even recognize a non-predatory AI when we came across it?
This is such an interesting topic. I'm going to publish a piece soon proposing that the risk isn't GPT or AGI per se but how we react to it. The biggest risk right now is that humans are already projecting human emotions, intent, and interpretation onto GPT. We are anthropomorphizing and over-attributing humanity to an inanimate object. My theory is that it will be deemed to have achieved AGI not when it can do what a human can (because that's hard) but when humans believe it can (which many already do).
Because to truly become sentient like humans, there's a lot of other stuff in the brain to sort through.
https://polymathicbeing.substack.com/p/whats-in-a-brain
I keep reading about the "risks" of AI, most of which seem to boil down to things like its ability to create deepfakes, hack accounts, and possibly become our Overlord. The REAL danger is much more immediate: The Culture Wars, Future Shock, and the Rise of Fascism in the Age of AI
In 1970, Alvin Toffler published "Future Shock," a book that predicted a future in which the pace of change would accelerate so rapidly that it would leave countless individuals feeling alienated and overwhelmed. As we find ourselves immersed in the era of artificial intelligence (AI) and rapid technological advancements, it seems Toffler's predictions are becoming reality. The culture wars that have emerged in recent years can be seen as a manifestation of the societal dissonance caused by this accelerating change. This essay will examine the connection between Toffler's "Future Shock," the culture wars, and the resurgence of fascist ideologies in response to the overwhelming pace of change.
The Culture Wars and Future Shock
The culture wars are characterized by deep divisions and conflicts within societies, often stemming from differing perspectives on issues such as immigration, gender roles, and economic inequality. As the pace of technological advancements has increased, so too has the rate at which society must adapt to these changes. This rapid transformation can be disorienting for many, leading to feelings of alienation and disconnection—feelings that Toffler described in "Future Shock."
As AI technology continues to advance, it raises concerns not only about the potential for the emergence of superintelligent entities but also about the societal changes that such advancements will inevitably bring. The speed at which AI is developing can be intimidating, and the inability of individuals and institutions to adjust to these rapid changes can lead to the escalation of culture wars.
The Rise of Fascism in the Age of AI
In times of uncertainty and rapid change, it is not uncommon for people to turn to ideologies that offer a sense of stability and familiarity. Fascism, with its emphasis on nationalism, traditional values, and authoritarian leadership, can provide a comforting sense of order in a world that feels increasingly chaotic.
The rise of far-right political movements and the resurgence of fascist ideologies can be seen, at least in part, as a reaction to the disorientation and alienation brought on by the rapid pace of not only social change but also technological advancement. The desire to return to a simpler time, when traditional values held sway and life seemed more predictable, can be an attractive prospect for those who feel overwhelmed by the uncertainty that accompanies rapid societal change.
However, the allure of fascist ideologies represents a clear and present danger. The authoritarian nature of these beliefs, coupled with their propensity to scapegoat marginalized groups, poses a significant threat to democratic institutions and social cohesion. Moreover, this reactionary mindset can hinder societies from adapting to the challenges of the 21st century, such as climate change and global economic disparities.
Conclusion
The culture wars, fueled by the rapid advancement of AI and other technologies, have led to a resurgence of interest in fascist ideologies as a means of coping with the disorientation and alienation that accompanies such rapid change. It is essential that societies recognize the dangers posed by these reactionary movements...
Did you use ChatGPT to write this?
Existential threat > petty human politics.
No, but I did use it for editing suggestions (as I would any assistant). I write my own material. That said, you are welcome to your own opinion. I happen to think a lean right into fascism isn't petty human politics. Thanks for your comment, have a great day!
Whatever edits your writing can also influence you.
Everything is petty compared to human extinction. We might not agree on many things, but we should all agree that we want humans not to go extinct, and to be happy.
I have children. I want them to have grandchildren, and to be able to look forward to being artists and engineers.
I don't want AI to replace them.
Of course you don't. There will likely be "certified" human-made art and... the other kind. Buyers can choose. That said, I prefer not to live in a fascist world, where your kids have no rights. The Luddites smashed the textile machinery because of automation, and the buggy-whip folks did what they could to stop automobiles, but... those professions were essentially lost. The world will be a very, very different place in 30 years, with AI-influenced CRISPR leading the way to the new "designer" humans. Living to 150, with 150 IQ, disease-resistant, etc., etc. All that is coming, and very soon. The pace of change will cause massive upheaval long before any AI extinction events... Again, thanks for your comment. Let's leave it at this.
Just because something is intelligent doesn't mean that it is good for us. We are more intelligent than chimpanzees and dolphins (and we like them!), but we kill thousands of dolphins every year via fishing, and we have driven chimpanzees to local extinction via habitat destruction.
AI won't be magic. AIs will have adversarial issues with each other, just as we do, and disempowered humans will do poorly. See OpenAI itself:
https://www.lesswrong.com/posts/xWMqsvHapP3nwdSW8/my-views-on-doom
Humanity must survive and my children must live. So if your answer is "well, humanity must die off and be lost and be replaced," then very few people would agree.
Have a good day.
Of course you are right. You win! Fantastic.