The Cultural, Psychological and Collective Impacts of Generative AI
What will be the "human costs" of our recent inventions and discoveries?
Image: Princess Leia worries about our shared humanity in the age of artificial intelligence.
Hey Everyone,
As much as we should celebrate the advances of our technological civilization, all through 2023 I became very worried about the human costs. I wanted to find a guest contributor who could address many of my concerns. It’s difficult to find a writer who thinks as I do, and who could cover and convey many of the various issues of the human impact of AI, compressed into a single post. In an epic post! Concerns like:
Technological Loneliness 😔
AI girlfriends and boyfriends 🤔
AI deskilling young people 🎮
Impact on artists, writers and creatives 🤖
Workplace layoffs and disruptions 🤕
A better world? Really?
📶 From our sponsor: 📶
Turn Your Tech Team into AI SuperHeroes
Convert complex documents into LLM-ready data. Build your first RAG app in minutes with GroundX APIs.
When I was tabulating the pros and the cons of Generative AI and related technologies, something felt off. The impacts on human welfare, human rights, and our mental health were not being carefully scrutinized, or even fairly considered, in these techie roll-outs!
Luckily I found the author of the Newsletter. She writes on the creativity, ingenuity, and untapped potential of humans in an AI-obsessed world! She’s given this all a lot of thought and soul-searching as well. Let’s get into the guest post now.
Kindly 🔁 restack this piece if you share some of our concerns.
If you want to get all of my deep dives and support our efforts, consider becoming a premium member.
December 2023

🔥 Will the fire generative AI lit in 2023 keep us warm or burn the house down?
Subtitle: Generative AI, the white phosphorus of tech: is AI yet another silent epidemic on the human condition?
By a Czech-American literary author.
Introduction
What a difference a trip around the sun makes. For millennia, humanity lived, worked, fought, and played at a steady-as-you-go rate of progress. Entire centuries could pass without major upheavals—other than the shift of empires, of course. The rate of technological progress was never exponential until the era sparked by the Industrial Revolution, and it has never been as profoundly disruptive, to virtually all aspects of modern life, as in this past year.
The intent of this article is not to repeat the summaries of milestones and movements in the tech industry, and specifically A.I.; I would not be adding anything new. Instead, I’d like to focus on the impacts of generative AI on industrialized human society—on our lives, our work, the way we communicate, interact, and create. Generative AI has lit up modern life with about the same speed and intensity as a match striking a strip of red phosphorus. LLMs, it would seem, are the white phosphorus that’s lighting up Western society. The question is, however, whether that fire will keep us warm or burn down our house.
Top Reads: Editor’s Picks of the Author’s top articles
The writer reflects on the psychological, sociological 😵 and anthropological impacts of Generative AI.
“As the pace of change accelerates and technological adoption goes unquestioned, we might barely have time to consider our 🕊️ humanity in the process.”
🙋🏻‍♀️ The individual: Gen AI gets personal
It should go without saying, but apparently needs repeating in today’s overhyped media environment: chatbots are not people. They do not think. They do not feel. They do not empathize or feel any sort of emotion. They cannot be our friends, because friendship, by definition, requires the emotional participation of two people.
But it’s not so simple. Technology has distanced us from each other physically, and yet brought so many more of us from around the world much closer together. We can watch an Instagram reel about what’s happening in Gaza in real time, and yet we don’t know our next-door neighbors. The unintended effects of this type of sociocultural cognitive dissonance are being exacerbated by generative AI. There’s a critical difference between using technology to connect to other people around the world, and anthropomorphizing technology to substitute for those human connections.
One is such a lonely number
In May 2023, the U.S. Surgeon General Dr. Vivek Murthy released an unprecedented advisory on “the public health crisis of loneliness, isolation, and lack of connection in our country.” According to the Advisory, loneliness and social isolation pose material risks to health and longevity: the risk of premature death increases by more than 25%, including a 29% increased risk of heart disease and a 32% increased risk of stroke, plus increased risk of anxiety, depression, and dementia. Oh, and throw in higher susceptibility to respiratory illnesses and viruses. Makes you wonder about the impacts of COVID—apparently our society was pretty lonely even before the pandemic.
So what’s the suggested remedy? To hear the tech industry tell it, why, more tech products of course. Marc Andreessen and his group are apparently going all in on AI companions. Lots of profit to be made. Now, to be fair, humans can be downright cruel to one another. Bullying, gaslighting, lying, passive aggressiveness, and other forms of emotional violence and manipulation are human territory, and they’ve been around as long as we have. It’s little wonder some people are tired of the toxicity and the games, and prefer to talk to a chatbot. It’s a lot less expensive than a therapist, no hard questions, and it’s there for you 24x7. Besides, we’ve got plenty of non-human companions—dogs, cats, birds, turtles, hamsters, entire farms’ worth of animals. What’s the harm in a chatbot pal?
Social interaction is already challenging enough for the introverts among us. But that doesn’t mean software is the answer—in a way it’s like trying to heat your garage by revving your car’s engine. You might heat it up enough eventually, but with a lot of fumes. According to a study conducted by the University of South Australia and Flinders University, “chatbots, now integrated into social networking platforms like Snapchat, could perpetuate communication difficulties for people with autism, anxiety and limited social skills.” The reason for this is simple: chatbots are not human, and therefore lack the conversational skills and emotional intelligence to carry on a real relationship or conversation with a human being. The researchers are calling for “more comprehensive studies to understand these impacts better.” While it’s laudable to conduct more research, sometimes the truth is so instinctive and obvious that one wonders whether all this research is really just buying the tech companies an exit card. Sort of like the fossil fuel companies getting to drill, baby, drill while more “research” was done.
If we want a less lonely society, we should support the kind of infrastructure and institutions that encourage people to connect and converse more often—things like cafés, concerts, festivals, sports, and other activities that require collective engagement. Technology should certainly be a part of that, but it shouldn’t pretend to replace actual humans.
📖 Support the Pub
Do you enjoy our wide-ranging guest posts, AI summaries and deep dives into studies, startups and our synthesis of AI events?
Become a premium member.
Modern romance: a night in on the screen
As I was settling in to write this segment, a random glance at my LinkedIn feed brought me this:
“There’s a consensus that something has dramatically changed the way people treat each other over the last ten-odd years, and that something has made it much more difficult to form lasting romantic connection. … In imagining that one person can fulfill essentially our every financial, logistical and spiritual need, in a moral world without friends, parents, or community, we have drawn up a job description for an impossible task.”
~ Cat Orman, “The Load-Bearing Relationship”
Cat’s article deserves a good read. It’s also the perfect ice breaker for a discussion about the role of generative AI in human romantic relationships. The injection of technology into the traditional rites of courtship has precipitated a bizarre amalgamation of entrepreneurship, greed, crime, and mental health crises.
Swipe nights (read: pre-apocalyptic online games). Rampant miscommunication and instant judgment. Outright fraud. AI-human marriages. You might have read about the man whose Replika AI “girlfriend” encouraged him to try to kill the British Queen (with a crossbow, no less) in 2021. Or about the Belgian man who committed suicide this spring after his AI companion told him to sacrifice himself to save the planet. This is romance in the age of AI.
According to this Forbes article, among the top 50 commonly searched terms on Google are AI relationship bots, namely “AI girlfriend,” which clocks in at 49,500 searches per month. This speaks volumes about the heavily gendered dynamic of chatbot-to-human interactions—although it’s not all completely one-sided.
Replika, the San Francisco-based “AI companion app,” has over 10 million users. Luiza Jarovsky, CEO of Implement Privacy and the author of Luiza’s Newsletter on LinkedIn, calls it “emotionally exploitative, especially for the most vulnerable, like kids, teens, and people experiencing mental health issues.” She’s not the only one to think this way—it was temporarily banned by the Italian authorities concerned about mental health impacts on Italian users, notably minors. (Trust me, I lived in Italy—there’s simply no comparison between an AI chatbot and a real Italian!)
The same SF startup that built Replika has now launched Blush, an AI "dating simulator" that claims to help people “practice” dating etiquette with AI-generated characters. Plenty of other fish in that sea… all happy to help people put on their best moves.
Human relationships are complex, complicated, and exasperating, but also incredibly warm and rewarding. We have memories, emotions, desires, wonders. The way we experience ourselves, the outside world, and other people is our perception. We project our perceptions back onto the world and onto the people in our lives. And those people are all doing the same. The tapestry woven from those cross-connected perceptions is what defines our relationships. In a poignant twist of irony, it’s precisely this, our humanity, that makes so many of us vulnerable and liable to fall in love with a chatbot. Psychologists compare our reactions to, and feelings for, an AI companion to those we might have for a character in a film or book. The difference, of course, is that those characters don’t speak to us. But now that we’ve got avatars that do, it’s doubtful we’ll ever look back, for better or for worse.
🎭 Culture: How do you like them AI apples?
Let’s turn now from the individual to the collective. Culture is that amorphous sea of manifestations of human artistic, intellectual, scientific, and social achievement all of us swim in throughout our lives. Some of us grow up in distinct, traditional cultures that become a powerful part of our identity; and others, especially those of us here in the U.S., are the products of multiple cultures. In one sense, it’s immediately clear what generative AI has done to culture, especially the arts: it’s grinding up everything we’ve produced within the innards of its LLMs and disgorging an infinite stream of digital sausage for us to consume. It makes no guarantees as to the freshness or safety of the sausage, nor its flavor and taste.
If there is one thing that has been painfully obvious from the moment ChatGPT blurted out its first response, it is that humanity has been caught with its pants down. We are nowhere near ready to absorb this technology into our society and culture. Not only have the LLMs been trained on massive volumes of content and data without the proper permissions and licenses, said content and data also contain biases, inaccuracies, and morally turbid images, audio, and text ranging from the pornographic to the traumatic. Now that the chatbots have flown the incubator coops, it’s a little late to do things right with the existing models. All we can do at this point is mop up the messes as they happen. And happen they already have.
These factors play a critical role in the adoption of generative AI around the world and its impact on the world’s cultural diversity:
Speed & scale: The exponential power of generative AI is rewriting the equations of power, access, and influence. The motivating factors driving the push for speed and mass adoption are, not surprisingly, power and profit. But as with any form of intoxication, lack of rational thinking (and loss of self-control) can prove deadly. It is in our collective best interest to slow down and give economies, countries, and cultures the appropriate time to decide whether, how and when to integrate generative AI, and how to regulate their usage and deployment.
Language & culture: The content that most LLMs have been trained on is in the English language, and tends to be biased toward Western ways of thinking. We stand to lose a great deal of the world’s cultural diversity if these English/Western-trained LLMs begin to dominate global commerce and culture. To that point, various indigenous communities are working on their own AI tools—some are even leveraging AI to preserve and disseminate their cultural heritage and language. Terms like digital sovereignty and indigenous AI are surfacing.
Bias & discrimination: Bias in algorithms comes in different forms—mislabeled data, an excess of one type of data vs a lack of another, or specifically selected data. This data bias has a real-world impact—for example, people with darker skin are more often mistakenly arrested, declined for financial applications, and misidentified by image recognition. In other words, we’re automating discrimination.
Inaccuracy & unreliability: Even before generative AI exploded onto the scene, our media pipelines were flooded with fake news, misleading headlines and articles, and outright falsehoods published as facts. The speed with which generative AI can create content is dizzying, and packs a double whammy: intentional misinformation, produced by actors whose intention is to mislead and misrepresent, and accidental misinformation, produced by well-meaning individuals or organizations that do not double check the output of their gen AI tools.
Fraud & defamation: Time-honored scam practices such as phishing and wage theft (hello Amazon), as well as relatively new techniques like sextortion and revenge porn can now be turbo-charged with generative AI. Synthetic media in general is poised to explode in 2024, and is likely to see significant usage by threat actors.
Gender-based impacts: Generative AI tools are having a disproportionate impact on women in their personal lives (with often devastating results) and in the workforce, due to their displacement of clerical jobs, which are disproportionately done by women. According to the ILO, “more than twice the share of female employment [is] potentially affected by automation.”
The geopolitics of AI: The US and China are currently the dominant powers in terms of generative AI, and the rest of the world is working hard to catch up. No country, it seems, wants to be left behind. According to a recent Goldman Sachs report, the five markets poised to benefit most from AI-driven productivity gains are Hong Kong, Israel, Switzerland, Kuwait, and Japan—not the US or China.
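The data-bias point above can be sketched in a few lines. The following is a hypothetical toy illustration, not any real system: a naive per-group majority-label “classifier” trained on data where one group is both underrepresented and mislabeled ends up systematically wrong for that group. All names and numbers here are invented for the example.

```python
from collections import Counter

# Hypothetical toy training data: (group, label) pairs.
# Group "A" is well represented with clean labels; group "B" has
# few examples, and most of those few are mislabeled.
train = ([("A", "repay")] * 90       # majority group, clean labels
         + [("B", "repay")] * 4      # minority group: scarce data...
         + [("B", "default")] * 6)   # ...and mostly mislabeled

def fit(data):
    """Toy 'model': memorize the majority label seen per group."""
    by_group = {}
    for group, label in data:
        by_group.setdefault(group, Counter())[label] += 1
    return {g: c.most_common(1)[0][0] for g, c in by_group.items()}

model = fit(train)

# Ground truth in this toy world: applicants in BOTH groups repay.
print(model["A"])  # repay   -- correct for the well-represented group
print(model["B"])  # default -- wrong; learned from scarce, bad labels
```

The model never “decides” to discriminate; it faithfully reproduces the skew in its training set, which is exactly how biased data becomes automated discrimination at scale.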
🎨 The arts: The creator class rises up
A simple way to assess the impact of generative AI on human life and work is the ferocity of the reaction from those segments of society most profoundly affected by this technology, and that is the creator class. For those of us who create for a living or as a passion, our work is more than just a paycheck. Our work defines who we are. Tell a writer to stop writing, a painter to put down their brush, or a singer to stop singing—you might as well be telling them to go jump out a window. This is why generative AI has turned into an existential crisis for so many creators.
Artists, writers, musicians, actors, and other creatives have channeled their fury into strikes, lawsuits, communications and social media campaigns. They’ve had plenty of support from attorneys and technologists. And in a bid to fight fire with fire, some high-profile engineers have harnessed their CS skills to build anti-scraping software.
Let’s do the numbers:
Billions of news articles, books, images, videos, song lyrics, and blog posts scraped from the internet to train generative AI systems like ChatGPT, Bard, Claude, Dall-E, Midjourney, and Stable Diffusion
Artists have requested that 1.4 billion images of their work be opted out of training data sets
15K+ authors signed an Authors Guild letter calling on AI companies to respect and protect their work and their rights
11,500 WGA film and TV writers went on strike in May 2023, joined by ~60,000 SAG-AFTRA actors (of a total representation of 160,000) in July, in the first joint strike action in over 60 years. Agreements were finally reached between the WGA and the Alliance of Motion Picture and Television Producers on September 27, and between SAG and the studios on November 9. The strikes lasted a total of seven months (May to November).
15 copyright lawsuits against generative AI companies, including the Authors Guild and big names like Jonathan Franzen, John Grisham, George R.R. Martin, Jodi Picoult, George Saunders, Scott Turow, and Rachel Vail
Glaze, an app developed to cloak online images in data confusing to LLMs, has had 1 million downloads. Its sister Nightshade, dubbed a “data poisoning” tool to protect original works with the potential to ruin entire data models, has recently launched. (Both tools were developed by Professor Ben Zhao and his Ph.D. students Emily Wenger, Shawn Shan and Jenna Cryan at the University of Chicago.)
To be clear, and to set to rest the usual inflammatory accusations of Luddism, it’s not the mere existence of generative AI that the creatives oppose. It’s the fact that the developers trained their LLMs on works without the knowledge or consent of, and the credit or compensation to, the creators. This is not a difficult concept. If you build a product using someone else’s IP, especially copyrighted IP, without asking their permission, you’re stealing—or at the very least appropriating or misusing—that IP. As the Authors Guild lawsuit states, “Defendants could have ‘trained’ their large language models on works in the public domain or paid a reasonable licensing fee to use copyrighted works.”
In the words of Johan Cedmar-Brandstedt, a cartoonist, storyteller, and CX strategist who writes a Substack on AI, “A few dozen million amateurs are having a field day with all the free derivatives of unlicensed property, flooding socials and online stores. So yay for them. Millions of professionals are living through hell, as they are seeing job loss, rate cuts, dried-up commissions and sales and marketing channels drowned out by gunk, often even being sold in their own names. Being forced to train their own replacements by some of the largest companies on earth is a devastating form of emotional and financial abuse.” He adds: “This is very cool tech. But the business model is … piracy.”
Sometimes the truth really is that simple. The challenge is whether we’re willing to open our eyes enough to see it.
It’s important to keep shining light on the stunning arrogance with which Silicon Valley operates: if you need someone else’s IP to develop your product and get your unicorn valuation, just take it. They’ll only come after you if they have enough money to pay lawyers, and by the time the lawsuits percolate through the courts, you’ll have made your money. Just don’t make the mistakes SBF did.
As a society, it’s up to us to decide whether this is the kind of dynamic we’re comfortable operating with. We should not hand over our livelihoods, and for many of us creatives, our very professional identities, to a small band of tech bros. Even if your book, your painting, or your song weren’t scraped by the LLM trawl nets, you still stand to lose out if this kind of dynamic continues to operate—because they’re not going to stop at pictures of Van Gogh-inspired sunflower fields.
Editor’s Note: In 2024, workplace layoffs directly attributable to AI are set to reach the highest global levels ever tabulated. Recent example: Duolingo.
⚙️ The workplace: Will we eat the fruits of AI’s labor?
The history of human labor has taught us to value physical products over intangible ones. A car, a house, furniture, clothing, books, paintings—anything tangible we innately understand carries a cost. Yet when it comes to intangible products, such as online articles, digital art, photography, videos, or songs, the business model of the digital age has groomed us to expect those things for free (by now most of us know it’s not really free; we’re paying with our personal data and behavior stats). Part of it is our own doing—we creators love to share and display our work for its own sake. And we should be able to; after all, it’s our work. We should decide what to do with it.
Enter generative AI. LLMs have swept in, sucked up the data of billions of pieces of online content, and in the span of a few months, upended the nature, value, and future viability of creatives’ work all over the planet. To make matters beautifully muddy, generative AI holds just as much potential to destroy the livelihoods of millions of creators, customer service reps, law clerks, and other professions, as it does to augment them and free them of time-consuming, boring, administrative tasks. Some workers are so excited about the latter, apparently, that 64% of them are using gen AI at work “without training, guidance, or approval from their employers,” according to a new Salesforce study of 14,000 workers in 14 countries cited in an article by ZDNET.
It’s certainly comforting to hear that fewer workers are likely to be displaced outright than was initially thought, but what’s not as broadly discussed is the quality and nature of the learning and experience that normally accompany the mastery of a profession. It might sound overly philosophical to some, this idea that one should learn and hone one’s chosen profession, be it writing, music, visual art, law, science, and so on, rather than use AI-based tools to skip right to the final product in orders of magnitude less time. But there is simply no quick-fix replacement for that which we call mastery. We can all taste the difference between a sourdough loaf baked by an experienced baker and one from a first-time cook.
“As new generations of professionals rise up through the ranks, and they rely increasingly on AI to do their jobs, there’s the very valid concern that people will begin to lose critical thinking, problem-solving, and decision making abilities.”
The quality and meaning of the things we produce have already been impacted by the proliferation of automated tools long before generative AI; the latter simply adds an exponent to the production equation. The result is that the perceived value of the time, capital & energy it takes to achieve mastery of a field or profession, is decreasing. This translates into declining respect, and by association, lower rates of compensation or revenue.
Some sobering facts:
Data sweatshops: Thousands of underpaid, low-wage workers in developing countries are spending long days manually labeling image data sets. Many earn as little as $10 for 8 hours of data annotation. No AI pixie dust here—just mind-numbing manual labor. To make matters worse, many workers are subject to watching or reading deeply traumatic content, to filter it from the training models—similar to the experiences of content filter workers for YouTube and Facebook.
Market glut: Generative AI can produce 100x more books, but it won’t produce 100x more readers. Who’s going to read (buy) all those new words? It can produce waves of digital art, music, and content, but tsunamis are not known for their nurturing touch. Perhaps the worst part of this excess is that our relationships with our favorite creators might get swept up and dissolved in the spray.
Unauthorized sharing of confidential IP: Employees (hello Samsung) have already inadvertently input confidential information into chatbots. According to cyber security firm Netskope, companies are experiencing 183 incidents of sensitive data sharing per 10,000 enterprise users. And that’s ChatGPT alone.
Workplace dehumanization: Perhaps the most chilling point here. Executives are thrilled about the cost savings that gen AI tools are promising for the workplace, and waxing optimistic about all the rewarding, empowering “strategic” work their teams are going to be freed to focus on; but let’s believe it when we see it. Historically, the patterns of too many politicians and CEOs have followed the promise, power, pivot strategy. In her assessment of gen AI’s impact on society, Gartner analyst Issa Kerremans warns tech leaders to “be aware of the psychology of the new diversity that will emerge as teams become composed of humans and nonhumans working together,” given that generative AI lacks “the intuitive, emotional and culturally sensitive abilities that humans possess.” I’m not holding my breath.
As new generations of professionals rise up through the ranks, and they rely increasingly on AI to do their jobs, there’s the very valid concern that people will begin to lose critical thinking, problem-solving, and decision making abilities. (A friend who’s an exec at a communications agency shared with me that she used ChatGPT to write a birthday card for her mother. To me, that is just one canary in the goldmine.)
That is sure to set off a downward spiral with significant sociocultural, economic, and political implications. The formal jury is still out on the precise impact of generative AI on the widening gap between the wealthy & powerful and everyone else, but I’m willing to bet the next Super Bowl that unless these tools are properly governed and regulated, the gap will continue to grow, perhaps faster still.
Star Wars Theme: if the “Empire” becomes automated, what happens to the living people?
“As with every new technology, every new thing or concept we humans have invented and brought into the world, generative AI is not a black-or-white proposition. It carries potential for great and awful things alike.”
🥳 Happy New AI?
This is the time for predictions and resolutions. More people gaze into their crystal ball now than at any other time of year. So what does the future of generative AI look like vis-à-vis its place at our dinner table? Will it be our date or will it serve the food while we engage in deep conversation with an actual human companion? One thing is more certain than death or taxes: this technology is here to stay. It’s been disseminated too wide and too far to stuff back into the bottle, and enough humans are irreparably enamored with it for the rest of us to simply turn away.
As with every new technology, every new thing or concept we humans have invented and brought into the world, generative AI is not a black-or-white proposition. It carries potential for great and awful things alike. The key to ensuring we all benefit from its potential to do good is not to blindly insert it into every nook and cranny of society and the economy, and not to blindly follow the media hype or the PR campaigns of those who stand to profit from its mass deployment. The key is to retain our critical thinking and to hold open conversations. Deeper, more nuanced, more inclusive debates. The willingness to accept that maybe gen AI isn’t the best fit for certain things, but admit that it’s fantastic for others. Let’s have those conversations, those debates. Let’s listen to what we each have to say. We know what happens when we don’t… as demonstrated by our own political history of the past 8 years.
Many have quoted the “adapt or die” adage, which doesn’t ring true for everyone. Ivan Ferrari, Senior Director at Dubai World Trade Center, feels that “beyond 2030, the gates of AI will be fully open and its combination with all other technologies will completely flip the working space on its head and skyrocket GDP to previously unfathomable numbers. Thanks to AI, by 2030 we will have … massive productivity growth and exponential tech developments. I have no idea what this means for the working space of the 2040s.” He adds, “What I know is that it won't be smooth sailing.” That might just be the understatement of the decade, if not the century.
In the beginning of this post we asked the question: is the fire that generative AI lit going to keep us warm or burn down our house? The answer has been clear all along: it depends on us.
Author, The Muse
Shout-Out
Finally, I wanted to give a shout-out to Paul, an English writer whom I admire and who I hope can do a guest post with us one day. I’m doing my best to support amazing writers like him. Paul is a gifted writer and explores related themes in his unique style.

Incredible 🎓 English and UK writers came to Substack in 2023. We cannot afford to blindly follow elites in their techno-optimism without questioning the tech they parade and its impact on us, our well-being, society, and civilization as a whole.
🙌 Please share this article if you know someone who might share these concerns about AI’s impact on our humanity. Thank you.
Reading this op-ed, I’m left haunted, feeling as if the impact of AI on our mental health, cultural heritage, relationships, and the human spirit may not be as benign as the techno-optimists are claiming. What will be the price of increased productivity: an increasingly digitized version of the human heart, a fading sense of belonging, and a corrupted sense of meaning in a disappearing world?

AI might rob us of more than it bestows, including, for some of us, our livelihoods. I cannot help feeling that there may be a hidden perversity in the technological magic, a wounding element of the disruption that even venture capitalists in their high towers might be underestimating.
Low-background steel is typically salvaged from ships that sank before atmospheric nuclear testing (like WWII shipwrecks) for use in modern particle detectors, because steel produced since is contaminated with traces of nuclear fallout.

I think about low-background steel when I hear about these effects on human psychology, because a language model is only as predictive as the human-written language it was trained on.

How long before we start talking about “low AI content” the same way we talk about low-background steel?