The Great Disconnect
Loneliness and isolation in the age of AI: another pandemic or part of progress?
“Dancing at the End of the World” - Midjourney
Friends,
Technological loneliness is an increasingly harsh reality for our mental health, particularly among young people.
Being a human can be lonely. But is it getting worse the more time we spend on screens?
Loneliness is a dire warning sign, but with technology sometimes locking us into unhealthy habits and society minding its own business, it can be hard to fix. How do you “fix” a hole inside yourself in an increasingly robotic world?
Feeling alone has been linked to adverse health outcomes including heart disease, stroke, cognitive decline, and low immunity. Some studies compare the mortality risk of social isolation to that of smoking. More recent research has found links between loneliness and Alzheimer’s disease.
I asked Birgitte Rasine to tackle this difficult piece in an op-ed as a guest post on this newsletter. She writes The Muse: On the creativity, ingenuity, and untapped potential of humans in an AI-obsessed world. She argues that with the adoption of generative A.I., prompt engineering, and copilots in business, we should always put people first.
By Birgitte Rasine

Summer of 2023. You know when the Surgeon General writes an op-ed about loneliness in The New York Times, things have taken on the color of ants, as they say in Latin America.1
It’s not just the Surgeon General’s op-ed. Headlines all over the media ecosystem have been blaring about the mental health bomb of loneliness and isolation for several months now. It first blew up during COVID, when the world was shoved nose-first into a sea of N95 masks, everyone went online for everything, and cities turned into ghost towns. We disconnected in person and reconnected online.
Much of that seems like a distant nightmare now. But not only has the COVID tide not fully receded, there’s a tsunami forming on the horizon that might make that quarantine feel like a kindergarten field trip. The robots are coming, the headlines now warn. AI is going to take not only your job, but your last few shreds of sanity and whatever connections and relationships you had with other humans, too.
During COVID, the skyrocketing ubiquity of social media algorithms in our lives turned into an overdose of epic proportions. Where social media dug a trench, generative AI is going nuclear. It has the potential to displace entire professional sectors; gut the creative class; re-thread our communal relationships with doctors, educators, and law enforcement; dissolve the nuances of romance; and turn our sense of what it means to function in human society inside out.
But before we all run screaming for the hills, let’s unpack the thing that simmers at the core of AI’s power to disengage society: the Great Disconnect.
Typing in prompts instead of working with your creative team to design a videogame, sexting with a bot instead of a hot human, or scrolling through digital galleries of beach sunsets instead of throwing your camp gear in the car seems like the perfect way to dilute life into a plate of cold broth. If you’re concerned ChatGPT and its bot friends might uproot your life and career at least a little, you wouldn’t be wrong. But you wouldn’t win debate class, either. The visible spectrum isn’t just black & white, there are more than a few ways to peel an apple (I feel you cat people!), and the enigma of superposition isn’t limited to quantum physics.
What isolates and depresses you might empower and delight someone less able or fortunate. What fragments and disconnects in one scenario might unify and bond in another. Perhaps gen AI’s true legacy will be to make us all realize just how disconnected we already are, how much worse it can get, and spur us to turn the loneliness epidemic around. But that’s only if we’re willing to a.) see it; b.) accept our responsibility to address it; and c.) actually address it.
Great, but why should I care?
Fair question. I don’t expect this essay to resonate with everyone. But if you’re a startup founder, CEO, or manager in the process of integrating AI into your operations, you might want to know how to keep your people motivated, performing at the top of their game, and feeling respected. If you’re a techie, wouldn’t you want to know how not to be rendered redundant, and why people aren’t doing happy hour anymore? And for the creators out there, however much outrage, frustration, and anxiety you might be feeling about generative AI, it is more strategic and useful to educate yourself about AI: how it works, where it fails, and how to ensure creators continue to thrive.
Finally, if you’re any kind of human at all, unless you’re completely unplugged—in which case you’re probably not reading this post anyway—the worst thing you can do is dunk your head in the sands of denial. In the case of this formidable technology that’s poised to upend global society, ignorance is anything but bliss.
OK, but give me some context please
First off, let’s clarify the difference between loneliness and solitude. Solitude is the sense of contentment or even joy when being alone, present with one’s self. Those of us who write, paint, compose music, or engage in other activities that do not require communal participation, for example, often prefer solitude to do our work. I’ve never been able to join a writers’ group because I need to write alone. Loneliness, on the other hand, is a sense of isolation that persists whether or not there are other people around you. It’s that sense that something deep within is missing or out of tune, and perhaps even lack of a fulfilling inner life.
Personal isolation and loneliness have been a focus of concern for the mental health sector for some time, certainly prior to the release of generative AI. There’s even a documentary film that bears the same title as this essay (I discovered it after deciding on the title). The Great Irony of the Great Disconnect is that the very technologies designed to connect more of us across the world, such as email, smartphones, and social media, did connect us but they also isolated and alienated us, each new technology driving us further and further apart. How can this be? Three critical shifts took place.
First, the foundation of contact stopped being about communication, storytelling, and exchange, the way human societies have connected for millennia. It turned into likes, followers, heart icons, GIFs, and other digital compliments and feel-good indicators. Slowly, insidiously, our thoughts, opinions, and musings, and more crucially, our mistakes and unintended impulses began to be published for the entire world to see, and judge, and respond to, and, AND… reshaped as data. We’ll come back to this point, because it’s foundational to the discussion about generative AI.
Second, the opinions of everyone reading our posts, listening to our songs and voices, and viewing our art and photos and videos, were given public life. Never before were creators—here I use the term “creators” in the most generic sense of the word—given such wide and deep access to the reactions and sentiments of the public, whether that public was their intended audience or not.
Imagine what the Instagram feed of Cleopatra would have looked like! Assuming every inhabitant of Egypt had a connected device, the most she would have had is 2 to 4 million followers, a mere 0.95% of Beyoncé’s IG following. Then again, Beyoncé doesn’t have her name carved in two-thousand-year-old sculptures.
Third, we began to connect at lightning speed with people we didn’t know in real life. Those new connections were no longer based on in-person, multi-year relationships, but digital representations of people’s lives, carefully—and not so carefully—curated. We went from getting to know the people we talk to over time and through experience, to making flash judgments on the basis of a single post or comment. As we’ll see, these three shifts have made a perfect storm of disconnection, alienation, and loneliness that generative AI will make quick work of if we’re not paying attention.
Datafication of the fully expressive human
The first shift in the way we communicate and connect, as discussed above, has to do with the mutation of the value and meaning of our expression. Digital communication tools, whether AI-driven or not, distill the words, images, and sound we express online into data. This data takes on various forms known as attention metrics, such as:
Comments
Likes/dislikes
Icons for somewhat more nuanced reactions, such as “insightful” or “funny”
Shares & reposts
Impressions
Reach
Open rates and click-through rates
Bounce rates
Length of time spent on a webpage
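The flattening the list above describes can be made concrete with a minimal Python sketch. The field names and the formula here are hypothetical illustrations of datafication, not any real platform’s metrics:

```python
# A minimal sketch of how a platform might collapse the full range of
# human reactions to a post into one "attention metric." The fields and
# the formula are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class PostMetrics:
    comments: int
    likes: int
    shares: int
    impressions: int

def engagement_rate(m: PostMetrics) -> float:
    """Collapse every reader reaction into a single number."""
    if m.impressions == 0:
        return 0.0
    interactions = m.comments + m.likes + m.shares
    return round(interactions / m.impressions, 4)

# 61 distinct human responses, each with its own reasons and nuance,
# become a single score the analytics dashboard can rank and chart.
post = PostMetrics(comments=12, likes=42, shares=7, impressions=1000)
print(engagement_rate(post))
```

Everything the essay goes on to discuss, the why behind each like, the readers who loved the post but never clicked, is exactly what a number like this cannot carry.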
Of course, the full range of human emotions and expression does not fit into neat little categories, and that’s always a problem when you’re running analytics and trying to tie user activity to some kind of benchmark. It’s a particularly important challenge if you’re using those benchmarks to drive traffic to your online store, social platform, or search engine, or otherwise keep your investors well fed and happy.
The biggest shift on the individual level is that we are no longer viewed as full human beings and bodies. We are “users.” “Consumers.” “Subscribers.” “Followers.” “Customers.” “Use cases.” Why not just say it out loud: we are Data Points. Stats. Numbers. In short, we have been datafied. Ironically, this datafication extends to everyone working in the tech sector as well, from investor to startup employee. They’re human too—and they post, comment, and browse just like the rest of the online world (albeit maybe a little less). This is the biggest step in the disconnect on an individual level: being human is an experience that is at once physically embodied, emotionally felt, and, dare I say for most people, spiritually traveled. When you reduce yourself, or others, to a data point, you’ve disconnected the humanity from the person.
Looking at it from the POV of the market, you need a way to quantify, analyze, and forecast the behavior, preferences, and opinions of all those annoyingly diverse and unique humans if you’re going to launch a (massively) profitable product or service. Well, you're in luck, because those annoyingly diverse and unique humans are also deeply tribal and constantly strive to fit in with their peers and communities, and so they’ve jumped right into the data pool, cold water be damned. We now routinely ask our readers to “please like, share, and comment below!” because we want data on them just as much as the algorithms do. The algorithms have trained us well.
Too well. The real reason we ask readers to like, share, and comment is not the data. It’s how the data makes us feel, and how easy the data makes it to get an instant result. It is a profoundly human thing to seek approval from our loved ones, friends, and now, increasingly, the rest of the world. It is an equally profound human thing to not want to have to do the work—the work of reading nuanced comments and opinions from fifty people, vs. a quick glance at that nice fat figure “42” next to the “like” icon atop your blog post. The number itself, of course, is relative—relative to how long you’ve been posting, relative to your overall email list size, relative to how many of your hard-earned dollars you’ve sunk into promoting that blog post. But it’s a concise, easy-to-grasp indicator of likability. What you miss, of course, is the real reason why any given person has “liked” your work. Some did it because they know and like you, and haven’t bothered to read the post. Others because they scanned the post and decided, in the spur of the moment, to like it. And some do it because they genuinely appreciated what you had to say. Now what about those readers who also read and appreciated your post, and maybe even told numerous friends about it, but chose not to ring that little bell? You’ll never know about them, just like Gabriel García Márquez never knew what the vast majority of his readers really thought of his work.
And so the algorithms have effectively succeeded in transmuting our natural proclivity for peer and community approval into the data points they can work with. The invention of the “like” icon was a brilliant, if brutally world-altering, idea. Too many of us now create “content” with the intention of “engaging eyeballs.” (Whatever happened to the rest of our bodies? The disconnect thus becomes digital dismemberment, too.)
How long—and deep—can we play the numbers game?
But why do we chase the highest numbers, instead of being content with whatever followers we’ve got? Because we can. Because the algorithms connect all of us in fractions of seconds. And because superlatives always impress: the fastest car in the Indy 500, the richest person in the world, the biggest predators on land and in the seas. The fastest growing app on the Internet (hello ChatGPT—no wait. Hello Threads, good-bye ChatGPT). So yes, we do numbers because assessing quantity is truckloads easier than trying to gauge quality and craft, and because we’re hard-wired to seek and value resources, be they food, shelter, money, or yes, even those digital thumbs-up. In the latter case, digital status is a type of resource—it not only feeds our ego but also expands our potential to earn revenue. (How some of us humans plan for that revenue is another matter.)
We can appreciate big numbers but we can’t process them properly beyond a certain size. Close your eyes right now and imagine a hundred-dollar bill. Now imagine a suitcase full of them. An entire warehouse of suitcases. How about a trillion hundred-dollar bills? Not so easy now, is it. Beyoncé can’t physically interact with every one of her 317 million Instagram followers either. I don’t pretend to sit in her head, but I imagine she doesn’t have the time to read all of their messages. 317 million does not represent the singer’s friends and peers; it’s a number that represents her success, her status on the world stage. Yet for all intents and purposes, Beyoncé is disconnected from her adoring tribe until and unless she pulls a few of them up on stage with her and puts her arm around them as they swoon. But what about the fans, from where they sit? How connected to their queen does any one of them really feel? Clearly, many do, as evidenced by their fierce loyalty and hordes of bee emojis. But is adoration true connection? And are the algorithms good for Beyoncé’s own mental health? Numerous sources would say not so much.
AI systems, on the other hand, can crunch unimaginable volumes of data, and they can do it 24x7. All they need is lots of processing power and a nice big data center with a steady supply of energy and water, the environment be damned. They are also completely unaffected by any of those pesky human emotional, psychological, or physiological consequences of excessive screen time or the nature of the content their own code serves up.
When you, a human, spend even just an hour or two scrolling through the output of AI-driven social media feeds—TikTok videos, LinkedIn posts, Twitter (X) threads, how much mental or emotional energy do you have left for you, your work, and the people in your inner circles? How do you feel after subconsciously comparing yourself and your life—personal, professional, financial—to what the screen dangles in front of you?
And if you’re not impressed with the big numbers on Beyoncé’s Instagram, try the 500 million tweets posted daily on the site formerly known as Twitter, and the herculean task assigned to its recommendation algorithm to distill them down to “a handful of top Tweets… [on your] For You timeline.” It involves a neural network of roughly 48 million parameters “continuously trained on Tweet interactions.” Such work is not for human brains. Here’s a fascinating rundown of Twitter’s distillery—er, recommendation process if you’d like to slide down that rabbit hole.
Whether and how to connect, that is the question
Many argue that connecting online is still connecting—and in many respects that is true. I have personally made numerous connections online that have turned into friendships, including here on Substack. In some ways, the quirks of AI algorithms seem similar to the quirks of everyday life—a chance meeting in the hallway, on the train, or in line at a coffee shop seems just as accidental as stumbling across a thoughtful comment you decide to chime in on and end up with a new colleague or friend. And just like in real life, the experience and impact of those connections depend on the intentions and character of both parties. If you’re looking for trouble, you can find it online as well as in the real world. Granted, you can’t hide behind an avatar in the real world. At least not yet.
The dynamic of connecting with total strangers in an online world, itself disconnected from the physical environment, and interacting with them through text, images, and symbols, has inverted our natural, innate forms of communication. One could say we’ve evolved. A more accurate assessment, however, is that our own technology has outpaced us. We certainly know how to use it—and misuse it—but we remain biologically wired for physical contact and in-person interaction. You don’t need a stack of research studies to prove this (but here’s one, and another, if you like)—just notice how you feel in your body the next time you meet a good friend for dinner, vs. chatting with that same friend via text.
The problem we have is not a simple question of quantity vs quality, or even AI vs human. The problem we have is one of nuance, empathy, and context. Everything we do online is getting reduced to ones and zeros, to algorithms following instructions designed to maximize the indicators of popularity and profit. Influencers are voicing their frustration about their content being served less to their own fans and more to random people based on algorithmic assessments of what content is likely to spark the strongest reactions from which user accounts. When those reactions are less than pleasant, that can impact the influencer’s mental health. In fact, we’re seeing rising numbers of celebrities speaking out about their mental health struggles fomented by social media, and that is helping to bring the topic to the fore of public awareness.
Civil debate and thoughtful conversations do still exist online, but they tend to be drowned out in an increasingly virulent online environment that might get worse before it gets better. The feelings of loneliness, isolation, and the ever-present FOMO (Fear Of Missing Out) that social media has wrought upon us are being taken to a whole new level by AI-powered algorithms and the automated firehose of content and targeted recommendations they’ve turned on. It’s one thing to feel pangs of envy about the Instagram influencer with the crazy-perfect dream house, but it’s another to stare down the black hole of your entire professional career in customer service going up in flames because it is likely to be made redundant by a much more cost-efficient chatbot. And it is quite another to have law enforcement break down your door and arrest you while pregnant and getting your kids ready for school because their facial recognition system glitched.
Social media platforms do not interact with you; they are platforms, digital canvases upon which you, along with other humans, paint your words and emojis and little dancing pig memes. AI-powered chatbots are different—they talk back. They effectively step out of the canvas and start wielding talking brushes of their own, as it were. Up until recently, any time we received a digital communication, we could safely assume a human initiated and produced that communication. Now that chatbots have blipped onto the digital scene, that safety has begun to slip away. We’re aware bots write conversational threads, and that’s primarily because of a.) their still-stilted communication style and b.) the context of their delivery: a ChatGPT window, a chatbot on a company website, a Siri on your phone. Would you be able to tell, assuming the chatbot’s natural language functionality is advanced enough, whether any given Discord, LinkedIn, or Facebook post or comment was written by a human or a bot? Maybe you can now; just give it a few months.
If certain interests have their way with generative AI, there will soon come a time when you won’t be able to tell whether any text or voice on any platform has been produced by a human or a bot, unless you have an established relationship with the human.
The nuclear option
Since AI algorithms in general have been running beneath social media for a while, you might think this type of AI isn’t much different from other digital communication tools in terms of its impact on our isolation factor. But there is one simple but fundamental distinction, and that is the virulent, adaptive, and weaponizable nature of generative AI that other AI algorithms do not share. Specifically:
The ability to respond to your question or prompt in natural language (read: the ability to respond, period)
The ability to produce new content in response to said question or prompt, in multiple modalities
Training on billions of words, images, sounds, strings of code, and other content
The ability to develop synthetic data
The ability to analyze complex or voluminous data and find hidden patterns and trends
The ability to automate and accelerate a broad range of tasks and processes
This kind of power and speed should not be taken lightly; in fact, it has been argued that generative AI systems like ChatGPT should not (yet) have been made widely available to the public. A little too late for that now.
Content, the meat and bones of the online ecosystem, used to require human energy and effort. Teams of copywriters, SEO analysts, marketers, social media consultants, and product managers would spend weeks building marketing and media campaigns, launching them, and gathering the results. Now we’re seeing teams of 4 accomplish the work of 3x that number with generative AI tools and workflows. This is an extraordinary improvement in efficiency of time, cost, and scale.
So what do we do now
The AI cat’s fully out of the bag. ChatGPT & co aren’t the first AI interfaces to have impacted our lives, but this is the first time an AI system has captured the imagination and dread of the public in such profound and powerful ways. The Internet has been trawled for content gems without anyone’s permission, and literal tons of collateral damage float in the wake of the gen AI ship. Myriad questions and quandaries abound, from the professional to the very personal:
Do you hire a designer or use Midjourney for your blog post images?
Can you finally let go that staff writer whose work you’ve always disliked, and use Jasper instead?
Will you get sued if you generate a sound-alike song for your video?
Can you trust code generated by AI?
How do we define (and enforce) copyright now?
How can you tell an AI-generated dating profile from a real one?
Should I trust that the voicemails I receive are really from my friends and family?
Is it ok if I want ChatGPT to be my BFF?
True to form, it’s the headlines that scream end of days that draw those clicks and shares, but the real issues are the ones with the quieter headlines. We need to worry about human misuse of AI systems much more than any AI-spawned apocalypse or conquest. Perhaps the most sobering reality that we will need to come to grips with is not what AI can do against us, but how humans can use AI to our mutual detriment, whether intentionally or unwittingly. Never has technology saddled humanity with such existential burdens. To borrow from the poet Robert Frost, the only way out is through the AI fire.
A few tips for the road:
Don’t outsource your creative or critical thinking agency to an AI system. Continue to rely on your experience, insight, and wisdom.
Always put your people first. If you want to leverage AI in your company, identify the workflows where it can best support, rather than replace, your staff.
Give them agency: involve your team in determining the best ways to integrate AI tools into your operations.
Don’t be seduced by the temptation of fast, cheap image generators. There is no replacement for skilled human artists.
Likewise, don’t be seduced by what reads and sounds like a sentient entity. There is no mind, heart, or soul behind your chatbot screen.
Never, ever forget there’s always a non-zero chance your AI dreams of electric horses.
Oh, and one more thing.
Thank you for reading. If you’d like to leave a comment, I’d be thrilled to respond.
~ Birgitte
Short Bio of Guest Author
Birgitte Rasine is the CEO of LUCITÀ, a hybrid content and communications firm based in the San Francisco Bay Area. She has worked with Fortune 100 and 500 companies, the United Nations, NASA, Google, and many other companies, nonprofits, educational institutions, and government agencies.
She has given talks and presentations on three continents, and received awards for her literary work. In her previous career, she was a journalist for the Hollywood film industry, and in 2016, she joined the Google Assistant creative writing team and helped launch the Assistant in 4 international markets. Birgitte speaks 5.5 languages, serves as a chocolate judge, and writes The Muse, a Substack about human ingenuity and creativity in the age of AI. See her books.
The Muse, to write is human - check out the Newsletter.
How do you feel about the intersection of technological loneliness and the generative A.I. toolkit and consumer products?
In Spanish, the phrase is “se pone color de hormiga” (which literally translates to “it turns the color of ants”). It refers to a situation that is difficult, complicated, or that causes a certain amount of tension or anxiety. I’ve had ants in my house often enough (and once in my car) that I can tell you in no uncertain terms how on-point this phrase is.
I see patients remotely now. Incredibly convenient for … me. But the problem I thought might happen has happened: a two-dimensional emptiness that underscores my own evolutionary-psychology patient education material.
We are one of countless social species because this basic instinct allowed us to survive through various versions over millions of years. A social instinct has allowed us to find food, protect ourselves and procreate more effectively. We are wired for this at the DNA level - to communicate through affective resonance and pheromones, and to be physically touched. Lives in misalignment with basic genetic wiring become ill … and in this case empty/depressed, like Harlow’s monkeys.
As a poet and nonfiction writer, I can see how frustrated many writers feel. There’s a debate over whether we need writers anymore, or whether poetry is really useful to anyone if a prompt can give you a better poem than a human poet. It all adds up to the anxiety of disconnection that Birgitte mentioned in the essay. I also see some of my advertising clients use AI as an excuse to bargain with writers because "AI can do it faster and better".
However, I myself have been using AI in my work and have seen the content it generates. It is average in quality, easy to copy, and easy to find everywhere on the internet. It lacks a unique point of view or a strong human touch.
By the same token, I struggle every day to write a poem or an essay that helps me grapple with reality or reflect my emotional state. That valuable part of writing, just like physical exercise, can’t be handed over to AI. Of course, I can ask AI to write me a poem to "show up", but I can’t ask it to walk me through the mental process that expands my lived experience. So, as a selfish (and not so talented) writer, I still write for my own experience; I don’t ask AI to write me a poem. A tool can take over your daily life if you let it. But if you use it as a tool, it is a tool. I have been using AI for work for the last six months, and I don’t feel the loneliness of a disconnect between my work and myself.
I do feel at odds with some clients’ conversations (as the essay said), but I guess that is part of the industry I work in.
Thank you so much for the thoughtful essay.