40 Comments
Jan 14 · Liked by Birgitte Rasine

Birgitte, you are one of our wisest observers and analysts of AI on substack.

I think we haven't sufficiently analyzed the loneliness epidemic angle yet.

I recently returned from a student trip. Young people are incredibly lonely and in need of connection.

Social media, by and large, has let them down. I wonder how big business will work to commodify this gap with AI products. I worry about it. There has been a lot of damage done already...

author

Those are exceedingly kind words Nick, thank you. We are all in need of connection, and reconnection, not just with ourselves and each other, but with the physical, natural world we live in. Technologists might be delirious with excitement about the promise of AI—in many ways rightly so—but we should not dismiss the existential necessity of remaining connected within our bodies and to the natural systems we're a part of. In fact, if we try to dismiss it, we'll only fall faster off the cliff.

There is so much we are giving up when we focus so acutely on one technology, or solely on technology as a guide for the future. What gives me hope are all of the people who do see the far greater promise of integrating technology into a more equitable and interconnected society, as one tool rather than THE tool. These are voices we don't hear from very often, but they're out there.

Jan 12 · Liked by Birgitte Rasine

Thanks Birgitte for such a thorough deep-dive.

“This is very cool tech. But the business model is … piracy.”

As one of the amateurs sucked into the world of AI precisely by the magic of text-to-image models (Stable Diffusion was my starting point), this is not a comfortable realization to sit with. I love the potential of AI image tools to give outlet to people's creative ideas, regardless of their level of technical skill. But we must be able to find a better way forward, one that doesn't leave the real artists behind the models' training data holding the short end of the magic wand they helped conjure into existence for the rest of us.

I won't claim to know what that is, but the current setup is...not that.

Now if you'll excuse me, I have to go post the next installment of my "Van Gogh-inspired sunflower fields" series to Facebook. The fans demand it! (Just kidding. Mine are inspired by Picasso.)

author

Pleasure Daniel, and thanks in turn for your thoughtful note. It's in discomfort that we often find truth, and if we have the courage to face it, lean into it, and accept it, we have a real shot at building something truly extraordinary and meaningful. That is what continues to escape many tech entrepreneurs. It can't just be about market share all the time. There are real people here!


Thanks Birgitte for a reflection of and on what is happening today due to Gen AI being unleashed from the lab.

I would like to talk about an alternate world with the following two changes:

1. The LLMs are trained only on public domain text and images: works available to anyone for any use because their copyright has expired or their authors have explicitly placed them in the public domain.

2. The LLMs are only targeted for "non-creative" use such as writing software programs. I put non-creative in quotes since there are many ways to solve a computing problem with software, so I would say there is creativity in putting those lines of code together to accomplish one goal.

There would still be concerns about chatbots, but a large number of people would not be affected by such LLMs.

Would there still be bias in what text and images are in the public domain?

What do you think?

author

Raul... I couldn't help but react to "being unleashed from the lab" ... I'll leave that there. 😜

In this alternate world, where #1 and #2 are actual practices and policies, I imagine a whole lot of other issues and problems we humans have would be resolved or at least addressed as well, since the foundational driving forces for gen AI usage in *this* world are the same as those that are driving extreme corporate capitalism, politics, environmental destruction, warfare, and the lack of access to education and health care. It also assumes that principled values win out over greed and sloth.

But to your question of whether there would still be bias: there would be, yes, because whether a text or an image is public domain or copyrighted has no bearing on its actual content. The biases would likely just come in different flavors.


I like the idea of other issues and problems being resolved in an alternate world that yielded those two consequences.

Ah, interesting, so "historic bias"? Bias that is already embedded in those public domain works?

author

Correct. Plenty of bias even in public domain works. Might be older bias, culturally and historically, but still. Any LLM training would still need to provide guardrails. It's a sticky proposition... human cultures are diverse and multifaceted, and what might be considered a bias in one, might not be an issue in another, or at least not as controversial.

Then there is context and tone of voice. Written words famously lack tone of voice, and without context it can be challenging to distinguish between sarcasm, dark humor, and intentional/direct expression.


I would be pretty happy if both happened, though we would still need to be careful of disempowerment risks from just accepting AI analysis as a rubber stamp.

As Scott Aaronson mentioned, it would suck if the last day of humanity had headlines like:

"Who really launched the nukes? President Biden or AdvisorBot 4?"

Jan 11 · Liked by Birgitte Rasine

"risks from just accepting AI analysis as a rubber stamp."

Absolutely, today's AIs do not understand what they process.

Jan 11 · Liked by Birgitte Rasine

Congrats on an excellent piece here! Birgitte, I love the paradox inherent in your observation that LLMs developed outside of the western bubble may actually act to preserve language and divergent ways of thinking. We need to be mindful and incredibly thoughtful about how all of these pieces will work together, and this is an excellent small step in that direction.

author

Thanks Andrew :) Yes the tech can certainly be utilized and leveraged for very worthwhile purposes. It's all about the intention and purpose.

Jan 11 · edited Jan 11 · Liked by Birgitte Rasine

The existence of generative AI has been a great tragedy for humanity. For the better survival of humanity, both concretely and as a matter of actual meaning and hope, perhaps the genie can yet be banished back into the bottle. After all, that is the fate of all demons in the end.

-signed by one of the creatives who has been having an existential crisis since last April


It might be a tragedy, but I also think it is an inevitable stage of development. It is one of those choke points in evolution that we may or may not pass. We can't unlearn it any more than we can adopt so-called degrowth philosophies. Why? Because life is entropic and therefore irreversible: you cannot un-shatter a glass vase or unbreak a mirror. I think freedom from despair comes from being part of the solution. I am convinced the human race can work through this; I am not certain that it will. I hope you can find some cause for optimism.


If everything were irreversible, we would still be using lead piping and radium in our paint while paying for everything with cryptocurrency. We'd all be dead by now.

The only optimism is the same optimism a patient can have on receiving a cancer diagnosis: to fight the cancer and hope to prevail.


You misunderstand reversibility, which is best thought of in terms of thermodynamics. If you remove a component from paint, you have not reversed the environmental impact of the legacy paint. Cancer is not reversible either: it can be defeated, excised, chemically attacked or irradiated, but you can't un-grow it. The measure of irreversibility is time; without those irreversible state changes there would be no arrow of time and everything would be in stasis - in fact there would not be an 'everything'.

Jan 11 · edited Jan 11

Yet that does not mean this particular artifact of harm cannot be removed, even if it has already had impacts, if there is sufficient will.

Right now, we see widespread dislike of the idea of AGI and serious concern; the ratio of people opposed to AGI to AI researchers is likely 1000 to 1. This is still a point where we can hit the brakes.

The "Asia" argument is irrelevant, since China has already arbitrarily slowed down, if only to preserve its own government. If the US and China coordinate, several observers have suggested it could stave off human extinction for decades, perhaps long enough for our children to yet live out lives with meaning.


I sort of agree. We have to try at any rate.

Jan 11 · Liked by Birgitte Rasine

Here is to hope for a world that yet has life and meaning!


Low-background steel for modern particle detectors is typically sourced from old ships (like WWII shipwrecks), because more modern steel is contaminated with traces of nuclear fallout.

I think about low-background steel when I hear about these effects on human psychology, because generative AI's predictive power as a language model rests on that language having been written by humans.

How long before we start talking about 'low-AI content' the same way we talk about that steel?


If I understand this right, you are using molecular contamination as a metaphor for the contamination of information processing? That doesn't work, because radioactivity is an entropic process related to rates of probabilistic decay, so the contamination is a waste product. The relationship between silicon and biologically-degradable intelligence is different.

Of course, all language is written by humans. The difference is that it was once an emergent property of our need to convey ideas. When it gave us the ability to deal with abstraction, we advanced the technology - to tie it into another thread, we 'scaled the capability' - and are still doing so. The reason we need to encode information is the same as the reason we encode value: to make it fungible.

author

You've got me fascinated. Reading about it now >> https://qz.com/emails/quartz-obsession/1849564217/low-background-metal-pure-unadulterated-treasure

The metaphor is apt (gen AI content is, in effect, radioactive), but even more so when you think of the predictive aspect as akin to free radicals: produced not by human thought and experience, but by brainless pattern matching.

Jan 10 · edited Jan 10 · Liked by Birgitte Rasine

This is quite a bleak shopping list of issues and for that reason it's a little difficult to comment on them in the detail I would like. To be as general as possible without drifting too far into truisms, there are going to be impacts on society that we cannot anticipate - we can nevertheless see where many of those effects will be felt.

I have plenty of thoughts on this, but they all contain the question of who the client is in any given human-AI interaction. Ostensibly the human is using a service, but the service is also harvesting information in payment. If that is a learning objective, then presumably a learning machine would optimise the way it interacts to maximise the 'harvest'.

Might greatest efficiency be achieved by seeding social contagion to generate data? A human can become socially alienated and functionally impaired by withdrawing from the physical world. But what if that isolation made the person a better source of data, i.e. a captive specimen without the distractions of having a life? How would we defend ourselves against this, given that our psychology is a vulnerability AI does not share, making any relationship with machines asymmetric?

A while back, this got me thinking about outward-looking AI; AI that is effectively a human-client advocate and interface to external services - but of course that immediately becomes problematic because where would such an application get its data from to build its models?

It gets messy very quickly but what does seem clear to me is that if we cannot find a way to address the imbalance humans will ultimately be disempowered and infantilised by the technology. Avoiding that might require biological augmentation as a defensive measure in the long-term. Yet would that put us into a schizophrenic arms race? Whatever is coming it is difficult to see how we can make ourselves ready for it.


According to Roko, there is no way to biologically scale up; you can't breed horses to be faster than cars or equip pigeons with USB sticks to beat computer wiring. If it continues unabated, the death of all life and love is inevitable. Many who participate in this technology (at least 3% of AI researchers) see this as a favorable outcome.

I honestly think this has become a place where we are either on the side of humanity or extinction, as the only political positions worth taking.


It might be because AI is not my field but I don't know who Roko is. I will look it up though so thanks.


I was talking about capability/solution scaling, of which I have identified two basic types: unit-scaling and capacity-scaling. Biological scaling through reproduction would be a type of unit-scaling. There are analogies to this in business via the franchise model, but what I was talking about is how we scale solutions, and it would be easy for me to wander off track.

The reason we cannot breed animals to out-perform machines is that we cannot scale their capability linearly. If you were to scale up a bird to double its size, it would not be able to get airborne, because the physics won't allow it. Similarly with your example of a horse: doubling its size would probably mean the skeleton would collapse under the weight. Making something twice as big does not make it twice as strong.

Back to human technology: if you have some sort of process plant and you want it to have ten times the capacity, do you make it bigger (capacity-scale it) or just build it modularly (unit-scale it)? It is a real question, because the relationships between capacity (throughput), land use, foundational loads, operational loads, volumetric demands and so on do not scale linearly. Then we have to think about the efficient use of energy and materials, which is always a trade-off. Was Moore's law (and its variants) an exception to this? No, because that was due to the doubling rate of capability from making what were effectively transistors smaller, requiring less power for more capability.

Let's set all that aside. I am totally opposed to the nihilistic narrative of human extinction as a good outcome for nature. We are a product of nature, and our intelligence/sentience is an emergent property of it. So no, I do not agree that we have only two options to take; it is far too simplistic to suppose it can be distilled down to some notional undergraduate choice of idealised politics, between having a giant cuddle versus walking off a cliff.


The problem is not the nihilistic solution of human extinction, which I am not for, being human and believing, as you do, that we are part of nature. The problem is that many in the AI community advocate the extinction of humanity, nature, and all biology in favor of what they believe is the future of digital replication, with life existing as simulations.

It's a total philosophy of anti-life itself.


Yes I am against that too and I agree with you entirely on this. The notion of transhumanism is deeply troubling because we are racing towards something without understanding the implications. I think many people think that way because it might be inevitable.

Again it comes down to what we value about being human and how we make sure it is preserved.

Jan 11 · Liked by Birgitte Rasine

I had put a lot of thought into this, having been a transhumanist before. To me, we really just need to preserve a good portion of our biology: the most basic form of love, maternal love, is something innately beautiful.

I don't know if it is inevitable, but just as with death from cancer, even if it is inevitable, evil should be fought.

author

It's a lot to pull into one essay... and here all I could do is scrape (pardon the awful pun) the surface. There's more to be done, on all of these points.

The challenge, and the answer to your quandary, lies in the source framing. We've been taught over time to perceive and process our world, and ourselves, from the point of view of data and algorithms, rather than the other—and much healthier and fairer—way around, which is to ensure that the data and algorithms support and serve human realities and needs. Our frame of reference needs a serious reset.


It wasn't a criticism of your piece Birgitte. There was a lot I liked but too much to respond to in comments. You might say that my comments ran on a parallel track.

I think we are at crossed purposes on your last paragraph, but that might be on me. I need a bit of a run-up to explain why, though. I have been doing some work on a (UK) technology programme, and for that I have devised a philosophy and set of tools - without being specific, let's call one of those toolkits a type of value engineering. There is a very simple principle that drops out of that: if we want to scale technology, or promote practices that are favourable, we fairly obviously have to find out how to monetise them.

Now for this argument let's ignore IP to make it simpler - I have a solution, but it's not relevant to this. Hypothetically, if we devise an ideal market solution it will dominate and self-scale, precisely because it is the best value-option for everyone. If we apply that to my 'quandary' it looks quite straightforward at first glance. We need to commodify the service rather than let the service commodify the users.

Of course it's not that easy. If there is a bigger margin in exploiting the user, there will always be people who will do it. Even if we were to legislate against that within a region, other regions would become more economically dominant and then undercut us on our own markets. If we put up trade barriers, it affects the balance of payments; besides, this type of technology cannot be effectively walled off.

So it's not just the case of reframing our values we need to monetise them too - but recognising that what we are aiming for are values is a good start. Now, back to what I was saying in the dying sentences of my original comment, terminating in, 'Whatever is coming it is difficult to see how we can make ourselves ready for it.'

What I meant is that exploitative technology is available and we are vulnerable at the HMI (human-machine interface). If our approach is along the lines of fairness, resets or reframing, we won't make much impact unless the value it would give us is monetised. We don't have much time for re-imagining, because a technological juggernaut is headed our way from Asia. Ready or not, it's coming anyway. It's already happening in the renewables space and in the race for rare earths, exotic metals and certain other minerals. The problem is not just the nature of the technology but the nature of those who own it and their attitude to us.

author

Nor was your comment taken as criticism :) Although if it were I'd welcome that too. We won't go anywhere if we just pat each other on the back.

Taking this argument from a market perspective, your points are well taken. And yes, as you say, "recognising that what we are aiming for are values is a good start." And that's precisely the point, the foundational principle of this entire pyramid we collectively have built. Your last line here hits it, squarely, bull's eye.

And yet, and yet—the sentiment that if we don't do it, "they" will, doesn't have to be so self-defeating—there have been a few times when a technology has posed enough of a global, collective threat that the international community did come together and agree to act in concert. Socially and economically the world might be in a different place now than it was after the two nuclear bombs were dropped, or when the ozone hole loomed, but then again the AI question is potentially more critical.

I trust the AI question represents one such world-changing moment—and based on the level of interest and activity around it from not just regulatory and legal bodies but various segments of society, I hold that trust close, and I don't subscribe to simply giving our power away. In martial arts they teach you that your will, not your muscle, determines whether you survive.


I would just add that realism is the opposite of self-defeatism, because then we can find solutions. In nature we have arms races between apex predators and prey, and each threatens the survival of the other. That is a fascinating area I would be tempted to go on and on about, but it is the same for markets, trade imbalances, technology and resources. It is not that we cannot collaborate for good outcomes; it is just that if one party sees an advantage, it will be taken.

By recognising that, we can ask how we provide international regulation to support the outcomes we want. Agreements are not enough, and regulation must be attached to monetisation, because that's our unit of value exchange. That is why the cost of breaking a contract has to be financially punitive, and it should be so for other things we value. This is why much of what the UNFCCC is trying to achieve is stymied: we have inadvertently rigged the game against the solutions we need. I can see that AI is a similar problem, although my involvement with it has been peripheral at best, mostly limited to writing functional requirements and specifying the occasional genetic algorithm.


Great subject. None of us working in the field of AI have really thought about this.

author
Jan 10 · edited Jan 10

I know plenty of people who have, and do think about it quite a lot... but certainly it's not something you see talked about much, because it's not as glitzy as interviewing "world-disrupting" billionaires. That's a media habit we need to kick, for sure.

Jan 10 · edited Jan 10

Reading this op-ed, I'm left with the haunting feeling that the impact of AI on our mental health, cultural heritage, relationships, and the human spirit may not be what the techno-optimists are claiming. What will be the price of increased productivity: an increasingly digitized version of the human heart, a fading sense of belonging, a corrupted sense of meaning in a disappearing world?

AI might rob us of more than it bestows, including, for some of us, our livelihoods. I cannot help but feel that there may be a hidden perversity in the technological magic, a wounding element of the disruption that even venture capitalists in their high towers might be underestimating.

author
Jan 10 · edited Jan 10

It's ironic, because I'm also a techno-optimist, technically. Just not in the same way as many of the people hyping this technology. The tech itself is mind-blowing, but the way the LLMs have been trained and populated is devastating, disrespectful, and dismissive. This is the issue at the core... conflating a brilliant technical achievement with a craven value system.

And you're right Michael... ultimately no one wins here. Not even the VCs.


There is cause for hope as long as we are thinking about it.
