Thanks for this piece. ChatGPT just arrived on the scene, and now millions of people are scrambling to figure out how to leverage it or control it. We know little right now about the implications of having such an intelligent “helper” available, and it will take time to iron out all its regulatory wrinkles. Overall, though, I think it could lead to some really innovative products if controlled properly.
Thanks Michael, very interesting and relevant. Is the main problem around privacy that we might share personal details with ChatGPT, or that we could be profiled based on our questions and this ‘profile’ data sold, and so on? Curious... maybe both?
I think it's both, and then some, especially in terms of the code we share and business data that might otherwise be confidential. My other main fear is how these AI companies will find workarounds to obtain sensitive medical health data. Using a chatbot as my therapist could have some unintended side effects.
Thanks!
Search engines, especially Google and Bing, already collect huge amounts of personal and private information about users (location, interests, travel, purchases, health, etc.). This has been going on for years. People who use LLMs and chatbots may provide similar personal information, but that's nothing very new.
Indeed, I can search for something on Google or Bing, then go on LinkedIn, and their feed is already sending me posts on the same thing! Imagine that for a heavy ChatGPT user: it will be way more personalized.
Very important to write about this; a lot is happening, and the effects of decisions made today will be felt for years to come. The business side of it is especially worrying. It reminds me of how social media companies and data brokers have been getting away with murder in the absence of regulation.
I also recently wrote an article about the latest developments around AI.
https://roberturbaschek.substack.com/p/breaking-chatgpt-is-a-ravenclaw Originally I wanted to just explore what it could and could not do, but so much has been happening that I had to put that off for a future article.
Thank you for the comment and link Robert. I will check it out.
If anyone wants to chime in, here's my latest A.I. Supremacy Chat: https://substack.com/chat/396235
Great article.
The prevalence of people using ChatGPT in work environments must be very high and growing. So far I haven’t seen a massive reaction from Data Security teams, but this must be coming down the line very soon!
Employees need to exercise common sense with what they ask ChatGPT, but they also need some guidelines and, dare I say it, “rules”.
Tricky development. Note the two different motivations: how much is the government really concerned for our privacy, and how much is it about being able to control the narrative? Some are quite clear about it. The EU might have similar concerns as China, but acknowledges only GDPR worries.
An exceptional article that genuinely encourages thought and reflection. At first, when I started reading, it seemed as if Michael was not entirely convinced about the merit of LLMs, or perhaps perceived more disadvantages than advantages. I was even inclined to argue that limiting ChatGPT would benefit tech giants like Meta, Google, and Apple, as it affords them more time to develop their own LLMs.
Nonetheless, as I continued reading, I realised that you carried out a comprehensive analysis without merely glossing over the subject.
Michael, your insights are truly valuable, and I'm looking forward to citing some of your key findings and providing my commentary in a future issue of my "Weekly AI." If you don't mind, of course.
“The basic way we interact with the internet is being decided by just a few companies and by just a few people who work at those companies.”
This needs to be shouted from the rooftops, 24 hours a day, until somebody listens.
Remember when half the internet went down because of that Cloudflare problem that took WhatsApp and Facebook offline? The whole structure is so unbelievably centralised, while it clearly does not have to be.