ChatGPT is Getting Banned, Landing on Privacy Watchlists, and Mining Corporate Trade Secrets
All is not well in Generative A.I. Utopia
Hey Everyone,
ChatGPT changed the world in just a few months late last year.
Within two months of its release, it reached 100 million active users, making it the fastest-growing consumer application ever launched.
At the risk of sounding “negative”, I’m going to go out on a limb here and say privacy still matters, even in the hurricane of hype that is Generative A.I.
I’m starting to get a bit concerned with how the internet is evolving, with the West trying to ban TikTok (a ban that would likely also cover clones like Lemon8) and more authoritarian countries banning ChatGPT. In ChatGPT’s case, though, the bans may actually be a good thing: it’s mining everything.
Recently Italy’s privacy watchdog banned ChatGPT over data-breach concerns, and regulators in Canada are concerned about privacy abuses. Meanwhile, as a whole we are disclosing too much information: sensitive health data, and even corporate trade secrets like source code.
ChatGPT in mental health looks like a data-privacy nightmare for sensitive health data. Clearly OpenAI and Microsoft are interested in this kind of data. Microsoft itself has many products concerned with health technology; it acquired Nuance for nearly $20 billion about two years ago.
Even Casey Newton’s Platformer (paywalled) recently went into the impact of regulators on OpenAI and ChatGPT. It seems we have to rely on Europe for even basic online ethics and human rights. Now, in mid-April, the EU privacy watchdog has set up a ChatGPT task force.
The Italian data-protection authority said there were privacy concerns relating to the model, which was created by US start-up OpenAI and is backed by Microsoft. We already knew ChatGPT has a privacy problem, but what happens when the lawsuits and regulators really bear down on just how nefarious the data mining in this tool likely is?
⭐ A WORD FROM OUR SPONSOR ⭐
Outsource card data management. All the benefits, none of the headaches.
Collect card data, send it to processors or partners, and store it as if it's in your database while satisfying up to 95% of the compliance requirements that come with PCI.
Let’s get back to the story:
Canada’s federal watchdog launched an investigation into OpenAI following a complaint lodged over privacy concerns. I’m sure there are many complaints that haven’t been reported or don’t reach the media.
"AI technology and its effects on privacy is a priority for my Office," the country's privacy commissioner Philippe Dufresne declared in a statement this week. "We need to keep up with – and stay ahead of – fast-moving technological advances, and that is one of my key focus areas as commissioner."
When technology companies move quickly on behalf of their 😈 corporate overlords, what stress might it create in society that endangers us all? We haven’t been taking privacy or human rights online seriously for the better part of the last decade, and we’ve let corporate BigTech monopolies go basically unchecked. So they are becoming even bolder.
Is Microsoft a Bad Actor in 2023?
When Microsoft moved to acquire Activision and basically funded OpenAI with over $13 billion, I knew it had stepped into territory that wasn’t going to be healthy for society or the future of technology.
Canadian privacy commissioner Philippe Dufresne announced on April 4 that his office would be investigating the Microsoft-funded company after a complaint relating to the collection, use and disclosure of personal information without consent. Meanwhile, employees at Samsung really weren’t thinking straight.
👁️🗨️ Samsung employees accidentally shared confidential information while using ChatGPT for help at work. Samsung’s semiconductor division has allowed engineers to use ChatGPT to check source code. What were they thinking? This isn’t disappearing Snaps, guys. This is OpenAI’s secretive ethics we’re talking about; they’re basically vassals of Microsoft.
If people at Samsung are doing this, people at all sorts of companies are doing it. The Economist Korea reported three separate instances of Samsung employees unintentionally leaking sensitive information to ChatGPT. In one instance, an employee pasted confidential source code into the chat to check for errors. Another employee shared code with ChatGPT and "requested code optimization." A third shared a recording of a meeting to convert into notes for a presentation. That information is now out in the wild for ChatGPT to feed on.
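If engineers at a company like Samsung can paste secrets into a chatbot, the fix has to sit between the employee and the prompt box. Below is a minimal sketch of that kind of pre-submission filter; the function name and regex patterns are purely illustrative assumptions of mine, not anything Samsung or OpenAI actually runs:

```python
import re

# Illustrative patterns only. A real deployment would use a proper
# secrets scanner and data-loss-prevention tooling, not three regexes.
LEAK_PATTERNS = {
    "api_key": re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*\S+"),
    "private_key": re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    "internal_host": re.compile(r"\b[\w.-]+\.internal\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any leak patterns found in an outgoing prompt."""
    return [name for name, pattern in LEAK_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Can you optimize this? api_key = 'sk-123abc'"
hits = check_prompt(prompt)
if hits:
    print("Blocked: prompt appears to contain " + ", ".join(hits))
else:
    print("OK to send")
```

None of this is foolproof; it just raises the cost of an accidental paste. The deeper problem remains that whatever does get through is retained on someone else’s servers.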
For OpenAI and Microsoft, you understand, all this data is a trove that could become very valuable in the absence of regulation, controls and the rule of law.
The Generative A.I. wild-wild west is something I’m watching pretty closely, because it impacts how companies are able to compete with rivals that embed Generative A.I. deeply into their products.
Given that LLMs are getting much larger and more effective at certain tasks, it’s a bit of an A.I. arms race. Corners are being cut, laws are being broken and regulators are flat-footed, just as we saw recently with the mistrust and bank runs in regional banking.
Silicon Valley is a walled garden for the future capabilities of A.I., which means that how OpenAI and Anthropic innovate is going to be very centralized around companies like Microsoft and Google. Consider, for example, the turf war over Google’s monopoly on search advertising. The Biden Administration says it’s looking into AI regulation as well, though I have little faith in the U.S. on this front.
Google will Enable Anthropic A.I. and Focus Gemini to take on ChatGPT
Anthropic aims to raise as much as $5 billion over the next two years to take on rival OpenAI and enter over a dozen major industries, according to company documents obtained by TechCrunch.
A pitch deck for Anthropic’s Series C fundraising round discloses these and other long-term goals for the company, which was founded in 2021 by former OpenAI researchers.
In the deck, Anthropic says that it plans to build a “frontier model” — tentatively called “Claude-Next” — 10 times more capable than today’s most powerful AI, but that this will require a billion dollars in spending over the next 18 months.
But can we please have some ground rules for LLMs, Conversational A.I. and AGI before it’s too late? Gemini at Google refers to DeepMind and Google Brain working together to improve Bard and possibly Sparrow.
The Italian Data Protection Authority described the move as a temporary measure “until ChatGPT respects privacy”. The watchdog said it was imposing an “immediate temporary limitation on the processing of Italian users’ data” by ChatGPT’s owner, the San Francisco-based OpenAI. The problem is, it’s not alone.
Authoritarian Pro Censorship Regimes Thwart Global Mass Adoption of ChatGPT
But are we also entering a wild-wild-west age of A.I. censorship and fact-verification?
Even as TikTok looks likely to get banned in the U.S., difficult as that may seem, China and others are already blocking ChatGPT.
China’s regulation requires that ChatGPT-like services be verified by Chinese authorities, but how are they supposed to do that? I’m not sure they can understand what LLMs will become. China’s cyberspace agency wants to ensure AI will not attempt to ‘undermine national unity’ or ‘split the country’.
From Wikikiki:
Why Has ChatGPT Been Banned in Multiple Countries?
Italy:
The Italian Data Protection Watchdog, Garante, has banned ChatGPT citing privacy concerns. Garante ordered OpenAI to stop processing Italian users’ data during an investigation into a data breach that allowed users to see others’ chatbot conversation titles. The organization also expressed concerns about ChatGPT’s lack of age restrictions and its ability to provide inaccurate information in its responses.
China:
China has concerns that the US could use AI platforms like ChatGPT to spread misinformation and influence global narratives. Due to its strict rules against foreign websites and applications, and the current low point in relations between China and the United States, China has banned ChatGPT, and it is unlikely to allow other platforms similar to ChatGPT to operate within its borders.
Russia:
Moscow is likewise concerned about the potential misuse of generative AI platforms like ChatGPT. Additionally, given the current indirect conflict with Western countries, Russia is not willing to risk allowing a platform like ChatGPT to influence narratives within the country.
Iran:
Iran is known for its strict censorship regulations, and the government closely monitors and filters internet traffic, restricting access to many websites and services. Additionally, relations between Iran and the US have deteriorated since the Trump administration withdrew from the nuclear pact. Following all this political stress, the AI chatbot from the US is not available in Iran.
North Korea:
In North Korea, the government of Kim Jong-un has heavily restricted internet usage and closely monitors the online activity of its citizens. Given this level of authoritarian control, it is not surprising that the North Korean government has banned the use of ChatGPT.
Cuba:
In Cuba too, internet access is limited and strictly controlled by the government. Many websites are blocked and not accessible to the public, including OpenAI’s artificial-intelligence-backed chatbot ChatGPT.
Syria:
In Syria, a country in the Middle East with strict internet censorship laws, the government heavily monitors and filters internet traffic. This prevents users from accessing various websites and services. For the same reason, ChatGPT, the AI platform developed by a US-based company, is also not available.
It appears that LLMs are being received with mixed results around the world. A Conversational A.I. may be biased in ways we don’t even clearly understand, with the ability to hallucinate information and influence us in ways that may not turn out well.
A six-month moratorium on AI development doesn’t sound unsafe, but critics argue it could help China catch up. Critics like Bill Gates, perhaps, and other Google aficionados? We can always count on the elder statesmen of Silicon Valley, like former CEOs, to help guide our moral compass. Predictable.
Bill Gates has a huge vested interest in Microsoft’s bid for A.I. Supremacy.
The Internet isn’t Safe & LLMs are as Harmful as they are Helpful
Even for Europe, though, this goes far beyond GDPR and other privacy laws; A.I. is reaching a point where even the future threats are known unknowns. We don’t even have a mechanism in place to identify the risks to our human rights posed by an internet with LLMs embedded everywhere.
I’m pretty sure making us more productive won’t be the only behavior modification, as we change even our search behaviors. The basic way we interact with the internet is being decided by just a few companies and by just a few people who work at those companies. That can’t be healthy, or coming from a place of trust or safety.
We should be on our guard. Mental health apps will conduct experiments with ChatGPT-like products on people. In the name of ChatGPT, crimes are being committed, and more will follow. Nobody is policing the internet, least of all the companies that stand to gain the most.
Former Google CEO Eric Schmidt dismissed calls to pause the development of advanced artificial intelligence systems over safety fears – arguing a delay would only hand an advantage to China.
Microsoft founder, Bill Gates, recently spoke out against the initiative, arguing that he doesn't think the proposed pause will "solve the challenges." Furthermore, Gates voiced skepticism about not only what the AI pause would solve, but also how it would be enforced.
Microsoft and Google have no intention of letting startups like OpenAI or Anthropic become independent or evolve into threats to their own corporate empires. So they will just buy them outright, if they can. Privacy and fair competition be damned.
Startups like Anthropic and dozens of others related to OpenAI have no choice but to buddy up with cloud leaders and huge ad players like Amazon, Meta, Apple, Microsoft or Google. Certainly we can expect Meta and Apple to play catch-up, since they need to stay competitive. This means OpenAI and Microsoft’s first-mover position is just the initial catalyst to scale and commercialize even larger LLMs; in roughly 3.5 years we’ve gone from 1.5 billion parameters to over 1 trillion.
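To put that scaling claim in perspective, here’s a back-of-the-envelope calculation, assuming only those two endpoint figures (1.5 billion and 1 trillion parameters) and nothing else:

```python
import math

start_params = 1.5e9   # ~GPT-2 scale, early 2019
end_params = 1.0e12    # the trillion-parameter scale cited above
months = 3.5 * 12      # the roughly 3.5-year window

growth = end_params / start_params  # ~667x overall
doublings = math.log2(growth)       # ~9.4 doublings
print(f"{growth:.0f}x growth, doubling roughly every "
      f"{months / doublings:.1f} months")
```

That works out to parameter counts doubling every four to five months, far faster than any regulator moves.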
I noticed with Amazon Bedrock that even AWS is trying to position itself as an agnostic place where you can access foundation models as AI-as-a-Service. It’s only a matter of time before Jurassic-2 by AI21 Labs (Israel) and Aleph Alpha’s (Germany) tech have to pick sides in the cloud as well.
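To make the AI-as-a-Service point concrete, here is a minimal sketch of what calling a hosted foundation model through Bedrock looks like. Treat the details as assumptions: the boto3 "bedrock-runtime" client and the "ai21.j2-mid-v1" model identifier reflect AWS’s announced API, and the request body shape is AI21-specific:

```python
import json
import boto3  # AWS SDK for Python; assumes Bedrock access is enabled

# Region and model ID are illustrative assumptions.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.invoke_model(
    modelId="ai21.j2-mid-v1",  # Jurassic-2 on Bedrock (assumed identifier)
    contentType="application/json",
    accept="application/json",
    body=json.dumps({
        "prompt": "Summarize the EU ChatGPT task force in one sentence.",
        "maxTokens": 100,
    }),
)

result = json.loads(response["body"].read())
print(result)  # response shape is provider-specific
```

The point stands either way: whichever model a startup ships, the prompt, and whatever sensitive data rides along with it, flows through one of a handful of cloud gatekeepers.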
So who makes sure our human rights are protected in the changing of the guard that LLMs represent? This is way bigger than just the future of privacy and sensitive-data mining, or the use of A.I. for academic cheating, phishing and fraud.
Without rule of law and A.I. alignment, this won’t end well.
If you enjoyed this read, you can highlight a part you liked and “Restack” it on Notes. We need to debate Generative A.I. with more caution, concern and conscientiousness.