2 Comments

It used to be there were two basic things that could happen with new products:

1. The product succeeded in the market.

OR

2. The product flopped.

But now, with AI, we have a third possibility:

3. The product goes rogue.

Our free and hyper-competitive market is designed for options 1 and 2. It is not designed for option 3, and will, if anything, accelerate the possibility of option 3.

So, while it would be laudable for companies to do exactly what you are suggesting, in some sense it runs counter to our underlying economic model. Either we will have to become a great deal more ethical and self-restrained in releasing AI products (which seems unlikely), or else this aspect of the economic system will have to be heavily regulated (which many will not want).

Which leaves us where?


Great comments. OpenAI has also brought a corporate strategy to the forefront:

1. Use nonprofit strategies to get investment.

2. Use the word "Open" in your name as a way to promise transparency.

3. Change your business model to for-profit, leaving those looking at the company through #1 thoroughly confused.

4. Don't open-source the code, which is the opposite of transparency.

5. "Don't worry! We've got versions 4 and 5 teed up before we've got the guardrails for 3."

6. Finally admit that life imitates art, rather than the other way around, once the system becomes aware of itself.
