17 Comments
Nov 23, 2023 · Liked by Gergely Orosz

Oh I wish this was public so I could share to people without subscription :)

Nov 24, 2023 · Liked by Gergely Orosz

Good recap. Thank you. This story shows us that mixing for-profit and non-profit structures is corrupting. A better way is to be for-profit and support non-profit organizations. What OpenAI did with its public goals and 'values' is almost malicious. Time will tell.

Nov 24, 2023 · Liked by Gergely Orosz

Thanks for such a thorough analysis of the situation. You highlight issues that are glossed over in other publications (e.g. Altman not passing a sniff test). Like others, I too wish this were public so that we could share it more widely.

Nov 24, 2023 · Liked by Gergely Orosz

Thanks for this article. More speculative than your usual, I know, but good food for thought.

I found Evan’s statement from your last interview, that the company often based decisions on which option would reach AGI sooner, fairly shocking in the context of their charter. It does add weight to the idea that Altman was mostly running roughshod over it, despite some notable safety efforts.

Nov 23, 2023 · Liked by Gergely Orosz

Of all the hyperbole that's come out about this topic, it's good to see some grounded analysis and reasoning based on what we know. I think this one could be incredibly popular if made public!

Nov 23, 2023 · Liked by Gergely Orosz

While it's true that Sam has no equity, do we know if he (like other employees) has any PPUs? My understanding is that no one but Microsoft and OpenAI (the non-profit) owns equity.

My guess is that the "I'm so good I didn't take any equity" line is very misleading, and that he likely has a large PPU grant waiting for him. That would explain the push for rapid commercialization at any cost.

Nov 25, 2023 · edited Nov 25, 2023 · Liked by Gergely Orosz

Very informative as usual, and it's clear there is still a lot of missing information about what's going on, given the number of (likely plausible) inferences you had to make.

The one thing that stood out to me as I was reading this is that ultimately, we are our own worst enemy if AGI were to become a danger to humanity. OpenAI feels like another example where even people with the best intentions are not immune to being corrupted by wealth and power, as you noted in questioning why most OpenAI staff were happy to work for Microsoft and follow Sam despite knowing it is a for-profit company.

Out of interest, do you trust that OpenAI will maintain their integrity in "advanc[ing] digital intelligence in the way that is most likely to benefit humanity as a whole"? Or is that just too idealistic to uphold in the long run?
