17 Comments
Nov 23, 2023 · Liked by Gergely Orosz

Oh, I wish this was public so I could share it with people without a subscription :)

Author

Jakub: several people, including you, asked for this, and I've made the post public as a one-off. Feel free to share!

Nov 24, 2023 · Liked by Gergely Orosz

Amazing, thank you!

Nov 24, 2023 · Liked by Gergely Orosz

Good recap, thank you. This story shows us that mixing for-profit and non-profit is corrupting. A better way is to be for-profit and support non-profit organizations. What OpenAI did with its public goals and 'values' is almost malicious. Time will tell.

Nov 24, 2023 · Liked by Gergely Orosz

Thanks for such a thorough analysis of the situation. You highlight issues that are glossed over in other publications (e.g. Altman not passing a sniff test). Like others, I too wish this were public so that we could share it with others.

Author

Philip: made it public, as a one-off!

Nov 24, 2023 · Liked by Gergely Orosz

Amazing! Thanks!

Nov 24, 2023 · Liked by Gergely Orosz

Thanks for this article. More speculative than your usual posts, I know, but good food for thought.

I found Evan’s statement from your last interview, that the company often based decisions on which option would reach AGI sooner, fairly shocking in the context of their charter. It does add weight to the idea that Altman was mostly running roughshod over it, despite some notable safety efforts.

Nov 24, 2023 · Liked by Gergely Orosz

For myself, the more I’ve thought and read about it, the less confidence I have that safe, controlled AGI is even possible — there will always be too many people who want to deploy it in no-holds-barred mode toward one end or another, and AI takeover seems a not-unlikely result. I know many disagree, but none of the arguments that it will remain safe strike me as particularly strong.

From this episode, it seems like self-regulation isn’t much of a solution. As Upton Sinclair said, it’s hard to understand something when your salary depends on not understanding it.

Nov 23, 2023 · Liked by Gergely Orosz

Of all the hyperbole that's come out about this topic, it's good to see some grounded analysis and reasoning about what we know. I think this one could be incredibly popular if made public!

Author

Andrew: thank you. Just made it public as a one-off.

Nov 23, 2023 · Liked by Gergely Orosz

While it's true that Sam has no equity, do we know if he (like other employees) has any PPUs? My understanding is that no one but Microsoft and OpenAI (the nonprofit) owns equity.

My guess is that the "I'm so good I didn't take any equity" line is very misleading, and he likely has a large PPU grant waiting for him. That would explain the push for rapid commercialization at any cost.

Author

Karan: what we know is what Sam said in Congress. It would be strange if he had no PPUs. But this is exactly what I mean when I say something is off. Why do we have to second-guess all of this?

Nov 23, 2023 · Liked by Gergely Orosz

I think Sam may lose a lot of credibility if in fact he has PPUs and is 'hiding' that while publicly asserting that he only does the job because he loves it and makes no money. That's if the public ever finds out. I'm sure a lot of investigative journalists are all over Sam Altman's story.

Nov 25, 2023 · edited Nov 25, 2023 · Liked by Gergely Orosz

Very informative as usual, and it's clear there is still a lot of missing information about what's going on, given the number of (likely plausible) inferences you had to make.

The one thing that stood out to me as I was reading this is that ultimately, we are our own worst enemy if AGI were to become a danger to humanity. OpenAI feels like another example where even people with the best intentions are not immune to being corrupted by wealth and power, as you questioned when noting that most OpenAI staff were happy to follow Sam to Microsoft despite knowing it is a for-profit company.

Out of interest, do you trust that OpenAI will maintain their integrity in "advanc[ing] digital intelligence in the way that is most likely to benefit humanity as a whole"? Or is that just too idealistic to uphold in the long run?

Author

Darren: that statement feels very vague to me, and so different people will interpret it differently. We've seen how such differing interpretations lead to conflict.

The problem OpenAI has is that to stay a leader, they need to move faster than their competition. Moving fast doesn't allow them to carefully consider whether the best course is to slow down. So if at any point "the benefit to humanity as a whole" would be to move slower, this self-regulation is unlikely to come from within. It is far more likely to come from regulation applying to all players (which OpenAI is lobbying for, btw).

This brings us back to the importance of regulators in setting rules and constraints to benefit society (and thus the subset of humanity that they govern).

Nov 25, 2023 · edited Nov 25, 2023 · Liked by Gergely Orosz

The need to move fast to remain the leader in the field shows that it's very challenging to stay completely "free from financial obligations," as OpenAI put it in their original introduction.

My concern is that if we leave it in the hands of regulators/lawmakers, who seem more risk-averse as they don't fully understand AI and see it more as a threat to humanity, they might create regulations that stifle innovation/progress rather than making things safer for us. Do you see a path where a better balance can be achieved? I don't expect you to have the answers, but I'd be interested to hear your thoughts, perhaps in another post if you have a longer response!
