Leadership Churn at OpenAI

News stories broke on November 17, 2023 about the abrupt firing of Sam Altman as CEO of OpenAI. He was invited to a noon Zoom call with the board and fired on the spot. Within about 20 minutes, a press release went out saying, in effect, that the board had been reviewing his actions for some time, had concluded he had not been candid in information he provided, and had lost confidence in his ability to lead the company.

The conflict appears more strategic than anything else, because the board also removed another executive from the board while stating that he would stay with the company in his vital role. Apparently they didn't review that statement with him, because he immediately resigned as well. Two or three other key engineers quit soon after.

The churn at OpenAI may have significant investing and regulatory impacts, because it seems there were conflicts between making rapid progress on AI capabilities and making progress on safeguards around AI output and the core technology's learning / feedback loops. I don't follow the technical and executive alignments closely enough to comment on the validity of the rumor mill. One interpretation is that Sam Altman was more interested in advancing raw capabilities and finding ways to commercialize / monetize the technology, while the board was more cautious and wanted guardrail technology to remain a top priority.

From a talent pool standpoint, it means that at least four or five keystone figures in the core engineering and / or business marketing aspects of AI are adrift between firms. They might hire on at one of the other gravitational hubs in the space or start other competing firms. I think I’d prefer each launch a new startup. The more fragmentation there is in this sector, the longer it will take to coalesce, giving the public and government a chance to formulate appropriate regulations.



The attitudes at the firm are interesting. The new CEO has an air of sophistication.

Normally, of course, you would want sophisticated people in charge, but the airs tell a different story.

Rubbing elbows with greatness. Yes, the algorithms are complex, so yes, any executive at OpenAI is constantly talking shop in complex terms.

The problem is the attitude reminds me of petulant college students who protested Vietnam in complex social and economic terms. Nothing wrong with that.

How can people from the dark ages of Vietnam have had the same attitude of intellectual superiority as those at the latest and greatest AI company?

One problem is who writes the script.

Here's a for instance: a truly nice woman in my art circles believes she owns her AI work, that it is her work. The U.S. Copyright Office (USCO) says it is not copyrightable because a human being did not create it.

Did Altman or his replacement write the computer code? No, or not much of it.

Copyright is now becoming an issue.

Not what you expect as an issue. Writers are suing to protect their IP. This creates a class that OpenAI can negotiate with… a bigger problem than you can imagine.

Sorting this out is going to take far more complex coding than the company has produced so far.

The reason, in a word: "noise". I am not 100% sure how noise is used in writing, but in art that is how AI works: it runs cycles of noise over the subjects until something can be pulled out of the noise.
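To make the "noise" point concrete, here is a toy Python sketch of that kind of diffusion-style loop: start from pure noise and repeatedly denoise until an image emerges. This is entirely my own illustration, not OpenAI's code; the `target` array stands in for what a trained neural network would actually predict.

```python
import numpy as np

# Toy sketch of diffusion-style generation: begin with pure noise and
# run repeated "denoising" cycles. In a real system, a trained model
# predicts the clean image at each step; here a fixed stand-in target
# plays that role so the loop's shape is visible.

rng = np.random.default_rng(0)
target = np.full((8, 8), 0.5)           # stand-in for the model's prediction
x = rng.normal(size=(8, 8))             # step 0: pure noise

for step in range(50):                   # cycles of noise removal
    predicted_clean = target             # a real model would infer this from x
    x = x + 0.1 * (predicted_clean - x)  # move a fraction of the way out of the noise

# After enough cycles, almost nothing of the original noise remains.
print(float(np.abs(x - target).mean()))
```

The point of the sketch is the disconnect the next paragraph describes: the finished output is many denoising steps removed from any single input, which is what makes tracing infringement back through the process so hard.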

There is a huge disconnect between the final product and the infringement. Fishing that out is a nightmare.

The attitude is the same but the idea that the problem can be solved is drifting away.

If OpenAI is found to have infringed but cannot negotiate royalties for the class, then the liabilities will crush the company.

At first, we might say 10 writings were used, so the royalty should be split ten ways. Then we might ask what percentage of each was used. Now we are lost. Worse, it's never just 10 writings; it could be 1,000 or 20,000. Who knows how many samples?
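The apportionment problem above can be sketched in a few lines of Python. Every number here is invented purely for illustration; the hypothetical `split_royalties` helper just divides a pool pro-rata by usage weight, which is the easy part — the hard part is that each weight would itself be a contested estimate.

```python
# Hypothetical sketch of the royalty-apportionment problem: split a
# pool across the works that were used, pro-rata by how much of each
# was used. All figures are invented for illustration.

def split_royalties(pool: float, usage: dict[str, float]) -> dict[str, float]:
    """Return each work's share of `pool`, weighted by its usage amount."""
    total = sum(usage.values())
    return {work: pool * frac / total for work, frac in usage.items()}

# The easy case: 10 writings used equally, so the royalty splits ten ways.
equal = split_royalties(1000.0, {f"work_{i}": 1.0 for i in range(10)})
print(equal["work_0"])  # → 100.0

# The realistic case: 20,000 writings with unequal usage. Each share is
# tiny, and the bookkeeping (and the fights over each weight) explodes.
many = split_royalties(1000.0, {f"work_{i}": float(i % 7 + 1) for i in range(20_000)})
print(round(min(many.values()), 4), len(many))  # → 0.0125 20000
```

Even this toy version assumes we can measure "what percentage of each was used" — exactly the number the noise-driven process makes nearly impossible to recover.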


Hmmm, makes me wonder. What could go wrong?


The Atlantic posted an update on the leadership intrigue at OpenAI based on interviews with employees over the weekend. The story can be found here.

The nutshell version… OpenAI was created with a corporate structure aimed at operating more as a software research lab than as a software services company hoping to grow exponentially. Key players on the board intend to stick with that goal but felt Sam Altman was pushing commercialization too quickly. Employees cited in the story described multiple cases where engineers identified shortcomings in the platform's instrumentation that prevented them from learning how the tool could be / was being misused, and thus from adding filters to block such abuses. At the same time, Altman was pushing new feature releases and the physical work needed to keep scaling with growth.

The story ends with this ominous (and correct) note.

Even with Altman out, this tumultuous weekend showed just how few people have a say in the progression of what might be the most consequential technology of our age. AI’s future is being determined by an ideological fight between wealthy techno-optimists, zealous doomers, and multibillion-dollar companies. The fate of OpenAI might hang in the balance, but the company’s conceit—the openness it is named after—showed its limits. The future, it seems, will be decided behind closed doors.