Shake up at OpenAI...its implications for AI (and MSFT $10B+ in)

In case you have not read the latest on AI, news broke Friday afternoon that Sam Altman, CEO and cofounder of OpenAI, was fired by the board. His exit was a complete surprise to everyone, including Microsoft, which has invested more than $10B in the company and owns a 49% stake. Sam’s departure was quickly followed by the resignations of several OpenAI technology leaders and AI researchers. Over the next few days, we will learn more about the what, why, who, when, and how of this event.

As an AI investor, I am more interested in where we should invest going forward, and in the business and product implications of these changes.

  • MSFT has built all of their AI products on OpenAI’s ChatGPT. With the departure of key technical staff at OpenAI, is the future development roadmap of ChatGPT, and by extension MSFT’s AI trajectory, in jeopardy?
  • Where will Sam et al. end up, and who will they partner with? Likely they will create a new startup. Recently, Sam has been flying around the globe trying to raise about $90B in venture funds, so he now has a rich Rolodex of investors ready to put money to work on anything he initiates. The question is which large company now becomes their premium partner: GOOG? META? NVDA? TSLA? An old-world large cap looking to revitalize its growth, like CSCO or IBM? Maybe even MSFT, if there is already a certain level of trust between Sam and Satya.
  • In terms of cloud implications, MSFT Azure has been on a tear due to the rollout of their AI cloud services and ChatGPT-based Copilot products. Will interest in MSFT’s AI offerings wane now that the underlying OpenAI product roadmap is at risk?

Very interesting time to be an investor!


You wonder what kind of non-compete is in place and whether it’s enforceable. That alone might find him working someplace exotic like the Bahamas.


Too early to tell what they might work on next. But these are smart guys, arguably the best in AI. I am sure the corporate lawyers will find ways of guiding them through any non-competes, if those even exist. It is also possible that the MSFT $10B investment came with some special clauses.

The short-term cloud hangs mostly over MSFT and their AI trajectory. Most of what they are doing on the product side is OpenAI-based. If the cream of that team starts exiting, it adds risk to their future roadmap, and customers will think twice before making longer-term commitments.

Too early to tell…let the analysis paralysis begin!


This is fascinating, and I am totally unqualified to comment from a tech perspective. But consider that every earnings release, across a very wide swath of industries, has spent significant time on AI initiatives and applications, with stock prices moving bigly based on claimed or potential AI applications and integrations. Money, baby!
Plus, there are international agreements curtailing AI applications. This looks like any one of dozens of movies that you good stuff thread guys could reference.

Here is, perhaps, the premier batch of AI execs and their cohorts, and mere billions of capital (is it unrealistic to assume there might be $100 billion out there competing?). Governments, the largest corporations in existence, rogue billionaires. Was there some sort of double-cross going on? Lots of ego, one can be sure. The previous posts state that they were out and about gathering capital. That would certainly trigger confrontation and termination. Am I seeing any of these things wrong? All of it?


AI was once a very limited field. At this point, Microsoft probably has second- and third-tier people very capable of carrying on the program, and glad to have promotion opportunities.

They may not have the creative vision of the first tier, but they may surprise you. Lots of energy. Out to prove they can do the job.

It didn’t take long for Sam to land a job…doc

OpenAI appoints new boss as Sam Altman joins Microsoft in Silicon Valley twist | Reuters

In motion, the future is. Hard to see, the future is…

A few random thoughts, and none of it likely to hold true in the future, as YodaDreamer already pointed out:

  1. I get a vibe around Sam Altman/OpenAI and Microsoft that reminds me of Palmer Luckey/Oculus and then-Facebook. Microsoft will use up Altman, try to control him, it won’t work out that well, and they will probably break up sooner rather than later.

  2. Is AI real? Yes. Is it obvious who to invest in? Not really. NVDA benefits for now, and for a few more years at least, I would think, but markets and mega-tech competitors aren’t going to just play dead and pay whatever NVDA wants for their GPUs. Competition is coming fast, is my guess.

  3. Similar to #2, AI reminds me of the advent of internet company websites. What I mean is that in an earlier career, circa 1999 to the early 2000s, many smaller and even some larger businesses either didn’t have websites, or had mediocre “websites” that were basically a Contact Us page and nothing else. Today, most websites are actionable and meant to be a profit center for every business, whether B2C or B2B, or at least offer in-depth info on the company. Basically, just about every business has a website. It has been commoditized. I truly believe AI will get to this point. We will all have personal AI assistants in some form, families may have overarching family AIs that manage your kids’ AIs, and every business will have AI at point-of-sale, integrated into security video cams, on their websites, tied to shipping/tracking, analyzing supply chains for optimization and efficiency, baked into how grocery stores and Costco decide which products go in which aisles, and so on. It will permeate everything in the way that websites/digital permeate everything today. If you see something and can imagine it has a digital footprint, then just imagine it can have an AI footprint too. Someone (probably, ironically, an AI) will crack the code on latency and minimal hardware, software, and cellular/internet requirements, diminishing NVDA’s dominance a bit and spreading AI into every nook and cranny with IoT.

  4. Related to #3, as AI is commoditized, it becomes just another cost of doing business, like insurance and having a website and digital presence. Only I think it could wind up being much more expensive, due to a whole regulatory framework that probably gets imposed worldwide and at different levels in many countries (like Europe with privacy laws), which leads to a whole industry of folks who exist just to help with compliance and auditing, both internal and external to companies. On top of that, cybersecurity (which is already a form of “insurance” companies need to have these days) will morph a bit into an AI-focused realm. Folks using AI for evil intent, folks using AI to combat the evil AI. Hilarity, and cost, ensue.

  5. “But Dreamer, AI will change the world!” Ok, yes. Yes it will. But if your prediction comes true, I am not sure you will like the consequences, which could be, among other things: massive unemployment, industry shifts for where jobs are needed, and no end to potentially devastating hacking/misinformation when many wake up one day to see their bank accounts have been looted or their identity stolen or impossible-to-verify AI-generated document trails or videos are created…to the point where it is like the world operates like the inside of Trump’s brain and the truth truly is what you decide it is, because there is no single source of truth. We haven’t even touched on what AI means in war and military usage, let alone basic citizen privacy.

  6. So while some companies may benefit from AI in terms of stock price in the next couple of years, and the biggest AI company on the planet is likely just a startup idea somewhere at the moment, I believe the costs of doing business with AI will be much higher than most realize, and what we gain in efficiency we will counterbalance with increased capex and increased unemployment.

I am generally optimistic on technology and the future. I think things will all work out ok in the end, but I think we should all expect some really bad decisions and wrong turns along the way.



A really bad decision might be the last one for many people.
Because when I look around, I realize that in the last few years this has already happened several times.

Could AI use eliminate all the toxicity that is spreading faster and faster on Planet Earth?