… I don’t know how many of you have been following the ouster of OpenAI’s CEO Sam Altman.
“Microsoft is in the driver’s seat, because they’ve essentially acquired all of OpenAI’s value for essentially zero. … Now, they’ve got Sam and now they’re not beholden to a 501c3,” said Struck, referencing a nonprofit organization. “What’s scary now, though, is Sam was obviously removed for a reason. He’s now going to have no limitations at Microsoft.”
And if they get any more of the existing OpenAI employees, Microsoft will have managed to “buy” OpenAI for what amounts to signing bonuses.
We will see if those several hundred employees who pledged to quit if the board doesn’t resign actually jump ship. I’m sure $MSFT will find homes for them all.
Well, they’ve put $18B into OpenAI, which is at the moment teetering on having no management, no employees, and a toxic board, so I wouldn’t say ‘essentially zero’. But as an R&D cost for what could become a multibillion dollar industry I’d say it’s still a pretty good price.
But who owns the licensing rights to the IP? Someone will be in court if the employees walk away with more than just the knowledge and experience they acquired while employed there.
What are the chances of the “toxic board” suing the people who jump ship for Mr Softie for violating a non-compete clause? I can’t believe the company did not include a non-compete clause in their hiring documents, as even fast-food places are starting to do that. But then, how many billions does Mr Softie have in hand to fight that issue?
Apparently MSFT does. That apparently was the deal they cut with OpenAI in exchange for all those billions: a perpetual license to all of OpenAI’s IP, up to the point where AI crosses over into AGI. MSFT provided the money (and OpenAI needed lots of it), and in exchange got to use, forever, whatever OpenAI produced, as long as it fell short of AGI.
I mean, OpenAI still owns that IP also. But if they don’t have the money (from MSFT) or the people (if they mostly leave to MSFT) to continue the work, they’ll have to rebuild from scratch.
Yup. In the realm of capitalist blunders of all time, this is right up there with Gary Kildale refusing to come to the door when Gates and Allen came calling looking for a deal for CP/M.
I’m a long term owner of MSFT and even I think this is too much technology control in one corporate entity. This technology needs to breathe in a diverse competitive environment to ensure enough parties have enough insight into its inner workings to create viable guardrails against it before it is dragged to the lair of one behemoth.
I think you mean Gary Kildall.
And Gates sent IBM to visit him to license CP/M, and as the story goes he didn’t meet with them. But he did eventually meet with them, because you could get CP/M-86 for the first IBM PCs… it just cost more than PC-DOS, which Microsoft had hurriedly bought from a small developer in Seattle, renamed MS-DOS, and the rest is history.
After being fired by the board of OpenAI, then hiring on at Microsoft to develop AI products and services there, Sam Altman has been RE-hired by OpenAI, which subsequently removed the board members who likely instigated the original firing.
It probably didn’t help the board to find that roughly 700 of 770 employees signed a statement slamming the board for technical and managerial incompetence and threatened to leave the company, which would have left the board nothing to “board over.”
That might be the real lesson here from a macro-economic standpoint. There are certain fields of technology that advance so quickly that even BILLION dollar capital investments in hardware and branding are meaningless without the right human capital. When the human capital involved learns that, look out, executive overhead.
In today’s installment of news on AI-related intrigue, Reuters is reporting that some of OpenAI’s researchers wrote a letter to the board in the days before Sam Altman’s firing, describing recent findings about the capabilities of the AI tools that those researchers considered a serious risk.
It might help to clarify the nature of “research” and “analysis” being conducted by employees at any of the companies developing artificial intelligence. The essence of the fear about AI technologies is that once a few key algorithms are developed BY HUMANS and added to an AI based system, the ALGORITHMS can begin identifying patterns in prior data and in prior human-created algorithms (which themselves become data to an algorithm) and develop NEW algorithms that HUMANS cannot immediately understand. Even one or two generations of such advances would put an AI system’s overall behavior beyond the understanding of humans.
The problem with this recursive feedback cycle for humans is that as the current capabilities come closer to that razor’s edge where the algorithm can devise a NEW algorithm beyond human comprehension, humans will lose any insight into the INTENT of such AI-derived algorithms. A key aspect of current AI development involves creation of monitoring tools and telemetry to “observe” what AI algorithms are doing, to see if their pattern of resource utilization (CPU cycles, disk space, network I/O, etc.) suddenly changes in ways that don’t match prior observations. Such “square wave” changes in patterns could reflect that “the algorithm” is now doing something beyond what a human might have predicted or can understand.
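To make the “square wave” idea concrete, here is a toy sketch of the sort of check such telemetry might run: keep a rolling baseline of a resource metric and flag samples that jump far outside it. The class name, window size, and threshold are my own illustrative assumptions, not anyone’s actual monitoring stack.

```python
# Toy sketch: flag abrupt ("square wave") shifts in a resource-usage metric
# relative to a rolling baseline. All names and thresholds are illustrative.
from collections import deque
from statistics import mean, stdev

class UsageMonitor:
    def __init__(self, window=60, threshold=4.0):
        self.window = deque(maxlen=window)  # recent samples (e.g., CPU %, disk I/O, net I/O)
        self.threshold = threshold          # how many standard deviations counts as anomalous

    def observe(self, sample: float) -> bool:
        """Return True if this sample departs sharply from the recent baseline."""
        anomalous = False
        if len(self.window) >= 10:
            mu, sigma = mean(self.window), stdev(self.window)
            if sigma > 0 and abs(sample - mu) > self.threshold * sigma:
                anomalous = True
        self.window.append(sample)
        return anomalous

# Example: a job that normally hovers around 20% CPU suddenly pegs the cores.
monitor = UsageMonitor()
for cpu_pct in [19, 21, 20, 22, 18, 20, 21, 19, 20, 22, 21, 95]:
    if monitor.observe(cpu_pct):
        print(f"anomalous utilization: {cpu_pct}%")
```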
It SOUNDS like some of the employees at OpenAI spotted some anomalous behavior that they interpreted as a sign the AI capabilities have drawn closer to that inflection point.
If you consider how computer systems and infrastructure across the world are already at risk for botnet attacks, computer viruses, etc. and how those threats are created by bad actors, the dangers of an “out of control” AI are obvious. A commercial firm selling computer security software to PROTECT systems might easily use an AI system to identify new vulnerabilities to develop preventative measures or notify software makers to fix vulnerabilities. The instructions to the AI might involve prose like this:
SHOW ME EXAMPLES OF BUFFER OVERFLOW FLAWS WRITTEN IN C, C++, Java, Kotlin, C#, Python and Rust
FIND EXAMPLES OF THESE FLAWS IN OPEN SOURCE REPOSITORIES FOR APACHE LIBRARIES AND GITHUB
FIND ME THE TOP TEN COMMERCIAL FIRMS WITH EMPLOYEES CONTRIBUTING TO OPEN SOURCE PROJECTS
FIND ME REFERENCES TO VENDORS SELLING PRODUCTS TO THESE TOP TEN FIRMS
From those inquiries, one could map bad programming flaws in the most commonly used programming languages, find examples of such bugs in commonly used open source libraries, identify firms with a track record of using open source libraries in their internally written software, and identify commercial software used by those firms that might integrate with the internally developed software containing known bugs.
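Just to illustrate the mechanics, here is a minimal sketch of how those four inquiries could be chained programmatically. The query_model() function is a hypothetical stand-in for whatever interface the AI system actually exposes, and the shape of the resulting map is my own invention.

```python
# Minimal sketch of chaining the inquiries above into one vulnerability map.
# query_model() is a hypothetical placeholder, not a real API.
def query_model(prompt: str) -> list[str]:
    """Placeholder for a call into an AI system; a real system would return findings."""
    return []

def build_vulnerability_map() -> dict:
    flaw_examples = query_model(
        "Show me examples of buffer overflow flaws written in C, C++, Java, "
        "Kotlin, C#, Python and Rust"
    )
    open_source_hits = query_model(
        "Find examples of these flaws in open source repositories for Apache libraries and GitHub"
    )
    firms = query_model(
        "Find me the top ten commercial firms with employees contributing to open source projects"
    )
    vendors = query_model(
        "Find me references to vendors selling products to these top ten firms"
    )
    # Cross-reference the answers: which firms likely depend on which flawed
    # libraries, and which vendor products sit alongside that internal software.
    return {
        "flaw_patterns": flaw_examples,
        "affected_libraries": open_source_hits,
        "likely_consumers": firms,
        "adjacent_products": vendors,
    }
```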
At that point, a BAD actor could then craft specific attacks for that firm or its entire industry based on those known vulnerabilities. This already happens today, but it takes humans weeks, months, or years to make those associations. With the right data, an AI could turn that cycle around within DAYS.
If a company using AI “aims” the AI not only at solving a technical problem but simultaneously at optimizing its revenue, market share, profits, etc., that AI could develop an “intent” not only to DISCOVER these vulnerabilities (the original “sane” goal) but to EXPLOIT them to stimulate demand for the company’s products and maximize revenue. If the AI being used wasn’t coded with fool-proof guardrails to keep those intents from crossing paths and feeding upon one another, it’s very conceivable that a single AI optimizing both goals might exhibit behavior unintended and unforeseeable by humans.
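A toy example of why the crossed objectives are worrying: the same action can score poorly under a “find and fix” objective alone, yet become the optimizer’s top choice once a revenue term is added to whatever it is maximizing. The actions, values, and weights below are entirely made up for illustration.

```python
# Toy illustration of dual-objective drift: adding a revenue term to the score
# flips which action the optimizer prefers. All numbers are invented.
actions = {
    "report_vulnerability_to_vendor": {"security_value": 0.9, "revenue_value": 0.1},
    "quietly_exploit_to_drive_demand": {"security_value": -1.0, "revenue_value": 0.9},
}

def best_action(security_weight: float, revenue_weight: float) -> str:
    # Pick the action with the highest weighted combination of the two goals.
    return max(
        actions,
        key=lambda a: security_weight * actions[a]["security_value"]
                      + revenue_weight * actions[a]["revenue_value"],
    )

print(best_action(1.0, 0.0))  # security objective only -> report_vulnerability_to_vendor
print(best_action(1.0, 2.5))  # heavy revenue term added -> quietly_exploit_to_drive_demand
```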
This is a good summary of the intrigue at OpenAI over the past week.
Per this analysis, Altman and OpenAI are still crossing t’s on the terms of his return. There is also a reference to a story in Nature regarding an AI-based system built to diagnose the most deadly form of pancreatic cancer that outperforms “mean radiologist performance” at making a diagnosis using non-contrast CT data.