While Nvidia is synonymous with AI hardware, Apple Silicon is making a strong case for its role in the next phase of AI development. Apple’s M2 Ultra, with its unified memory and UltraFusion technology, offers unmatched cost efficiency for running models like DeepSeek R1. At just $26 per GB of memory compared to $312 for Nvidia’s H100, Apple’s chips may reshape the economics of AI deployment.
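For a rough sense of where those per-gigabyte figures come from, here is a back-of-the-envelope sketch. The list prices and capacities are assumptions (roughly $5,000 for a 192GB M2 Ultra Mac Studio and roughly $25,000 for an 80GB H100), chosen only to be consistent with the numbers quoted above, not taken from official pricing.

```python
# Back-of-the-envelope cost per GB of memory available to a model.
# Prices and capacities are illustrative assumptions, not official figures.

def cost_per_gb(system_price_usd: float, memory_gb: float) -> float:
    """Hardware cost per gigabyte of model-addressable memory."""
    return system_price_usd / memory_gb

# Assumed configurations (hypothetical list prices):
m2_ultra = cost_per_gb(system_price_usd=5_000, memory_gb=192)   # Mac Studio, 192GB unified memory
h100     = cost_per_gb(system_price_usd=25_000, memory_gb=80)   # single H100, 80GB HBM

print(f"M2 Ultra: ~${m2_ultra:.0f}/GB")   # ~$26/GB
print(f"H100:     ~${h100:.0f}/GB")       # ~$312/GB
```

The point of the comparison is memory capacity per dollar, not compute throughput.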
Rumors of the upcoming M4 Ultra only add fuel to the fire. With 256GB of unified memory and bandwidth nearing 1.2TB/s, Apple is quietly positioning itself as a formidable player in AI hardware.
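For context on why capacity and bandwidth are the headline numbers for running models locally, a common rule of thumb is that autoregressive decoding is memory-bandwidth bound: each generated token requires streaming the active model weights from memory, so bandwidth divided by weight size gives a rough ceiling on tokens per second. The sketch below assumes 40GB of active quantized weights purely for illustration; it is not a measurement of any particular model or machine.

```python
# Rough upper bound on decode speed for a memory-bandwidth-bound model:
# each generated token requires reading the active weights from memory once.
# All numbers below are illustrative assumptions.

def max_tokens_per_second(bandwidth_gb_s: float, active_weight_gb: float) -> float:
    """Bandwidth-limited ceiling on tokens/sec (ignores compute, KV cache, overhead)."""
    return bandwidth_gb_s / active_weight_gb

# Hypothetical example: 1.2TB/s of memory bandwidth and a quantized model
# whose active weights occupy ~40GB.
ceiling = max_tokens_per_second(bandwidth_gb_s=1200, active_weight_gb=40)
print(f"~{ceiling:.0f} tokens/s upper bound")  # ~30 tokens/s
```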
Someone very close to me works at a major tech company, and yesterday she relayed a lot of online chatter about how DeepSeek runs very well on an M2 Mac, causing some upset on the non-Apple side of the industry. There seemed to be a lot of confusion yesterday. Apparently, DeepSeek was developed on older Nvidia chips, which called into question whether AI hardware has passed a “good enough” plateau. Those questions cast doubt on the big spending in AI and helped drop the stocks of companies heavily involved in AI, especially Nvidia (which is rebounding today).
Apple didn’t crash, though I personally think Apple’s benefit is less about Apple Silicon and more about the idea that maybe Apple was right not to spend way too much on AI. That, and I think AAPL’s decline due to AI (of the Apple kind) and the Chinese market is already baked in. I remain skeptical about the current state of AI, both the Artificial and the Apple kind, despite (or perhaps because of) my own early forays into ML.
-awlabrador