Lisa Su interview

NOW THAT SU has revived and energized AMD, she’s focused on ensuring its future in a highly competitive market. While she diligently rebuilt its business, Nvidia cofounder and CEO Jensen Huang was hard at work making his company the go-to vendor for artificial intelligence computing power.

Huang, who is a distant relative of Su’s, sees a gold mine in selling the chips that buttress AI tools like ChatGPT. Demand has already catapulted Nvidia’s share price to near all-time highs with a forward P/E of around 64x—nearly double AMD’s. “It’s why investors are looking at AMD: because they want the poor man’s Nvidia,” says Stacy Rasgon, an analyst at Bernstein. “Maybe the market is so big they don’t need to be competitive.”


The AI chip market isn’t AMD’s yet, but Su intends it to be. She hopes to take on Nvidia’s AI-centric H100 GPUs by betting on annual chip upgrades meant to burnish AMD’s position. Under her leadership, R&D spending has risen nearly fourfold, to $5 billion—almost as much as AMD’s entire revenue when she took over.

A new supercomputer at Tennessee’s Oak Ridge National Laboratory—the fastest in the world when completed in 2022—is Su’s passion project. The groundbreaking machine was built to have the processing power of at least a quintillion calculations per second, and it is a showcase for AMD’s AI chips. She’s throwing a curveball as well: The MI300 chip, which fuses CPUs with GPUs in a bid to counter Nvidia’s new superchip, will ship later this year.

She has also been maneuvering against Nvidia with acquisitions, such as her $48.8 billion takeover in 2022 of Xilinx, a company that makes programmable processors that help speed up tasks like video compression. As part of the deal, Victor Peng, Xilinx’s former CEO, became AMD’s president and leader of AI strategy.

Beyond Nvidia lurk other emerging threats: Some of AMD’s customers have begun doing chip development of their own—a move designed to mitigate their dependence on the semiconductor giants. Amazon, for example, designed a server chip in 2018 for its AWS business. Google has spent nearly a decade developing its own AI chips, dubbed Tensor Processing Units, to help “read” the names of the signs captured by its roving Street View cameras and provide the horsepower behind the company’s Bard chatbot. Even Meta has plans to build its own AI hardware.

Su shrugs off concerns that her customers could someday be competitors. “It’s natural,” she says, for companies to want to build their own components as they look for efficiencies in their operations. But she thinks they can do only so much without the technical expertise AMD has built over the decades. “I think it’s unlikely that any of our customers are going to replicate that entire ecosystem.”

Su is in a good position to take a run at the AI chip market. But she knows well how quickly turnarounds can become downfalls. There’s more work to be done to ensure AMD endures: “I think there’s another phase for AMD. We had to prove that we were a good company. I think we’ve done that. Proving, again, that you’re great, and that you have a lasting legacy of what you’re contributing to the world, those are interesting problems for me.”


A big part of Su’s strategy was inking new deals with the tech giants, which needed oodles of CPUs to power their exploding cloud businesses. “For us, there are really three microprocessor partners. We have Nvidia, Intel, AMD,” says Thomas Kurian, CEO of Google Cloud. “AMD, when I joined, was not really a significant part of our ecosystem—at all. And it is a credit to Lisa that they are a very important partner for us now.”