Thanks, I am sure many of those guys are well ahead of me in this.
That said, knowing where the buttons are and actually doing something are two different things. Those guys often have little to nothing done.
I looked up a YouTube video on my first need in UE. Bingo!! One big step worked out mostly. The rest is easier.
The real thing I need is a tutorial series on basic C++. I don't need huge expertise, just enough to get around. There are some specs I will need to change. I am going to check out the UE website to see if they have tutorials.
Usually in the UE tools I can get by without coding. But I can go in later and study some C++ if I need it, when all else fails.
Nvidia reported its second-quarter earnings after the bell Wednesday, blowing away already sky-high expectations for the graphics chip giant as the AI hype train keeps plowing forward. The company reported a 101% year-over-year jump in revenue, while adjusted earnings per share rose 429%.
Nvidia also issued revenue guidance of $16 billion plus or minus 2% for the current quarter, eclipsing Wall Street's already lofty expectations of $12.5 billion.
As you may have noticed, the anticipated reversal from a year ago never happened.
NVDA stock jumped after earnings were released a year ago this week and is still up 160%. At the same time, quarterly earnings have gone up 460%, so the P/E ratio has noticeably dropped.
One H100 GPU costs $300,000. ($7.2 Billion for 24,000 units) I don't know what NVDA's cut of that revenue is. They are only doing the software, but that's a significant piece of it.
Aren't H100s about $30,000 each? So probably $720M in hardware, but there is also quite a bit of software and licenses involved, so it could be well over a billion as a package. And remember that this is only ONE customer!!!
Here is what Colette Kress had to say on the conference call: "With ongoing software optimizations, we continue to improve the performance of NVIDIA AI infrastructure for serving AI models. While supply for H100 improved, we are still constrained on H200. At the same time, Blackwell is in full production. We are working to bring up our system and cloud partners for global availability later this year. Demand for H200 and Blackwell is well ahead of supply and we expect demand may exceed supply well into next year."
Here is what Jensen had to say: "The Blackwell platform is in full production and forms the foundation for trillion-parameter-scale generative AI. The combination of Grace CPU, Blackwell GPUs, NVLink, Quantum, Spectrum, NICs and switches, high-speed interconnects and a rich ecosystem of software and partners lets us expand and offer a richer and more complete solution for AI factories than previous generations."
So you guys are talking about the H100, while the H200 is already here and Blackwell will be out by the end of the year. NVDA just keeps compounding and their chips just keep getting faster and faster. Everyone has to upgrade, but when will they say that it is good enough? It looks like a frenzy right now, but at some point everyone settles down and decides they are good enough. I am thinking we have at least a year before that happens, but it is hard to judge.
640K of Blackwell? Or maybe 640K of the chip after that? When is enough, enough?
Nvidia just made [$14 billion worth of profit in a single quarter] thanks to AI chips, and it's hitting the gas from here on out: Nvidia will now design new chips every year instead of once every two years, according to Nvidia CEO Jensen Huang.
I agree BJ, but what NVDA is doing has never been done: a new chip every year. Everyone is waiting for this all to stop and for NVDA to fall, because chips are cyclical. But NVDA says they don't see an end.