Apple used Google AI chips. Competition for Nvidia

After reading that article I realize why TPUs and ASICs are going to have a hard time getting any share. The software has to be written for a very specific case on TPUs, and there are not many companies that will be able to do that. Apple, Meta, Google, Tesla, MSFT: all the mega companies will be able to do it, but hardly anyone else. Although I have heard of some small companies working on ASICs to upend NVDA, I am not sure they will be able to do it, but as always it is wait and see.

1 Like

Well, I’m sure things have evolved since, but as a programmer who worked for a while on the ILLIAC IV, one of the first massively parallel machines (or at least on a simulator of one), I can share one idea. That machine had four quadrants, each with its own control computer and 64 processors. Each of the lower-level processors ran the same program, but could take different branches and such based on the data in its particular stack. There were provisions for moving data between the lower-level processors, and provisions for the control computer to apply logic based on the current status of the subprocessors. Thus a common program type back then was for each processor to solve to some stable point; when all processors had reached such a point, the “answer” was the overall state.
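
That pattern still maps cleanly onto code. Below is a minimal Python sketch of it, with invented numbers and a toy per-processor computation (Newton's iteration for the square root of 2), just to show the shape: every processing element runs the same program in lockstep, finished elements are masked out, and the control loop ends when all of them report a stable point.

```python
# Toy sketch of the ILLIAC IV-style pattern described above (all names
# and numbers are invented for illustration): every processing element
# (PE) runs the same program, branches are handled by masking PEs out,
# and the control computer loops until every PE is stable.

NUM_PES = 64          # one quadrant's worth of processing elements
TOLERANCE = 1e-9      # assumed threshold for "reached a stable point"

def step(x):
    """One lockstep step of the common program; here a toy fixed-point
    iteration x -> (x + 2/x) / 2, which converges to sqrt(2)."""
    return (x + 2.0 / x) / 2.0

# Each PE starts with its own data (different initial guesses here).
values = [1.0 + i / NUM_PES for i in range(NUM_PES)]
stable = [False] * NUM_PES

# The control computer's loop: broadcast the same step to every PE,
# masking off the ones that have already stabilized.
while not all(stable):
    for pe in range(NUM_PES):
        if stable[pe]:
            continue                  # this PE sits out the step
        new_value = step(values[pe])
        stable[pe] = abs(new_value - values[pe]) < TOLERANCE
        values[pe] = new_value

# Once every PE is stable, the overall state is the "answer".
print(min(values), max(values))       # both ~1.41421356...
```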

2 Likes

There is a broad-based network effect going on here with NVDA that alternate solutions MUST overcome if they intend to provide a competing solution for GENERAL use.

Can a competitor develop a specific alternate tech stack for a special use case(s)? Absolutely.

Will they also take the time to build (and scale!) that solution for GENERAL use? Not very likely.

If alternate solutions started making moves toward general competition, we would all know about it quickly. There are too many resource-intensive processes involved to keep them hidden from the markets. Sales, marketing, co-development, etc. would all make observing incremental improvements quite simple.

2 Likes

Meta just said that they needed to 10x their compute to go from Llama 3 to Llama 4. They also said they expect to get to Llama 7 in the future. If that is true, then how can anyone stop investing if they want to keep up? NVDA at this point in time is a no-brainer.

1 Like

@bjurasz

“From its initial state”.

Entropy can change in a thermodynamic system with changes in temperature, pressure, or volume, but only in relation to its initial state. In general, total entropy cannot decrease, though a particular system’s entropy can end up higher or lower than its initial state. The second law of thermodynamics states that the entropy of an isolated system, one that exchanges neither matter nor energy with its surroundings, can never decrease.

However, there are ways to lower the entropy of a system that is merely closed (able to exchange energy, though not matter), including:

  • Inputting energy into the system
  • Performing work on the boundary of the system
  • Allowing chemical transitions to release energy from within the system
  • Pumping the system to create a temperature gradient
  • Allowing energy to be exchanged between the system and its external environment

For example, when water is put in a freezer and cools down to turn into ice, its entropy decreases.
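
To put rough numbers on that freezer example (a back-of-the-envelope sketch, assuming an ideal Carnot freezer and textbook values for water): freezing 1 kg of water at $T_c = 273\,\mathrm{K}$ removes the latent heat $Q = mL_f \approx 334\,\mathrm{kJ}$, so

$$\Delta S_{\text{water}} = -\frac{Q}{T_c} \approx -\frac{334\,\mathrm{kJ}}{273\,\mathrm{K}} \approx -1.22\,\mathrm{kJ/K}.$$

An ideal freezer rejecting that heat to a room at $T_h = 293\,\mathrm{K}$ must add compressor work $W = Q\,(T_h/T_c - 1) \approx 24\,\mathrm{kJ}$, so the room receives $Q + W \approx 358\,\mathrm{kJ}$ and

$$\Delta S_{\text{room}} = \frac{Q + W}{T_h} \approx +1.22\,\mathrm{kJ/K}.$$

The water’s entropy falls, but the total $\Delta S_{\text{water}} + \Delta S_{\text{room}}$ is zero in the ideal case and positive for any real freezer, so the second law holds.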

+++++++++++++++++

Similar topic: a few years ago I had an ASUS pro-graphics computer monitor made for Macs, but I had a PC. The signal timing was mismatched, which meant that every now and then the monitor would blank out for a moment.

Further, the 8-bit monitor was transferring data much differently than the GPU expected. That was not just a matter of the brands of the components; the transfer rate differed from the computing rate.

Re: Entropy

$\Delta G = \Delta H - T\,\Delta S$

$\Delta S$ is the entropy change. It can be reversed with applied energy, but it does tend to increase.

If the final state is different, the entropy can be greater or less than it was in the initial state; that is what is confusing people. Within any single given state, the entropy does not decrease.
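
A quick worked case makes that concrete (using round textbook values for water, so treat the numbers as illustrative): for freezing, $\Delta H \approx -6.01\,\mathrm{kJ/mol}$ and $\Delta S \approx -22.0\,\mathrm{J/(mol\,K)}$, so

$$\Delta G = \Delta H - T\,\Delta S = -6010 + 22.0\,T \;\;\mathrm{J/mol}.$$

At $T = 263\,\mathrm{K}$ this gives $\Delta G \approx -224\,\mathrm{J/mol}$ (freezing is spontaneous); at $T = 283\,\mathrm{K}$ it gives $\Delta G \approx +216\,\mathrm{J/mol}$ (freezing is not); and at $T = 273\,\mathrm{K}$ it is essentially zero, the equilibrium melting point. The sign flips purely because of the $T\,\Delta S$ term.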

A refrigerator is NOT an isolated system: the compressor requires energy input, which is discharged in the form of heat.

I don’t know much about chips but I know a bit about thermodynamics.

1 Like

The issue in chip technology is making more efficient systems. A better system does not exactly mean there has been a decrease in entropy; it means a better use of energy than in other systems.

You cannot take a server with zero changes at all and see a decrease in entropy. You will see an increase as the server ages.

Compare liquid water with frozen ice: the ice has lower entropy than the liquid water.

The subject in both cases is just water, but ice is not liquid; they are two different states.

https://www.msn.com/en-us/money/markets/nvidia-s-stock-surges-as-microsoft-says-the-magic-words/ar-BB1qX6uB
Nvidia shares rose 13% Wednesday after Microsoft talked up plans to increase capital spending in the new fiscal year. Microsoft seems to be plowing ahead with its big spending on artificial-intelligence infrastructure because it’s seeing returns on investment there…

“While we still need to hear from Meta tonight, Microsoft’s [capital expenditures] could spell relief for many especially after Google’s ‘no raise’ to Capex forecasts recently…” Melius Research analyst Ben Reitzes wrote earlier…

DB2

1 Like

I see what you are saying. A clear picture is the flash valve in an AC unit. With no energy, the flash valve does nothing. With energy added, the flash valve changes the state of the refrigerant, and the entropy falls. There is a short period between no electricity in the circuit and the flash valve functioning efficiently; it is kind of useless to look at that inefficient moment for the comparison. The comparison that matters is no energy added versus what the entropy does once you add electricity and the flash valve is functioning.

I’m afraid to ask for an explanation.

3 Likes

Lmao

You read it already

This is probably not going to fly far.

NYT Dealbook column by Andrew Ross Sorkin

Huawei is said to be close to releasing a new chip for artificial intelligence. The Chinese tech company has told customers that it could start shipping the semiconductor in the fall, according to The Wall Street Journal. Such a move would mean Huawei had overcome U.S. sanctions designed to slow access to the most advanced technologies and could allow the company to challenge Nvidia’s dominance in the sector.

First, the article says nothing about which foundry will be fabbing this chip. Being comparable to an Nvidia chip doesn’t mean that it is actually as “advanced” as that chip (where “advanced” means how fast and small the transistors are, among other things).

Second, Huawei says they will start shipping in October, but the final specs might be different. That is a big red flag, since an “advanced” chip takes about three months to produce in the fab. How can production have started when the final specs aren’t known (excepting some minor ~5%-10% variance in final clock speeds, maybe)?

Third, “comparable to Nvidia” doesn’t mean it is a plug-in replacement as far as the software goes. There are several dozen AI chip startups in the US attempting this as well.
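
To make that software point concrete, here is a hypothetical sketch (the PyTorch calls are real, but the scenario is invented for illustration) of why “comparable hardware” is not a drop-in replacement: typical AI code hard-codes Nvidia’s CUDA stack, and every such call site has to be ported before another vendor’s chip can run it.

```python
# Hypothetical illustration: the CUDA-specific calls below are exactly
# the kind of thing that has to be rewritten for a non-Nvidia chip.
import torch

if torch.cuda.is_available():               # queries Nvidia's CUDA runtime
    device = torch.device("cuda")
    torch.backends.cudnn.benchmark = True   # cuDNN autotuner, Nvidia-only
else:
    device = torch.device("cpu")            # a new chip needs its own backend

x = torch.randn(1024, 1024, device=device)
y = x @ x                                   # only runs where a backend exists
print(y.shape)
```

And that is just the high-level framework layer; the hand-written CUDA kernels underneath are even harder to port.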

Mike

1 Like

It won’t take long for software development to start, depending on chip capabilities (which are not yet known). Until the chip’s capabilities are known, including what is missing or added, not much can be said.

1 Like

My uncle had an Edsel; it was comparable to a Ford at the time.

2 Likes

Recalls galore
