WSJ today reports that TSMC is heavily dependent on memory chips, which seem to be in decline. They think the dip will be short-lived.
They expect AI chips to grow strongly: about 50% per year for the next five years. Starting from the 6% of revenue that AI chips represent today, that compounds to a 7.6-fold increase (1.5^5 ≈ 7.6), or about 45.6% of current revenue.
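The compounding arithmetic can be checked directly (the 6% current-revenue share and the 50% growth rate are the figures cited in the post):

```python
# If AI chips are ~6% of revenue today and grow 50% per year,
# after five years they reach 1.5^5 of their current size.
growth = 1.5 ** 5               # five years of 50% annual growth
print(round(growth, 2))         # ≈ 7.59

share = 0.06 * growth           # as a fraction of *current* total revenue
print(round(share * 100, 1))    # ≈ 45.6 (percent)
```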
They are the producer of AI chips for Nvidia. That implies nice growth for Nvidia too.
What is meant by an AI chip?
Nvidia is the leading seller of microprocessors for AI. They design the chips, but Taiwan Semiconductor manufactures them. Nvidia is the most profitable company in the AI business, arguably the only one actually making profits from AI, and they can sell all they can make.
Their chips were originally developed for graphics in video games, but they proved to be so much better at this kind of work than other microprocessors that people feel they must have Nvidia chips for their AI development work. Competitors like AMD and Intel are working on AI chips, but so far Nvidia is the leader by far. And they have the dominant programming platform for AI (CUDA). Clearly an industry leader.
Which components of Nvidia's chips are the AI parts? How does the chip's structure differ when it is designed for AI?
Sorry, Leap. I don’t know. They came up with their own design for graphics chips, and those proved to be well suited to AI. That’s all I know.
They are essentially versions of video chips, which need to be very fast in order to keep up with the rapidly changing screen images of today’s “action” video games. Since these are the fastest chips, it is their speed that makes them suitable for AI. Design a faster chip and the AI world will beat a path to your door, because they want the FASTEST chips (so the code runs at maximum speed). This is not a surprise.
I know there are components in those GPUs and chips. I know they are the fastest and were, at least for a long time, very simplified chips. Which components are the AI parts?
I know DLSS is one of the AI components.
Most AI algorithms involve an awful lot of linear algebra, floating-point math, etc. More importantly, much of that math can be done in parallel with other math operations. In other words, it’s not one long linear stretch of operations that must be done one-before-the-other. The GPU provides exactly that: lots of math done in parallel (each pixel can more or less be rendered independently and in parallel with the other pixels).
The GPU is mostly a large parallel math monster. This is what is needed to render a graphics scene. It’s also what is needed for most AI algorithms.
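A rough sketch of the kind of math being described, using NumPy as a stand-in (a GPU does the same class of operation, just spread across thousands of hardware lanes; the array sizes here are arbitrary):

```python
import numpy as np

# A dense neural-network layer, like a pixel-shading workload,
# is mostly matrix math: every output element can be computed
# independently of the others, which is what makes it parallel.
rng = np.random.default_rng(0)
inputs = rng.standard_normal((1024, 512))    # e.g. a batch of activations
weights = rng.standard_normal((512, 256))

outputs = inputs @ weights   # 1024 x 256 dot products, all independent
print(outputs.shape)         # (1024, 256)
```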
You cannot do this with a CPU. Most CPUs might support 2 or 4 threads per core, and you might have a dozen CPU cores per die. When I worked on a mobile GPU at a prior employer a decade ago, each shader core could do 32 parallel operations, and the chip had 6 of them: 192 floating-point operations per clock, in parallel, and that was for a phone, mind you. Fast-forward a decade and ask yourself how many shader cores desktop GPUs have now…
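The per-pixel parallelism described above can be sketched in a few lines (the shading operation is a made-up toy, and the 192-element array only echoes the post's figure; NumPy's vectorization is merely an analogy for SIMD lanes on a GPU):

```python
import numpy as np

# Sequential version: one pixel at a time, like a naive CPU loop.
def shade_loop(pixels):
    out = np.empty_like(pixels)
    for i, p in enumerate(pixels):
        out[i] = min(1.0, p * 1.2 + 0.05)   # toy brightness/contrast op
    return out

# Data-parallel version: every pixel at once, the form that
# SIMD lanes / shader cores can exploit.
def shade_vectorized(pixels):
    return np.minimum(1.0, pixels * 1.2 + 0.05)

pixels = np.linspace(0.0, 1.0, 192)   # 192 values, echoing the post
assert np.allclose(shade_loop(pixels), shade_vectorized(pixels))
```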
Since it is called Artificial Intelligence, how is all that math related to how the human mind works? Is it a form of machine intelligence? Is it more of an extreme tool?
In the copyright world do you think an AI work made by a machine should not be copyrighted by a human being? That is a side question.
It is not. What AI does is look stuff up, which a computer does very quickly and (usually) effectively. Then it starts to put that data together in the way the person asking the question requested. For example, a query for a poem about a particular person/place/thing causes the AI to create that poem.
On copyright, so far the process begins with human input and can be copyrighted. But publishers are already in court claiming AI is copying parts of their publication.
Stay tuned. An evolving area.
That’s way too involved a question for a forum like this. But you could probably do this course, for free, and learn a ton about this:
The TL;DR: computer neural networks are based, somewhat, on our current understanding of how biological neural networks operate, and that is modeled in the computer with lots and lots of math.
Not really. AI is not a database where stuff is “looked up”.
Many AI models attempt to simulate neurons and their connections in terms of strength of stimulus and response. Those artificial neurons are typically programmed in layers, with one layer feeding the next.
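A minimal sketch of those layered artificial neurons, one layer feeding the next (the layer sizes and the sigmoid activation are arbitrary illustrative choices, not a claim about any particular model):

```python
import numpy as np

def sigmoid(x):
    # Smooth "firing strength" between 0 and 1, loosely analogous
    # to a neuron's response to the strength of its stimulus.
    return 1.0 / (1.0 + np.exp(-x))

def layer(inputs, weights, bias):
    # Each output neuron sums its weighted inputs, then "fires".
    return sigmoid(inputs @ weights + bias)

rng = np.random.default_rng(42)
x = rng.standard_normal(4)                                  # 4 input signals
w1, b1 = rng.standard_normal((4, 3)), rng.standard_normal(3)
w2, b2 = rng.standard_normal((3, 1)), rng.standard_normal(1)

hidden = layer(x, w1, b1)        # first layer feeds...
output = layer(hidden, w2, b2)   # ...the next layer
print(output.shape)              # (1,)
```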
This link has a reasonable high level overview: Neural Networks 101: The Basics | Algolia
AI has many faces, but bjurasz has it right. Although AI can be used to ingest databases and texts as fast as it can, so as to plagiarize, imitate, and create fakes on a massively stupefying scale, with (I think) stupefaction as the primary outcome, that is NOT the main push of AI research, nor (I earnestly hope) the AI that matters.
The AI that matters, across multiple fields of study, examines insane numbers of possibilities at stunning speeds to find useful candidates fitting specific criteria; e.g., finding every possible protein that can be constructed from a specified set of amino acids, and then modeling the “behaviors” of those conjectural proteins.
See Now AI Can Be Used to Design New Proteins | The Scientist Magazine®.
No one really knows how either human or artificial minds work.
Some numbers: the human brain has about 86 billion neurons with roughly 100 trillion connections, and human neurons fire about 200 times per second. GPT-3, one of the largest artificial neural networks, has been quoted at 175 billion “neurons” with 1.5 trillion connections (strictly, the 175 billion figure counts parameters, which are closer to connections than to neurons, so the analogy is loose). Modern processors run at about 2 GHz. On those figures, GPT-3 has twice as many “neurons” as the human brain, 1.5% of the connections, and calculates 10 million times faster. It’s not clear what these comparisons really mean when talking about intelligence.
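The ratios in that comparison can be checked, taking the post's figures at face value (as noted, counts like these are only loose analogies between brains and models):

```python
# Figures quoted in the post, not independently verified here.
neurons_brain = 86e9         # human neurons
connections_brain = 100e12   # human synaptic connections
firing_hz = 200              # approximate neuron firing rate

units_gpt3 = 175e9           # GPT-3 "neurons" (parameters)
connections_gpt3 = 1.5e12    # connections figure quoted in the post
clock_hz = 2e9               # ~2 GHz processor

print(units_gpt3 / neurons_brain)            # ≈ 2x the brain's neurons
print(connections_gpt3 / connections_brain)  # 0.015, i.e. 1.5%
print(clock_hz / firing_hz)                  # 10,000,000x faster
```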
The one thing that seems certain is that in 5/10/100 years our brains will be the same but AIs will be better.
Seems the main thing about AI is the ability to make its own choices. You may know a relationship between a few inputs and something of interest, but AI can sort through lots of other data, find some that fit, and use that to refine the output.
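A toy illustration of "refining the output" from data: fitting a single weight by gradient descent (entirely illustrative; real models do this across billions of weights, and the y ≈ 3x relationship here is invented for the example):

```python
import numpy as np

# Toy data: the "something of interest" is y ≈ 3x, plus a little noise.
rng = np.random.default_rng(7)
x = rng.standard_normal(100)
y = 3.0 * x + 0.1 * rng.standard_normal(100)

w = 0.0                                      # start knowing nothing
for _ in range(200):
    grad = 2.0 * np.mean((w * x - y) * x)    # slope of the squared error
    w -= 0.1 * grad                          # nudge toward a better fit
print(round(w, 1))                           # ≈ 3.0, recovered from data
```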
The connections are not the heart of what I was trying to ask.
The way we think is not just a larger amount of math. I get that a bookie will make odds, but that is not the same operation as threads running math in parallel.
How is running threads in parallel intelligence?
@pauleckler human beings sort through things at different levels of intelligence. There are no equals among us in that regard. We are all different.
There is a lot of clean output by AI; I had not expected that. There are artistic successes in finding new ways to abstract, but the quality of the ideas is mixed. It can not fake people out. It can’t add much to the better theoretical concepts of the day. Our better arts are very complex, and I do not see AI adding well in this regard as “output”.
As a management system running on a GPU, that presents a different problem: it is filling in the process. If your FSD means the process is filled in, that is scary for a while. Accidents. We will get past that, at a cost.
But those connections are precisely what is being modeled by computer neural networks. And the firing of those biological ones can absolutely be described by a large amount of math.
That of course is true, or at least true of the thing they are doing. The layer of thought in a human mind, though, is not odds-based in that way.
The human mind, like the machines, is hit or miss over time. But the human brings more to bear and learns faster even with fewer connections. We have more learning systems.
We can drive at age 16. Car FSD is waiting on 2 billion miles traveled to be proven safe. An unsafe 16-year-old will straighten out his act in far less than 2 billion miles. The difference is that all the cars will be FSD from then on, but each human being will need to straighten out his or her act individually.
When it comes to that 16-year-old, you are neglecting the fact that 16 years’ worth of learning went into that human: object detection, risk assessment, hand-eye coordination, reflexes, judgement; the list goes on. Humans are not blank slates when they walk into driver’s education.