AI: Trough of disillusionment

Most new tech hits what is called a “trough of disillusionment” at some point in its life cycle. I wonder whether AI has already hit it (in the 00s and 10s, after decades of AI talk without any substantive results) or whether it still lies ahead. Of course, not all tech follows the same pattern, so it may never experience one. Any ideas or comments?

1 Like

Mark:

You are talking about the Gartner Hype Cycle.

https://www.gartner.com/en/research/methodologies/gartner-hype-cycle

AI is an outlier in that the early versions were a total bust, to the point that AI was pronounced dead. What a disillusionment! The computer technology simply did not yet exist. We are now seeing AI v3.0, based on neural networks instead of expert systems or thousands of lines of hand-coded heuristics.

On the old Fool boards we had a very interesting discussion about who the real beneficiaries of technology are, suppliers or users. Users won. Nvidia is mostly a supplier (chips and software) while Tesla is a user: FSD, RoboTaxis, and the Optimus robot.

For investors the question is, “Who can best monetize AI?” Andrej Karpathy, formerly with Tesla, has confirmed that size matters. The human brain has billions of neurons and trillions of synapses (connections); the largest AI data centers are tiny by comparison. The recipe, then, is to have mountains of cash to build the data centers, plus markets for millions or billions of AI-powered gadgets.
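
A rough back-of-envelope comparison of the scales involved (the brain figures are the commonly cited estimates; the model parameter count is an assumed round number for a frontier LLM, not a published spec):

```python
# Back-of-envelope scale comparison: human brain vs. a large AI model.
# All figures are rough orders of magnitude, not measurements.

brain_neurons = 86e9     # ~86 billion neurons (commonly cited estimate)
brain_synapses = 1e14    # ~100 trillion synapses (commonly cited estimate)
llm_parameters = 1e12    # assumed ~1 trillion parameters for a frontier model

ratio = brain_synapses / llm_parameters
print(f"Synapses per model parameter: ~{ratio:,.0f}x")  # ~100x more connections
```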

Two stories come to mind:

Intel built Fabulous Fabs, and for a time it was a fantastic investment because its chips powered most personal computers and data centers.

Apple developed the Human Interface Guidelines for the Mac (based on prior research at Xerox PARC), and this technology is what sets the iPhone apart from the rest.

Intel, the supplier – very cash intensive
Apple, the user – increasing returns on steroids

https://karpathy.ai

The Captain’s bet is on Elon/Tesla

5 Likes

We do hear of huge investments in AI, both hardware and software. No one wants to be left behind. But everyone wonders whether all those company projects can turn a profit.

When will AI hardware markets reach saturation? They require lots of data center capacity. When will data center capacity reach saturation?

Or will both become obsolete and require continuous upgrades to keep up with technology?

I don’t think AI hardware/software is the best bet. As I mentioned above, I would bet on the best AI user, not the best AI supplier (hardware/software). Who has the gadgets that can monetize AI?

The Captain

1 Like

Not for a very long time, I think.

In related news:

https://www.businesswire.com/news/home/20241210488084/en/Intersect-Power-Forms-Strategic-Partnership-with-Google-and-TPG-Rise-Climate-to-Co-Locate-Data-Center-Load-and-Clean-Power-Generation

The partnership is designed to deliver gigawatts of new data center capacity across the US with Intersect Power catalyzing a targeted $20 billion in renewable power infrastructure investment by the end of the decade.

I keep reading that electricity demand for data centers is predicted to triple in just the next six years. If accurate, the hardware buildout would also have to grow at a comparable pace.
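
For what it’s worth, a tripling over six years implies roughly 20% compound growth per year (a quick sanity check assuming smooth compounding):

```python
# Implied compound annual growth rate if data-center electricity demand
# triples over six years (assumes smooth compounding).
years = 6
growth_multiple = 3.0

cagr = growth_multiple ** (1 / years) - 1
print(f"Implied compound annual growth: {cagr:.1%}")  # ~20.1% per year
```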

1 Like

In early 2000 Rambus had an agreed-upon growth rate of 500% per year for the next 5 years.

Less than 3 months later it was impossible to find the 500% growth rate on any online service. Magic how that happens.

If anyone here is waiting to be told the ride is over… better to make your own guess. By the time it is a fact, it will be far too late.

The issue right now is when, not if… it could be months from now… but when do the US equity markets begin to trade at negative deviations from the value trend line?

The financial capital of the world is now Frankfurt.

2 Likes

Given the news about the German economy, one might wonder.

It had been London, but with Brexit two things happened. Banks in London moved capital into Frankfurt. And supply-side economics and the push to lower taxes shifted from the US/UK to Germany/Central Europe.

The measure, of course, is not right this minute. In the long run, Frankfurt.

VW be damned.

1 Like

AI is advancing at a warp speed never seen before. The fundamental requirement for AI is size. The human brain has billions of neurons and trillions of synapses. Until now, experts thought that AI data centers were limited to about 3 × 10^4 (30,000) Nvidia chips. Elon broke that barrier. Google’s quantum computer might break the compute speed limit.

The Captain

1 Like

Is it? My understanding is that the models are somewhat plateauing - the latest versions of the major LLMs show relatively modest gains over their previous versions, the limiting factor being the input data.

We’ve now reached the point where all the free data - the entirety of human-written and visual content on the internet - has been used as training data. We don’t have a second internet to move on to, and getting non-internet data is vastly more expensive.

So is AI advancing at warp speed? Or has it reached the point where improvement is now only incremental?

LLMs might have a data-limitation problem, but there are other areas that do not have such limitations, like self-driving and humanoid robots. What is plateauing is one AI application, not AI itself.

BTW, can LLMs be monetized as highly as self-driving and humanoid robots?

The Captain

I would disagree about humanoid robots. They face an even more daunting data problem - there’s no pre-existing large collection of non-visual data to train them on, and not really a big collection of useful visual data, either.

There’s no analog to the internet’s massive store of written and visual data, or the driving data that Tesla was able to collect at minimal cost. The humanoid robot makers will have to manufacture those datasets by hand, and that kind of artisanal hand-crafted data is very expensive.

1 Like

Huang has talked about the large potential beyond language-based models. One example is modeling entire factories (industrial AI). First, program in all the necessary laws of physics and chemistry. Then model your manufacturing processes for, say, a particular auto part and the engineering involved. Optimize for efficiency, reliability, et cetera.
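
A toy sketch of that idea, as I read it: wrap a first-principles process model in an optimizer and search for the best operating point. The “simulation” and all numbers below are invented for illustration; this is not NVIDIA’s actual tooling:

```python
# Toy illustration of "industrial AI": wrap a physics-based process model
# in an optimizer and search for the most efficient operating point.
# The model and all numbers are invented for illustration only.
import math

def simulated_yield(temperature_c: float, feed_rate: float) -> float:
    """Hypothetical stand-in for a physics/chemistry simulation of one process step."""
    # In this made-up model, yield peaks near 450 C and a feed rate of 2.0 units/s.
    return math.exp(-((temperature_c - 450) / 80) ** 2) * math.exp(-((feed_rate - 2.0) / 0.7) ** 2)

best = max(
    ((t, f, simulated_yield(t, f))
     for t in range(300, 601, 10)
     for f in [x / 10 for x in range(10, 41)]),
    key=lambda point: point[2],
)
print(f"Best settings found: {best[0]} C, {best[1]} units/s, yield {best[2]:.2f}")
```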

DB2

AI-driven humanoid robots are just starting out. As they come off the production line they can be trained for a variety of jobs. Tesla is already doing it. Greenfield ahead!

The Captain

How can they be trained? Under current methods they need gobs and gobs of data, and that data doesn’t exist for doing tasks in the real world.

For example, you can stick a robot (or, more realistically, a bunch of cameras) in a poultry-processing plant and watch the workers break down chickens - but that doesn’t provide any of the tactile or haptic or weight data that a robot would need in order to actually do that job.

So where do you get the massive amounts of data, when it currently isn’t being collected?

In this case it would have to be through experience and reinforcement learning. Let them hack enough chickens, get enough good/bad feedback, and hope they eventually figure it out. Very, very costly way to train them, by the way.
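
A minimal sketch of that kind of trial-and-error training: a bandit-style loop that tries actions, scores them good/bad, and shifts toward whatever earns good feedback. Everything here (the action names, the feedback function) is hypothetical; real robot RL is vastly more complex:

```python
# Minimal reinforcement-learning sketch: try actions, score them good/bad,
# and shift toward what works. Purely illustrative.
import random

actions = ["cut_angle_30", "cut_angle_45", "cut_angle_60"]  # hypothetical cutting motions
values = {a: 0.0 for a in actions}   # running value estimates
counts = {a: 0 for a in actions}

def feedback(action: str) -> float:
    """Stand-in for an inspector's good/bad signal; 45 degrees works best in this toy world."""
    return 1.0 if action == "cut_angle_45" and random.random() < 0.8 else 0.0

for _ in range(1000):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    a = random.choice(actions) if random.random() < 0.1 else max(values, key=values.get)
    r = feedback(a)
    counts[a] += 1
    values[a] += (r - values[a]) / counts[a]  # incremental average

print(values)  # the 45-degree cut should end up with the highest estimated value
```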

Count me in on the trough of disillusionment side of things.

1 Like

The same way they collect data for FSD. People disdained the robots at the RoboTaxi show because they were teleoperated. Tesla EVs are also teleoperated, except the teleoperator is sitting in the driver’s seat. In the poultry-processing plant you put some teleoperated robots to work and they get “all the tactile or haptic or weight data that a robot would need.”

The Captain

1 Like

But that’s not the same way they collect data for FSD. It’s much more difficult and expensive. Collecting data from cars is cheap - it’s entirely passive, doesn’t require any changes in how the car is operated, and you have thousands and thousands of them.

Collecting data from teleoperated poultry robots isn’t at all like that: you’ll need to replace super-cheap laborers with more expensive teleoperators, those teleoperators won’t be as fast or efficient as the laborers they replace, and you’re only going to be able to have a small number of them (large numbers would impair the production of the poultry plant).

The rough analog is if Tesla couldn’t collect FSD data from any of its customers, but only from cars that it owned being driven by its own employees.
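
To put some rough, entirely made-up numbers on that cost difference:

```python
# Illustrative comparison of passive fleet data collection vs. paid teleoperation.
# Every figure below is a made-up assumption, just to show the shape of the gap.

fleet_vehicles = 100_000            # hypothetical fleet logging passively
fleet_hours_per_vehicle_day = 1.0
fleet_cost_per_hour = 0.05          # assumed marginal cost (bandwidth/storage)

teleoperators = 50                  # hypothetical operators in one plant
teleop_hours_per_operator_day = 8.0
teleop_cost_per_hour = 40.0         # assumed loaded labor + equipment cost

fleet_hours = fleet_vehicles * fleet_hours_per_vehicle_day
teleop_hours = teleoperators * teleop_hours_per_operator_day

print(f"Fleet:  {fleet_hours:>9,.0f} hours/day at ${fleet_cost_per_hour:.2f}/hour")
print(f"Teleop: {teleop_hours:>9,.0f} hours/day at ${teleop_cost_per_hour:.2f}/hour")
```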

1 Like

The ImageNet database was built on a shoestring budget, using low-wage foreign labor to label images that were, basically, obtained for free. FSD was trained by normal drivers, for free. LLMs were trained on internet data, most of it copyright-protected, without compensation. Would AI have succeeded if not for this? And is this a good thing or not? My views on all of this are seriously changing.

You can use the workers themselves as teleoperators. Dress them up in a suit that sends the signals, and they do their work as usual.

https://www.researchgate.net/figure/Teleoperation-exo-suit-and-humanoid-robot_fig2_320084042
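
A sketch of what the data capture might look like in software: the suit streams joint angles and grip force while the worker does the job, and each frame is logged as a training sample. All field names and values are hypothetical:

```python
# Hypothetical logger for an exo-suit teleoperation session: each frame of
# joint angles and grip force becomes one training sample. Illustrative only.
import json, time
from dataclasses import dataclass, asdict

@dataclass
class SuitFrame:
    timestamp: float
    joint_angles: list[float]  # e.g. shoulder/elbow/wrist angles in radians
    grip_force_n: float        # grip force in newtons
    task_label: str            # e.g. "debone_chicken_step_3" (hypothetical label)

def log_session(frames: list[SuitFrame], path: str) -> None:
    """Write one JSON line per captured frame."""
    with open(path, "w") as f:
        for frame in frames:
            f.write(json.dumps(asdict(frame)) + "\n")

# Example with fabricated readings:
log_session([SuitFrame(time.time(), [0.4, 1.2, 0.1], 12.5, "debone_chicken_step_3")],
            "session_001.jsonl")
```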

The Captain

1 Like