I’m reminded of the fiber-optic build-out during the dot-com boom. Companies everywhere were installing fiber, anticipating an explosion in demand.
But they overbuilt. Capacity far exceeded demand, resulting in a lot of “dark” fiber.
I think we anticipate the same could happen with AI. But no one knows when; we find out one quarter, or one year, at a time. For now, growth and rapid expansion continue.
I can’t imagine that a random poultry processor could do the job of a teleoperator without some significant training, or that they could process the poultry as efficiently using teleoperation as they could by hand - if the haptic feedback is even precise enough for this to work at all, or without custom-designing the teleoperation suit to fit the user.
That’s demoralizing. It’s as bad as being told “we are letting you go, now go train your replacement before you leave.” Will they be paid more to be both a chicken-chopper and a robot trainer?
I get the point here, but I wonder if it is really true. If a human shows up with no prior experience, they can be shown what to do, start doing it under supervision, and pick up those non-verbal, non-visual things necessary to do the task along the way. Of course, the human may have related experience that makes the training easier, even without direct experience of this particular task.
But might the same not also be true of a robot? Basic generalized training at the source - recognizing weights, familiarizing it with motions and forces, etc. - and then some visual training and supervised experience “on the line.”
Same data set, bigger computer cluster, better FSD
Seeking Alpha comment:
Some important (for the Bulls) or trivial (for the Naysayers) information about Tesla FSD for Seeking Alpha readers:
Tesla FSD 13 took a couple of months to develop using the Cortex cluster and the Dojo supercomputer in Giga Texas, utilizing approximately 50,000 NVIDIA H100 GPUs and 20,000 units of Tesla’s custom AI hardware—a fraction of the time it took to develop FSD 12.
As of December 13, 2024, xAI’s Colossus supercomputer holds the record for the highest number of GPUs in a single system, with over 100,000 NVIDIA H100 GPUs. It is designed for large-scale AI research, including developing advanced AI models, supporting Tesla’s autonomous driving, and working on potential Artificial General Intelligence (AGI) projects.
Answer this question: how long does it take to train a young human to drive a car? Something on the order of 40-100 hours? Or 16 years?
Hint: that 16 years includes a lot of skill development, such as object detection and identification, risk assessment, judgement, hand-eye coordination, etc. etc. etc. And all that is leveraged into learning to drive the car.
Well, but hardly 16 years of 24x365-intensity training, and no small part of those years goes to developing self-mastery of basic tasks that the robot would be able to do “at birth.” Plus, in the same sense that FSD is simply installed in a car after being developed over time on a much bigger computer, doesn’t it seem likely that one or more “basic skill packs” could be installed in the robot without requiring any training?
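Back-of-the-envelope, just to put rough numbers on that comparison (the waking-hours figure is my own assumption, not a measurement):

# Illustrative arithmetic only: waking hours accrued by age 16 versus
# the 40-100 hours of formal driving instruction mentioned above.
waking_hours_per_day = 16                  # assume ~16 waking hours/day
lifetime_hours = 16 * 365 * waking_hours_per_day
print(lifetime_hours)                      # 93,440 hours of general experience
print(lifetime_hours / 100)                # ~934x the upper-end lesson count

Even discounting sleep, the general “pre-training” dwarfs the driving lessons by roughly three orders of magnitude - which cuts both ways in this debate.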
A very subjective metric. While most people only see the end result of any particular AI tool, the big gains in LLM capability have spawned many new hardware projects, software framework projects, dataset tools, and software in general.
So while teams of developers are working on faster, more efficient methods for filtering bad data out of datasets…no one on the outside really sees this.
Big teams of hardware and software engineers are collaborating across many big companies to define more efficient standards for new hardware-supported, lower-precision data types that can more than double throughput during inference (a rough sketch of the memory math is at the end of this comment)…no one will see most of this for an entire hardware design-and-validation cycle.
Researchers are using new profiling tools to analyze what actually goes on inside the crazy math of deep neural nets, and are designing new models that are more efficient - smaller memory footprint, lower compute requirements, higher accuracy…again, no one will see this until new training hardware arrives, new massive AI clusters are built, and new models get trained and tested.
Lots of new research is going on combining text with images and video…beyond all the deepfakes. But these gigantic models need to be trained in big datacenters and then optimized to run inference on phones and AI PC laptops that only use the cloud for a fraction of the queries.
End-user-visible results are likely to be very lumpy.
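To illustrate the data-type point with toy numbers (the model size and bandwidth here are assumptions for the sketch, not any specific product): at a fixed memory bandwidth, halving the bytes per weight roughly doubles how fast you can stream the model through the chip, and streaming weights is the bottleneck for much of inference.

# Illustrative only: why smaller data types can roughly double
# memory-bound inference throughput. All numbers are made up for scale.
bytes_per_value = {"fp32": 4, "fp16": 2, "fp8": 1}
params = 70e9                    # assume a 70B-parameter model
bandwidth = 3e12                 # assume 3 TB/s of memory bandwidth
for fmt, nbytes in bytes_per_value.items():
    weight_bytes = params * nbytes
    # Token-by-token decoding streams the full weights once per token.
    tokens_per_sec = bandwidth / weight_bytes
    print(f"{fmt}: {weight_bytes / 1e9:.0f} GB of weights, ~{tokens_per_sec:.1f} tokens/s")

Hardware support for the smaller types adds compute gains on top of this, which is where the “more than double” comes from.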
Of course. If there’s one constant with Elon Musk, it’s that he’s consistently inaccurate in assessing the pace of technology for AI systems. He’s been wrong for almost a decade in predicting when autonomous driving would be here. So I think it’s very likely that he is woefully underestimating what will be necessary to develop AI for robots.
You wouldn’t be as rich as he is (no one is), but you might indeed be richer than you are now. You can’t have successes without risking mistakes - the more mistakes you’re making, the more chances you’re taking.
But just because Musk is very successful while making lots of mistakes doesn’t mean his mistakes aren’t mistakes. He can be very optimistic about new technologies, but that doesn’t mean every Hyperloop or Solar Roof will come to fruition.
I don’t think Musk expects that his robots will need years of training by employees in full haptic teleoperation suits before they can do a job. I think he’s just optimistic that such steps won’t be necessary.