But I think you only see it that way because it has been like this your whole life. If you go and study how ENIAC worked, the formulas were essentially hard-wired. There was no terminal where you typed in a program in roughly English-like text, and no compiler to translate it into the machine's language.
No, it can't. But the tools and processes used to create and train DALL-E are pretty much the same as those used for most other AI tasks. There are multiple similar tool sets, each better at different things, namely "frameworks" like TensorFlow, PyTorch, and MXNet, just as there are different programming languages one might use to solve other problems: Fortran, C/C++, BASIC, Pascal, Python (and a hundred others).
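To make the "same tools, different tasks" point concrete: whichever framework you pick, training reduces to the same loop of forward pass, loss, gradient, and parameter update. Here's a minimal sketch in plain Python with made-up toy data (no framework, single parameter); DALL-E runs this same basic loop, just at enormous scale:

```python
# Toy illustration of the training loop every framework (TensorFlow,
# PyTorch, MXNet) wraps: forward pass, loss, gradient, update.
# Hypothetical data: learn w in pred = w * x, where the true w is 2.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (x, y) pairs with y = 2x
w = 0.0          # the single model "weight"
lr = 0.05        # learning rate

for epoch in range(200):
    for x, y in data:
        pred = w * x                 # forward pass
        grad = 2 * (pred - y) * x    # d(squared error)/dw
        w -= lr * grad               # gradient-descent update

print(round(w, 3))  # converges toward 2.0
```

The frameworks just automate the gradient computation and run millions of such updates on GPU clusters; the shape of the process is identical.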
I think that was and is the point of Tesla's AI Day: to recruit the best of the best to solve these last-few-percent problems and build it. Not easy, for sure, but probably not rocket science either. I doubt they will meet the stated timeline, and I suspect the first products will be much more restricted in what they can do. Maybe working one stage of an assembly line in a car factory.
I recall working on big SW projects with a few dozen people where the compile and build took overnight, never mind the slow regression testing and manual testing after that. Progress was very slow because of the long turnaround just to test the simplest things, and lots of time was spent trying to optimize the infrastructure. Many current ML models take hours, days, or even weeks to train, so iterating to fix things takes a LONG time. Improving that by 10x, for example, can buy you more than a 10x productivity gain, simply because the programmers can try more things.
Did you see the Tesla Dojo project?
What genius college grad wouldn’t want this?
Mike