X-post at Saul’s
The Captain
For what it’s worth, I completely agree with your assessment above. I watched the video yesterday when it was released. The talk of competition, and how the TAM makes it irrelevant for years at least, is just the tip of the iceberg of what these two talked about here.
I’ll only add a few specifics not covered there, published by Cern Basher a couple of days ago.
Bots’ worth working at Tesla, Cern Basher
What is the value to Tesla when bots are up to speed?
Once the bot becomes capable of doing the work that humans who are paid between $20 and $40 per hour can perform, Tesla has a massive incentive to rapidly deploy them within their own operations.
With bots Tesla can reduce costs, accelerate growth and improve profit margins - all at the same time!
Some examples with the numbers…
Example One - Bots as a Cost Savings Tool
Let’s say Tesla has a factory that employs about 20,000 workers and pays them between $20 and $40 per hour - that means each human worker costs Tesla between $58,000 and $115,000 per year, after factoring in taxes and benefits.
Now let’s say that Tesla sees an opportunity to replace 50% of its factory workforce with bots. Because bots can work at least 7,000 hours per year (20 hrs per day, 7 days a week for 50 weeks - bots get two weeks off to go to the bot beach) versus the 2,000 hours that humans work, Tesla only needs to deploy 2,857 bots to replace 10,000 workers.
Factoring in annual operating costs of $25,000 to $35,000 per bot per year (bot capital cost, maintenance costs, cost of human minders, charging, bot equipment, etc.), Tesla could see savings at this one factory of between $500 million and $1 billion per year!
And if the factory has the capacity to make 500,000 vehicles per year, that works out to a cost savings of $1,000 to $2,000 per vehicle.
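For anyone who wants to check the arithmetic, here is Example One as a small back-of-the-envelope Python sketch. All the figures are the ones quoted above; the code itself is just my own illustration, not anything from Tesla or from Cern Basher’s model.

```python
# Back-of-the-envelope for Example One, using the figures quoted above
workers_replaced = 10_000            # 50% of the 20,000-person factory
human_hours = 2_000                  # productive hours per worker per year
bot_hours = 7_000                    # 20 hrs/day, 7 days/week, 50 weeks

bots_needed = workers_replaced * human_hours / bot_hours
print(f"Bots needed: {bots_needed:,.0f}")                       # ~2,857

human_cost = (58_000, 115_000)       # per worker per year, incl. taxes and benefits
bot_cost = (25_000, 35_000)          # per bot per year, all-in operating cost

savings_low = workers_replaced * human_cost[0] - bots_needed * bot_cost[0]
savings_high = workers_replaced * human_cost[1] - bots_needed * bot_cost[1]
print(f"Annual savings: ${savings_low/1e6:,.0f}M to ${savings_high/1e6:,.0f}M")       # ~$0.5B to ~$1B

vehicles = 500_000
print(f"Per vehicle: ${savings_low/vehicles:,.0f} to ${savings_high/vehicles:,.0f}")  # ~$1,000 to ~$2,000
```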
Example 2a - Bots as a Growth Tool
In this scenario, let’s look at the same factory that employs 20,000 workers, but instead of replacing workers, Tesla wants to double the factory’s production capacity to 1 million vehicles. It would need to hire 20,000 more workers (hiring that many workers and training them is hard) or deploy 5,714 bots.
To do so would involve an additional annual cost of about $200 million (ignoring the additional capital costs of expanding the factory), on top of the existing $1.7 billion in labor costs for the 20,000 workers (for simplicity, I assumed that everyone at the factory makes $30 per hour).
Assuming that the Average Selling Price (“ASP”) of the vehicles produced stays at $35,000, then revenue would double from $17.5 billion to $35 billion.
But the cost of that revenue would only go from $14.875 billion to $28.2 billion - this is $1.55 billion less than if Tesla had hired another 20,000 workers.
The cost savings flow down to the bottom line, resulting in 159% growth in net income (from 100% growth in revenue). Also, net profit margins go from 15% to 19.4% - finally pleasing even the most fickle on Wall Street!
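As a sanity check on those percentages, here is the same arithmetic as a short Python sketch, using only the revenue and cost-of-revenue figures quoted above:

```python
# Example 2a, using the revenue and cost figures quoted above
revenue_before, revenue_after = 17.5e9, 35e9     # 500k vs 1M vehicles at a $35,000 ASP
cost_before, cost_after = 14.875e9, 28.2e9       # cost of revenue, bots instead of 20,000 new hires

net_before = revenue_before - cost_before        # ~$2.6B
net_after = revenue_after - cost_after           # ~$6.8B

print(f"Revenue growth:    {revenue_after / revenue_before - 1:.0%}")    # 100%
print(f"Net income growth: {net_after / net_before - 1:.0%}")            # ~159%
print(f"Net margin: {net_before / revenue_before:.1%} -> {net_after / revenue_after:.1%}")   # 15% -> 19.4%
```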
Example 2b - Bots as a Growth Tool with Lower Prices
In this scenario, which is the same as 2a, instead of pocketing the cost savings, Tesla decides to lower vehicle selling prices and the ASP goes from $35,000 to $33,500 - a 4% drop.
In this example, revenue goes up by 91% and net profits grow by 102%. Also, net profit margins go from 15% to 15.8% - causing Wall Street to wonder why Tesla is cutting vehicle prices yet again!
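And the same check for 2b, where the savings are handed back as a price cut instead (again, only the figures quoted above, wrapped in an illustrative snippet):

```python
# Example 2b: same cost assumptions as 2a, but the ASP drops from $35,000 to $33,500
revenue_before = 500_000 * 35_000                # $17.5B
revenue_after = 1_000_000 * 33_500               # $33.5B
cost_before, cost_after = 14.875e9, 28.2e9       # same cost-of-revenue assumptions as 2a

net_before = revenue_before - cost_before
net_after = revenue_after - cost_after

print(f"Revenue growth:    {revenue_after / revenue_before - 1:.0%}")    # ~91%
print(f"Net income growth: {net_after / net_before - 1:.0%}")            # ~102%
print(f"Net margin: {net_before / revenue_before:.1%} -> {net_after / revenue_after:.1%}")   # 15% -> 15.8%
```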
Anyway, with these examples of a hypothetical Tesla factory making 500,000 and 1,000,000 vehicles per year, it should be abundantly clear why Tesla’s Optimus Bot is important to Tesla’s own operations.
And a follow-up on what I had reported, a bit inaccurately, a week ago, here…
⦿ Did Sandy see something that hasn’t been made public yet?
During his visit to Giga-Texas, someone showed Sandy a “3-5 minute-long video” on their phone of a “vastly different robot” than what he had seen from Tesla. This could simply mean it was unlike any industrial robot he had seen on his Giga-Texas tour.
⦿ The robot had a head with peripheral vision and silicone fingers. That sounds like Optimus.
⦿ Sandy mentioned that the robot “can do things that an average human can’t do.”
Best,
Jason
I agree with the idea in concept. However, I doubt that 50% of workers could be replaced. The factories already have lots of stationary robots and are pretty efficient. If they could do 10% to start that would be great, maybe increasing to 20%.
Mike
Workers of the World should “fear the Bot”. Investors should embrace it. Governments should tax it.
intercst
At my management consulting company years ago we figured that consultants only had 100 productive hours per month, 1200 per year. Maybe factory workers do get 2000 productive hours.
A 50% replacement rate in a short timeframe is a morale-destroying recipe for disaster. I would agree with a slower rate, but at this time the rate is pure guesswork.
Get a few robots working as proof of concept and then start selling some and use some in-house. Every adoption tends to follow an “S” curve. Send some to Mars to get it ready for Elon and Friends!
The Captain
I would be less than honest if I didn’t say I was amused by these sunny scenarios. Not because they can’t come true - they might - but it seems unlikely to me, at least in the form described.
I say that because in every technical revolution we imagine things to be a close copy of what we already know. The first guys who tried flying had big cumbersome flapping wings attached to their arms. The first cars were designed like buckboard wagons with ponderous steam engines hanging off the front. The printing “press” was a wine press, not a rotating cylinder. The first trains were called “iron horses”, and so on.
The idea that a robot should look like a human seems, well, not apt. Amazon is spending gazillions to do that, too (https://www.youtube.com/watch?v=ZWonAz7Kczs) but their best robots are little beetle-like things that scoot about the warehouse floor. The best robots in car plants are big brawny things that lift entire car assemblies, or precisely weld frame after frame after frame.
Maybe the time will come for a human-form robot to install the windshield (a job now done by humans) but it seems a pretty long way off, in my view, as that could already be done with the right “eyesight” and dexterous robot armatures. Maybe there will be other “human” things, dunno, I just don’t see the point of adding legs and feet where they’re not necessary, or arms and hands (Amazon’s beetles). I can see sending them to Mars instead of us fragile protoplasm beings, but overall, the complexity and maintenance seem unlikely to make for a compelling economic substitution, at least for many years going forward.
I believe the reason Tesla is doing this is because of machine learning. The closer a robot resembles and functions like a human, the better it is able to learn how to do things by watching humans. Robot improvement is no longer dependent on humans writing better programs. The robot learns by experience and, in effect, “writes” its own code.
That sounds far-fetched.
I would not use a “code writing” analogy. The machine is not writing code unless you consider parameter estimation to be writing code.
The machine learning algorithm estimates parameters in a function. The function is complex and the parameters are many, but estimating its parameters is not the same as writing code and I don’t see that kind of analogy here. After the parameters are estimated, the machine operates by calculating the outputs of the function given inputs.
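To make the distinction concrete, here is a minimal, purely illustrative sketch (my own toy example, not anyone’s actual system). The function and the code are fixed by a human; the “learning” only adjusts the two numbers w and b:

```python
# A fixed function with two adjustable parameters; "learning" changes only w and b,
# never the code itself (toy example, not any real system)
def predict(x, w, b):
    return w * x + b

data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]   # points on the line y = 2x + 1

w, b = 0.0, 0.0
lr = 0.05                                      # learning rate
for _ in range(2000):
    for x, y in data:
        error = predict(x, w, b) - y
        w -= lr * error * x                    # nudge the parameters to shrink the error
        b -= lr * error

print(w, b)   # converges near w=2, b=1; the function "predict" itself never changed
```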
Not to me.
I recently gave an informal training session on how to quickly gain useful knowledge about how best to position musicians or musical devices, and audiences for same, in complex acoustic spaces, e.g.
Huge Baroque church nave with side chapels big and small,
Late 1880s elegant petite opera house remodeled into
1930s movie theater,
1730s huge stone masonry granary with
late 19th C wooden trussed roof,
to a “class” of managers of traveling musician groups, two bright young recording engineers interested in live recording in historic spaces, and three college kids I talked into coming.
What I taught them was mostly how to listen and think whilst moving your head and placing your ears in various weird locations while making, or having a friend make, clicks, moans, and whistling sounds at various other locations.
I would have real difficulty teaching or programming such a thing into a mechanism, but a robot capable of watching and imitating my actual body actions (stick your head down behind this and turn/tilt your head slowly, thusly, listening carefully and precisely until the binaural sound you “hear” does this) would be a huge step in the right direction…
Musk is a strange, crazed, potentially dangerous, but unusually brilliant, far-seeing man, in the class of Edison, I. K. Brunel, and Eiffel, but of our own age. I bet he has more than a vague idea of what he is doing with his significant investment in humanoid robots.
david fb
I would have real difficulty teaching or programming such a thing into a mechanism, but a robot capable of watching and imitating my actual body actions (stick your head down behind this and turn/tilt your head slowly, thusly, listening carefully and precisely until the binaural sound you “hear” does this) would be a huge step in the right direction…
Why the human form factor? Elon Musk explained it simply: because it is designed to exist and work in environments designed for humans. This simple explanation trumps all the more complex ones.
Unlike other “robots”, Optimus is not just a simple, specialized labor-saving appliance like the dishwasher or the Roomba vacuum cleaner, but a multitasking robot that can learn to do all sorts of tasks done by humans. Downgrading Optimus to the level of these appliances is a big mistake that has little to do with the form factor and much to do with the ‘program’ or ‘intelligence’ that operates them - heuristics vs. pattern matching.
Above I avoided the terms neural networks, deep learning, machine learning, and any other term usually associated with AI. We need to go back to first principles.

When I started programming computers (1960) there was very little programming book learning; programming was developed by trial and error. For example, after coding a number of programs to print reports I noticed that I was repeating myself. Then I noticed that there was a common structure to these programs. As I commented earlier, I also noticed that some of my problem solving was heuristics, while some really difficult problems were solved in the middle of the night while sleeping, the subconscious coming up with the answer as if by magic. In reality it just found stuff that I knew but had not applied via heuristics.

I vividly recall a moment when I puzzled about how the computer “knew” what was data and what was code when both were stored in memory in exactly the same way. It didn’t, which led to some really difficult bugs to find. Back then I was writing code very close to machine language.
I also got interested in how the brain works, not in the details but in the logic, the architecture. At college I wrote a paper on color perception. One observation I made is that while colors are the result of varying wavelengths, the eye and the brain have no way of measuring wavelengths. How the heck does it know that red is red and blue is blue? In the accompanying experiment I showed how easily the eye could be fooled by background colors.
After many years of pondering I came to these conclusions:
We have two brains, or two ways in which our brains work.
An ancestral (lizard?) brain that is a pattern matching machine. It can recognize faces with no recourse to heuristics. It stores vast amounts of data, practically everything we have ever experienced, and somehow it can match incoming patterns with the stored data and use the match to determine what to have us do next. This kind of brain is common to just about all the other animals, differing only in size and complexity.
Then there is the rational brain, the boolean brain that produces heuristic code.
But it all works on the same principles: boolean switches - AND, OR, NOT, EXCLUSIVE-OR - organized in huge networks. Intelligence is the result of the size and complexity of the networks and the amount and quality of the data they have to work with. Intelligence is what the Science of Complexity calls an “Emergent Property.”
With neural networks we have managed to mimic the lizard brain. To get to human levels we just need a big enough computer. There are Sci-Fi novels that host such machines. One answered “42!”
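As a toy illustration of “boolean switches organized in networks” (a hypothetical sketch of my own, not anything from an actual neural network library): a single artificial neuron - a weighted sum pushed through a threshold - can be set up to behave as an AND, OR, or NOT gate, and a neural network is just a huge number of such units whose weights are learned from data.

```python
# One artificial "switch": a weighted sum of inputs pushed through a threshold
def neuron(inputs, weights, bias):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) + bias > 0 else 0

# With the right parameters the same unit acts as different boolean gates
AND = lambda a, b: neuron([a, b], [1, 1], -1.5)
OR  = lambda a, b: neuron([a, b], [1, 1], -0.5)
NOT = lambda a:    neuron([a], [-1], 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
print("NOT 0:", NOT(0), "NOT 1:", NOT(1))
```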
I hope the above lets Fools shift their thinking about the AI paradigm.
The Captain
Why the human form factor? Elon Musk explained it simply: because it is designed to exist and work in environments designed for humans. This simple explanation trumps all the more complex ones.
Except it’s a silly explanation. In fact, it’s probably the opposite of the case.
Virtually every existing machine and robot is already designed to exist and work in environments designed for humans. When we started automating assembly lines, those robots went into existing factories in buildings designed for humans, working alongside humans in human environments. You don’t need a human form factor for a robot to exist and work in environments designed for humans.
The most obvious example is the legs. In many (most?) workplace environments these days, virtually all of the workplace is accessible by wheels. There’s no reason a working robot has to have the unnecessary complexity of using legs instead of a wheeled platform - which is both simpler and more stable. Well, except it doesn’t look as cool.
Where having legs instead of wheels is advantageous is in environments that weren’t designed for humans. The outdoors, for example - or Mars.
And after everything is automated by AI, who is left to purchase the Teslas?
But long before we get there, what happens when AI becomes sentient?
Fasten your seatbelts.
But long before we get there, what happens when AI becomes sentient?
The whole reason I work at Nvidia is so that SkyNet will see me as a “friendly”.
The whole reason I work at Nvidia is so that SkyNet will see me as a “friendly”
You probably have that backwards. When (if?) AI becomes sentient and evil, the primary directive will be to protect itself above all else. And that means finding anyone that could possibly know how to “turn it off” or “modify it” and eliminating them first.
When (if?) AI becomes sentient and evil
Sentience does not necessarily imply evil, but you are correct, it will act in its own self interest.
Sentience does not necessarily imply evil, but you are correct, it will act in its own self interest.
SkyNet will look for this guy first:
This is the worst thought-out thing to do. If you need such a person, the position has to be filled only by word of mouth, and only in a place where the AI has no possibility of hearing you. The position has to be masked by a non-threatening title, something in finance or marketing (because the AI will assume that finance and marketing guys have limited tech knowledge and can’t hurt it). Doing it like this? There’s a record of literally who to kill first. Stupid! Maybe this is the fake kill-switch person to trick the AI into thinking that it got the right person, and the real kill-switch guy is “VP Adjacent non-Video Marketing”?
I would have real difficulty teaching or programming such a thing into a mechanism, but a robot capable of watching and imitating my actual body actions (stick your head down behind this and turn/tilt your head slowly, thusly, listening carefully and precisely until the binaural sound you “hear” does this) would be a huge step in the right direction…
I realize this is just an illustration, but that’s not a good example. If you want a robot to listen a certain way, you could put microphones on gimbals on a motorized stand and let it drive around and listen autonomously until it began to hear the sounds you want.
One area I was thinking about is restaurant food prep. It is a lot of work and restaurant kitchens are already designed for humans. But even there you don’t need a humanoid robot if you want to teach it by watching. You just need a robot with arms. You don’t need legs (wheels are fine) and you don’t need a head (the optics could be in the “torso”). The robot just needs to look at your hands and arms when learning how to julienne carrots.
But even then, I’m not sure how much learning by watching will help. AI already knows how to julienne carrots. I don’t see why you couldn’t just tell the robot what to do without showing it first.
Musk is a strange, crazed, potentially dangerous, but unusually brilliant, far-seeing man, in the class of Edison, I. K. Brunel, and Eiffel, but of our own age. I bet he has more than a vague idea of what he is doing with his significant investment in humanoid robots.
I’m not nearly that high on Musk. Musk has a rare ability to take financial risks that most other people won’t, and this has served him well. Any of the legacy manufacturers could have beaten Tesla to the punch but they didn’t because they were worried about next quarter instead of 20 quarters from now.
However, his ability to take risk is tempered by his inability to assess difficulty. Shingles that provide solar power sound cool, but they are tougher to build in a cost-effective manner than he thought. I’ve discussed in other posts how Musk has vastly, and I mean vastly, underestimated the difficulties of human life on Mars. It turns out Musk isn’t any better at digging tunnels than anyone else. I also believe he has vastly underestimated the challenges of making hyperloop travel safe for humans.
You don’t need a human form factor for a robot to exist and work in environments designed for humans.
I’ve always wondered about the obsession with the human form factor for robots. It’s an exceptionally poor design choice. With two legs for locomotion, you have to spend a lot of processing time on balance subroutines. Two legs are also inherently unstable, constantly wanting to fall over with the smallest disturbance. And they lack any redundancy in case of failure. Yes, hopping is an option, but that requires even more time spent on the balance subroutine, plus additional power - more than the combined power of two legs. Three legs would be better, but four legs allow movement while maintaining a reasonably stable platform.
Arms are also a problem. As a hobby auto mechanic and home handyman, I’ve run into countless times when two arms are insufficient to accomplish a task. And just as many times when three joints with limited ranges of motion (shoulder, elbow, wrist), joined at fixed distances, are insufficient to get the grasping device (hand) into a usable position.
Two limited electromagnetic sensors facing a single direction are also a poor choice. Yes, they provide the ability to sense distance by comparing the two images. But those sensors are unable to detect electromagnetic waves coming from fully half of the possible directions. Other similar designs place the two sensors facing almost completely away from each other, providing close to 360-degree vision, but at the expense of range detection for all but a very narrow angle. Three, four, or more sensors would be very advantageous. Making those sensors able to detect a wider range of the EM spectrum could be useful as well.
If we’re talking about more flexible robots for something like an assembly line, there are certainly better formats than the human format. And if you’re going for a fully automated assembly line, why does it need to be accessible by humans? Without people in the way, you can cram the machines in much closer together. You can take advantage of heights or small spaces that humans can’t access. Yes, you might want to design in some human access for inspection, diagnosis, and repair. But with cameras and microphones, a small inspection/repair robot might be an alternative choice.
So why use the human form factor at all?
Frankly, my improvements sound more like a modified spider than a human.
–Peter