Are We on the Cusp of AI-Controlled, Autonomous, Automated Factories?

NVIDIA today announced that major Taiwanese electronics makers are using the company’s technology to transform their factories into more autonomous facilities with a new reference workflow. The workflow combines NVIDIA Metropolis vision AI, NVIDIA Omniverse™ physically based rendering and simulation, and NVIDIA Isaac™ AI robot development and deployment.

“AI for manufacturing is here. Every factory is becoming more and more autonomous due to the transformational impact of generative AI and digital twin technologies,” said Deepu Talla, vice president of robotics and edge computing at NVIDIA.

Back to 2% interest rates at the price of higher unemployment?

3 Likes

The next step will be AI automated robotic consumers.

2 Likes

The next step will be AI automated robotic consumers.

Every time I buy printer ink at Office Depot, they try to sell me their automated ink resupply program.

Steve

Eli Lilly is building a $5.3 Billion factory in Indiana to make weight-loss drugs. It only requires a staff of 700 people to keep it humming 24/7.

This brings the total investment at the Lebanon IN facility to $9.0 Billion. Eli Lilly said 900 employees, including engineers, scientists, operating personnel and lab technicians, will staff the site when it is fully operational.

So at our current state of automation, a $1 Billion investment only creates long-term employment for about 100 people.
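That jobs-per-dollar figure follows directly from the numbers quoted above. A minimal sketch of the arithmetic, using the $9.0 billion total and 900-employee staffing figures from the posts:

```python
# Rough jobs-per-investment math from the Eli Lilly figures quoted above.
total_investment = 9.0e9   # total investment at the Lebanon, IN facility, in dollars
total_staff = 900          # employees when the site is fully operational

jobs_per_billion = total_staff / (total_investment / 1e9)
print(f"{jobs_per_billion:.0f} long-term jobs per $1 billion invested")  # -> 100
```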

Eli Lilly to invest another $5.3 billion in Indiana plant to expand Mounjaro, Zepbound supply (msn.com)

intercst

4 Likes

How many AI bots will need weight loss drugs in the future?

1 Like

That’s why we need universal health care – to allow the unemployed to be able to continue to afford their $1,000/month Zepbound prescriptions.

intercst

1 Like

AI consumers?

I read something about 47% of the internet being bots. AI will push that up into the 90%-plus range.

AI bots are not honest…all of them will need weight loss drugs.

Thus has it always been, and probably always will be. Factories will always invest if something makes their process more efficient, but only if it’s efficient enough to justify the cost over time. Sometimes that results in layoffs of people who used to do what the machines now do; sometimes it means the company can produce so much more that the people currently employed just get redeployed into doing other things.

Luckily we have found that there are whole new worlds in “leisure time,” and entirely new industries have popped up - employing even more people doing things that had never been done before. That requires owners to grant additional time off, reducing the hours demanded by industry one and allowing workers to partake of industry two.

That happened in a big way when Ford went to 8-hour shifts to run 3 of them around the clock (and of course with the rise of unions over the next several decades). Prior to that, most factories worked 10- or even 12-hour days. Eventually Saturdays got dropped from the workweek, and over the rest of the 20th century things got tinkered with around the edges. Some places let workers slide to 35-hour weeks. Sometimes you could eat while you worked and still count it as work time. And then … that sort of progress stopped, or at least decelerated dramatically.

(Side note: in China and other developing countries they are just learning the joys of “leisure time” and finding new service industries to serve it. We’re so far down that road that we have services that book our leisure travel where we have people who cook for us, entertain us, change our sheets for us, all on giant ships constructed expressly for the purpose. For example.)

It doesn’t have to mean “unemployment.” In fact it can mean “lots of other, different employment,” hopefully run in a competent, ethical manner.

3 Likes

I am with a major firm. My position is not all that and a bag of chips. I do well but nothing to write home about.

We have a computer in our office that is not running well. We have had someone in to service it every other week for a year. The computer people are not the original developers. The computer people are just taking up a seat and getting paid. Seriously.

If you want ethical behavior which person will you find for that?

I’ve been thinking about AI a bit lately (hasn’t everyone?). While AI is clearly impressive, and will change a lot of things, I am not sure how quickly it’ll really happen. There will definitely be companies that embrace it wholeheartedly and go ahead with installing it and allowing it to control more and more operations. However, I can easily envision an AI screwing up big time. Maybe it’ll ruin a big batch of expensive pharmaceuticals. Or maybe it’ll completely destroy an entire pharma plant, or worse, maybe it’ll formulate a drug that initially appears to be effective but long-term causes great harm. Or maybe it’ll cause unexpected adulteration in the food supply (like adding glue to pizza or similar). When something like this happens, I would expect a great halt, with all AI plans put on hold until the issues can be investigated. I wonder if it is even possible to prevent this from occurring in the first place? If you put too many limitations on an AI, then the “I” part can’t really flourish. And then it isn’t worth nearly as much as you expect it to be.

1 Like

Why would the FDA and other regulators change the way that drugs get approved? I don’t think that they should care (much) whether the drug was created by AI, hundreds of scientists working hard for a decade or some happy accident (penicillin as an example). A new drug should have to go through the same trials that prove its safety (within some level of certainty) and its effectiveness before general approval.
No?

Mike

1 Like

This is the thing I worry about. I call it AI’s sock puppet moment. Something that makes it all come crashing down before rebuilding yet again. Its own version of the dot-com bust. I don’t know if it will happen, but I do keep a watch out for it.

In some ways the parallel does not exist. During the dot-com era we had lots of companies with no real business plan, no revenues, and lots of losses, but which were very highly valued. That was bound to fail.

At least right now there are some areas where ML/AI is already doing real work and making real profits. Not everywhere of course, but it’s already reasonably successful in many ways. Doesn’t mean we won’t get another Pets.com though. Sorry, that would be Pets.ai.

Because they are using AI to speed the process along!

It was just an example. There are all sorts of examples that can be envisioned. Allowing an AI to build a skyscraper, and allowing an AI to verify that the plans meet code. Allowing an AI to run parts of your military. Etc.

Apparently industry has invested $50B into AI so far. What is the ROI of it at this point?

Well, it’s been very high lately. Very high. I mean just look at… or at… wait a sec… I mean are you really doubting AI? The companies raking in massive ROI is a list, like a stellar list, possibly involving counting fingers on both hands. At least one hand.

Excuse me while I contemplate my sell orders for tomorrow morning…

No. Doubting AI would be similar to doubting the Internet in 1999. But that doesn’t mean that success (ROI) happens RIGHT NOW. It could happen 5 years from now. Or 10.

I worked for a startup beginning in the late 90s. We had huge dreams of corporate jets and options growing to tens of millions. Most of us were true believers and did not sell our shares (options) upon vesting. Two of the guys always said “options are part of our compensation” and they sold most of them each time they vested, and we were perplexed at their behavior. Well, us true believers had many of our older options expire worthless after 10 years due to a 90+% drop in the stock price. Later we got smarter and began selling shares periodically (“part of our compensation”).

The way I look at it is as follows. If you have Nvidia options, the bulk of the gains have already been accrued (100X or so). Now that it’s worth about $3T, it won’t see 10X again, it won’t see 3X again, heck it may not even see 2X again, from here. So lightening up and selling some is probably prudent, you can lock in the 100X, but even if it goes up, at most you’ll miss another 2X or less. I never say to sell all, because it’s good to participate in future upside (and you can’t really sell “all” anyway because vesting is usually 1/4 each year over 4 years, and each year there are new options received). So even if you try to sell “all”, you still have 1/4 of 2021, 1/2 of 2022, 3/4 of 2023, and all of 2024 options remaining.
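The vesting arithmetic in that last paragraph can be sketched out. A minimal illustration, assuming annual grants that vest 1/4 on each anniversary over 4 years (the schedule described above) and the grant years named in the post:

```python
# Fraction of each annual option grant still unvested, assuming 1/4 vests
# on each anniversary over 4 years (the schedule described in the post).
def unvested_fraction(grant_year: int, current_year: int) -> float:
    years_vested = min(max(current_year - grant_year, 0), 4)
    return 1 - years_vested / 4

current_year = 2024
for grant_year in (2021, 2022, 2023, 2024):
    frac = unvested_fraction(grant_year, current_year)
    print(f"{grant_year} grant: {frac:.2f} still unvested")
# 2021 grant: 0.25 still unvested
# 2022 grant: 0.50 still unvested
# 2023 grant: 0.75 still unvested
# 2024 grant: 1.00 still unvested
```

This reproduces the "1/4 of 2021, 1/2 of 2022, 3/4 of 2023, and all of 2024" remainder described above: even selling everything vested leaves you participating in future upside.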

1 Like

Yes, and the real crux would come from AI taking over the approval process, and I doubt that happens any time soon.

d fb

1 Like

Why do you doubt it? If a new drug takes 10 years to come to market today, and AI assistance makes the initial discovery part 10 times faster, so that discovery takes a month instead of a year, will we be satisfied for that new drug (that potentially saves/improves thousands of lives) to come to market in 9 years and 1 month instead of in 10 years? Or will we “use AI” to speed up the approval process as well?

As long as the speed-up in the approval process comes from using AI to crunch the data from the trials, fine. I would hope that the same set of human trials would still need to be done, which requires actual humans to actually take the drug (or placebo) for a given time period. After some time the data is analyzed to see if the drug worked and to check for the absence of significant side effects.
Note: exceptions exist for emergency approvals, such as the Covid vaccines.
IMO, we shouldn’t just assume that AI will suddenly make everyone at the FDA into idiots.
Instead we should look at what AI can do effectively (or not) and allow the technology to progress on tasks that are easily overseen by humans and/or have no serious repercussions if there is a failure. Things like identifying what is in your photographs or vacuuming your floor are on the easy side; when to push the nuke button is at the opposite end.
Weather prediction is a good candidate, since it is already known to be less than 100% accurate.
Early/earlier detection of cancer is a good idea too, but as an aid to a doctor, not a replacement.

Mike

1 Like

Would it? Or would it be similar to doubting blockchain in, say, 2019.

I’ve started to see some more skeptical takes on AI, because unlike the 1999 Internet, there might be some real limits to AI taking off. Specifically, data limits. Almost all of the high-quality data out there has already been “used up,” as it were; it has already been utilized in the training that got us to this point. All the human-generated data that was publicly available on the internet circa 2023 has already been fed into the AIs.

And we might be close to Peak Data, at least for AI training purposes. Owners of those huge datasets are starting to get antsy about having their data harvested for training, and the legal environment for doing that might never be this permissive again. The data itself may never be as good again either, because the rollout of today’s AIs is starting to push unidentified generated data into the set of all data, and things get real squirrely when models start training on already-generated data. AI ends up eating itself: it displaces humans generating data, replacing them with AI-generated data, which can’t then be used to improve AI.

While it’s true that today’s AI is the worst AI you’ll ever see, future AI may not be significantly better than today’s. We’re starting to see signs that improvement in AI performance is approaching a ceiling asymptotically rather than growing exponentially. IOW, after you go from no AI to mediocre AI, you need exponential amounts of additional data to get incremental improvements…which is just not that cost-effective.

So…maybe this is our blockchain / metaverse moment? Where something is hyped up to be revolutionary, and is certainly useful in some cases, but ends up being just not useful enough to be worth the effort? I don’t know - in the law, at least, a lot of us ended up trying out AI systems a bit and then just…stopped, because it wasn’t worth bothering given the possibility of catastrophic mistakes.

3 Likes