Robot teaches itself to make coffee

Sure - but factories that have assembly lines do have the space. Yes, it costs more, but the efficiencies to be gained from an assembly line are unparalleled.

Do you have more than one screwdriver in your shop?

Yep. That’s been my whole point. The brains that can be put into a humanoid robot aren’t anywhere near what they would have to be for the humanoid form factor to be useful. Enthusiasts are excited because they can use AI to replicate the sort of tasks that robots have been able to do for two or three decades, but that doesn’t change the status quo at all.


You are not listening to me, Albaby. What I said is that if I can have one tool that does it all, I would buy it.

No, that is incorrect. I do not know if you are right or the people in the video are right. What I am saying, and I want to be clear about this, is that I am open to the possibility that the people in the video are correct and you are stuck in the past.

Andy


Sure. But there exist screwdriver sets with removable heads, so you only need one tool to serve as a screwdriver for any job you might need. The problem is that those screwdriver sets, while maximally flexible, tend to not function as well as a screwdriver with a set head. So even though I don’t have a shop (or even a garage workshop), even I own a set of screwdrivers in various sizes with flathead and Phillips. Takes up room in the toolbox, but I’ll use a “real” screwdriver any day rather than a multi-tool screwdriver.

That’s why I asked - to illustrate that we don’t always choose flexibility over being designed for the specific use.

Oh, I’m open to that possibility as well - but I’d want to see some evidence of it before believing it. Namely, a robot doing something that it learned to do with AI that can’t be done with conventional programming. Putting a Keurig pod in a coffee maker is something that a high-school robot club could have done a decade or two ago.


Well, I always choose my drill with interchangeable bits. It is more flexible than a handheld screwdriver.

Can you show me a video of a high-school robot from 2 decades ago putting a Keurig pod in a coffee maker, or doing anything that resembles that? Or could you show me an example from 2 decades ago of a robot correcting its mistake when it couldn’t insert the Keurig pod correctly, as shown at 5:06 in the video?

Andy

Sure - here’s a video of an object being placed in a similar-sized location from a 2014 robotics competition (one of the first things that came up):

Obviously, Keurig pods didn’t exist back then.

No - I don’t think there would be any videos of that. Most robots of that era would be designed to do the job exactly right with an exceptionally low error rate - they’d pick up the object and put it exactly in the right spot within tolerances every single time. If it didn’t go in the right spot, they’d just start over - grab the object, return to start, and try again.

The video looks super cool, but only because the robot looks humanoid. If that was just a disembodied “robot arm” mounted to the table with a camera locked on the “wrist”, using a pincer to pick up the pod, it wouldn’t look very futuristic at all - like this video from a few years ago:

…or this effort from 2016, when Google was the company dominant in one field (search) but was spending tons of money on side projects to be the Next Big Thing in some unrelated moonshot field like autonomous robotics:

Again, this just looks like incremental change in neural network robotics, dressed up in an Omni magazine cover aesthetic.

That is 1 decade, not 2, but I see your point.

Well, a robot that can see it made a mistake and then correct that mistake is more than incremental. It shows that there is a process there. Was it programmed or was it learned? If it was learned, it is a huge step forward; if programmed, then you could call it incremental. We are going to know by the end of the year who is correct.

Andy

I guess? I mean, robotics labs have been doing that sort of stuff for years:

Why by the end of the year?

Now we are down to years instead of decades, and that talks about AI improvement. See what is happening?

They are talking about having one up for production this year. Probably not perfect, but good enough. We will see.

Andy


No, I don’t. These guys have built a humanoid robot that is capable of doing stuff that…we’ve known how to do for several years already? How is that a big deal - again, other than cosmetically?

Oh, I have no doubt they’ll do that. “Fake it 'til you make it,” and all that. No matter how much (or little) this thing can actually do, they’ll sell some - and some people will buy them. That doesn’t mean that they have enough functionality to have any kind of significant impact on things. Again, it reminds me of the Segway - enthusiasts swore up and down that it would have a massive impact, but in the end the use cases were so limited it ended up being little more than a niche product, despite some very impressive technology.


Which is part of its programming. It is essentially using “trial and error” to eventually get to a solution. If one way does not work, it tries another. Otherwise, it could never learn.
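That kind of programmed retry is just a loop. A minimal sketch of the idea - where the function names, offsets, and success rate are all hypothetical, just to illustrate "if one way does not work, try another":

```python
import random

def attempt_insert(pose):
    """Stand-in for the robot's insertion attempt.

    A real controller would command the arm and check a sensor;
    here we just simulate success or failure at random."""
    return random.random() < 0.3  # hypothetical 30% success per try

def insert_pod(max_tries=10):
    """Programmed trial-and-error: perturb the approach pose and
    retry until the pod seats or we give up."""
    for i in range(max_tries):
        pose = {"x_offset_mm": random.uniform(-2, 2),  # hypothetical jitter
                "y_offset_mm": random.uniform(-2, 2)}
        if attempt_insert(pose):
            return i + 1  # how many tries it took
    return None  # gave up

tries = insert_pod()
```

The point is that nothing in a loop like this requires learning - the retry behavior is fully specified in advance by whoever wrote it.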

I have been watching the evolution of this area for almost 40 yrs now. Several people I know were working on early robotics, vision systems, and more. So it is about 4 decades–thus far–and counting.

In the video they said they would train it by vision and not by trial and error. The robot watches a human do the job, or a human directs the robot on how to do the job for a few hours, and then the robot knows the job. Once the robot knows the job, it can send the job OTA to other robots instantaneously. This is all stated in the video, so like I said, if that is true it is game changing. We will see.
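If that pipeline is real, it is essentially imitation learning plus fleet-wide weight distribution. A toy sketch of the shape of it - every function, field, and filename here is made up for illustration, not anything Figure has described:

```python
def train_from_demonstrations(demo_videos):
    """Imitation learning in caricature: fit a policy that maps
    camera frames to motor commands from human demonstrations.

    A real system would train a neural network on the videos;
    here the 'policy' is just a dict of fake weights."""
    return {"weights": [hash(v) % 100 for v in demo_videos]}

def ota_update(fleet, policy):
    """Over-the-air update: every robot in the fleet receives a copy
    of the same learned policy - no per-robot retraining needed."""
    for robot in fleet:
        robot["policy"] = policy
    return fleet

# Train once from human demos, then broadcast to three robots.
fleet = [{"id": i, "policy": None} for i in range(3)]
policy = train_from_demonstrations(["coffee_demo_1.mp4", "coffee_demo_2.mp4"])
fleet = ota_update(fleet, policy)
```

The interesting part of the claim is the second function: once one robot has the weights, copying them is trivial, which is why "instantaneous" distribution is plausible even if the training itself is hard.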

Andy


Albaby, you keep glossing over the things you do not want to believe or that don’t suit your “vision”. This is not “stuff” that robots could always do.

Like I said, the video states the robot is trained by vision, not trial and error - it watches a human do the job for a few hours, then it can send what it learned OTA to every other robot instantly. If that is true, it is game changing. We will see.

Andy


Interesting. I made the mistake of looking at the actual video - the source - rather than paying much attention to the reaction video, where they were just speculating about what was going on. Remember, the video that was posted upthread wasn’t anyone from Figure, or anyone with actual knowledge of how the robot was trained - so I went to the actual video they were doing their “reaction video” to.

Turns out that all of this isn’t really driven by the video itself - which contains virtually no audio - but the tweet:

…where the CEO says that “Our AI learned this after watching humans make coffee.” Which…doesn’t really support all the breathless speculation from the two reaction video hosts? Because he’s not actually saying that they only used watching humans to train it. I mean, we know from the other video frames - the “learning to fix its mistakes” stuff - that there was some trial and error going on.

It certainly is somewhat interesting that the AI was able to take just video inputs and translate that into action. But again, at the highest level of generality, that’s what FSD and other proto-autonomy systems have been able to do for years as well. If the robot got an approximate sense of the task from watching, but required a fair amount of trial and error to get it right, it’s not quite as significant, I think.

One of the hosts is a robotics engineer who founded two robotics-based startup companies.

https://twitter.com/goingballistic5?lang=en

Which he seems to think it does.

Andy


Sure - he’s super excited about this stuff! But he doesn’t appear to have any information about the specifics of how that robot was trained (other than the wordless 80-second video and the handful of sentences in the Twitter post). It’s fun for a YouTube channel to sit and read the tea leaves for a bit, and the video footage sure looks cool.

It’s certainly, absolutely possible that a specialized AI/robotics shop like Figure might have done something truly groundbreaking - but also possible that they’re presenting the absolute best possible promo video with descriptions that are chosen to hint that they’re doing something amazing without actually committing to that (in a way that might lead to investor lawsuits).

Neither do you, yet he is an expert in the field…

Andy

Is he? He founded two software companies that made simulation software for manufacturing companies. Nothing in his background relating to AI or neural net learning, near as I can tell:

Though it appears he is a huge fanboi of humanoid robots.

Plus, I just looked up Figure - or rather, Brett Adcock the CEO, since there’s not a lot about the actual company. Adcock’s a serial entrepreneur, not a robotics guy - he founded a hiring marketplace company, then when that got bought out he founded an e-VTOL aviation company, and then when that went public he founded Figure a year or two ago - poaching a bunch of guys from the other AI shops.

So I think we should all view that promo video as being prepared to maximally entice the next round of VC funding and build buzz for hiring, and thus take with a grain of salt any elliptical claims about how it was actually trained.


Actually here is his background

Mechanical and Aerospace Engineer | Robot Offline Programming Pioneer | Factory Simulation Expert

And from your piece.

Scott: Before Visual Components, I founded a Michigan-based robotics company, Deneb Robotics, which kept me busy for over a decade. As a child I was supposed to become an astronaut, but my university classes introduced me to 3D CAD tools. I figured that robots are pretty cool, possibly even cooler than spacecrafts, and decided to give robots a few years. Well, I’m still here!

So I think you are trying to dilute his expertise. Why?

Andy


It’s a holdover from meteorologists or electrical engineers claiming expertise to deny climate change, or some neurologist using their medical background to hold forth on why Covid masks don’t work despite having no background in virology or epidemiology. And my own background of occasionally having to point out in public hearings that the resident who (correctly) said he was an engineer actually isn’t qualified to conduct a traffic analysis, because they’re a civil engineer and not a traffic engineer. So I’m always stirred to “look under the hood,” as it were, when someone notes that they have a technical background (like “engineer” or “doctor”) that can be in a completely different area than the one they’re talking about.

Deneb was also a factory simulation software company. If you wanted a program that would simulate fluid flow through your plastic press, for example, they were your guys. But they weren’t doing AI or designing robots. This guy is a software engineer who designed simulations for use in designing factory equipment - which certainly makes him an engineer, but an engineer who’s never done anything in the specific field of AI he’s talking about. It certainly gives him more background knowledge than, say, a lawyer - but it doesn’t make him more than a well-informed layperson on these topics.

So with a skeptic’s eye, the Figure video is basically a visual press release. They’re trying to hype up their business, which will help them with both recruiting and attracting later-round VC funding. Which is fine - Boston Dynamics used to do these highly-produced videos, too, to make people excited about their robots. But it’s not something that really signals that there’s been some dramatic change in the capabilities of these robots that warrants too much excitement.


Albaby, I am starting to realize that even if you have no idea what you are talking about, you will argue it into the ground. You are being pedantic. Like I said, we will have to agree to disagree.

Andy