One negative outlook on autonomous driving

Sorry, had to respond to this as well. Nobody has written a program “directly” in at least 50 years, and even then only fairly trivial programs. Every programmer uses a variety of tools. Usually compilers and interpreters of various invented languages. And then a variety of other tools to bundle up the programs, deliver them to where they will execute, and load them into the relevant computer. It’s immensely complicated. Most programmers haven’t much of a clue how it all works, just the bits they have to mess with. If you run into an “application programmer” or a “web programmer” you can be fairly sure they know almost nothing about how the underlying system actually works. There are many, many layers.

So there really is no “directly”, there are only various levels of indirectly.

And yes, the current direction is using tools that enable us to program a learning system that can learn to drive. Most important is that it can teach itself to drive through various techniques. The techniques are being refined and improved, as are the underlying learning approaches.

Nobody has yet figured out how to have such systems learn in just a few attempts, as humans seem to do. But the field is very new, effectively about ten years old, so huge improvements are being found all the time. That particular improvement, if found, will make everything much more effective, pretty much instantaneously. But there are lots of other potential improvements that will likely be found as well.

We’re currently close enough that just a few serious improvements will kick us into a different world. Turning a “gazillion” into a million would do that. Turning it into a thousand would be very scary. Might happen. What’s referred to as the “singularity” is when the learning machines themselves take over the engineering and figure out how to turn a million into a thousand. Some pieces of that are happening already.

-IGU-

Is that true?

As you know, I’m a lay person. Though I do know that software designers don’t write in machine language and use a variety of tools. My prior comments only meant (perhaps expressed wrongly) that, unlike a conventional computer program, where the designers make all (or nearly all) of the design choices in structuring how the software works, with machine learning and neural nets the actual function of the program remains a bit of a black box. At least, that’s my understanding.

But as far as software goes, is it true that there are no equivalents to the “laws of physics”? I mean, certainly the hardware will impose limits on what can be done with the software. My understanding is that the most important limit is time - software can only run so fast on any given hardware system, which is why quantum computing might actually offer some advantages over conventional computing. There are also inherent limits even in a purely “mental” framework - we can’t square the circle using a compass and a straightedge, we can’t break a code that uses a one-time pad. And the “don’t know how yet” can still be a “can’t” over a relevant time frame, or absent a major breakthrough in some other field, even if it is ultimately possible within the laws of physics. We can’t build an economically viable fusion reactor within the next five years, even if we might be able to build one within the next 50 years; mathematicians couldn’t prove Fermat’s Last Theorem in the 1800s, because decades of work in a host of other mathematical fields had to happen before any approach to the proof was even possible.

I don’t think the use of “can’t” is properly limited only to “things that violate the laws of physics,” or that it doesn’t properly apply to things we “don’t know how to do yet.” If someone were to say that we could solve global warming if we just switched to fusion reactors instead of fossil fuels, it would be a completely appropriate response to say that “we can’t do that, absent a fundamental breakthrough in fusion technology.”

AI cars might fall within a similar category, and we might still actually be many, many decades away from being able to do this (if we can ever do it economically).

There’s a video on the Munro Live YouTube channel that puts the ultrasonic sensor parts cost to Tesla at about $200 (https://www.youtube.com/watch?v=LS3Vk0NPFDE).

-IGU-

No, there aren’t software laws. Any limitations are all about what’s possible on the hardware in question.

Meanwhile, your examples all reflect a vague understanding of how things work, and mostly aren’t about software. A good understanding brings up caveats and exceptions. For example, we still have no way of knowing if there’s a proof of Fermat’s Last Theorem that doesn’t involve Wiles’s fancy modern techniques. I asked a mathematician I know about it and he went on for ten minutes about the assumptions involved and how to even go about phrasing the question if you want to be able to answer it.

So, no, it’s really the case that your understanding of software is insufficient for you to have a coherent opinion. If you want to avoid being “not even wrong” the only proper thing to say is “I don’t know”. And, really, I know only enough to confidently say that I don’t know either.

-IGU-

Why aren’t those ‘software laws’?

I mean, this seems more a matter of pedantry than any real engineering principle. If someone has obtained a data file that’s protected by a 256-bit key (using appropriate protocols) and asks their IT team if they can develop a program to brute-force crack the encryption, the correct answer is “No, we can’t.” Sure, technically it is possible under the laws of physics that one day humanity might develop super-computing technology so far advanced beyond what we have today that it would be fast enough to brute-force crack that encryption. But the context of the question makes it clear that those “caveats and exceptions” aren’t really what’s being asked about. If a second engineer corrected the first one to say, “Actually, we shouldn’t say we can’t, but rather that we just don’t know how,” they would be obfuscating rather than clarifying. We can’t brute-force break 256-bit encryption - not because the laws of physics prevent it, but because the type of computer necessary to do that just literally doesn’t exist yet.
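
To put rough numbers on that (a back-of-the-envelope sketch; the guess rate below is an assumption, roughly an exascale machine doing nothing but testing keys):

```python
# Rough arithmetic on brute-forcing a 256-bit key. The guess rate is an
# assumed figure (about 10^18 keys per second); being off by a factor of
# a million in either direction doesn't change the conclusion.

keyspace = 2 ** 256                   # number of possible 256-bit keys
guesses_per_second = 10 ** 18         # assumption: a billion billion keys per second
seconds_per_year = 60 * 60 * 24 * 365

# On average you search half the keyspace before hitting the right key.
expected_years = (keyspace / 2) / guesses_per_second / seconds_per_year
print(f"{expected_years:.2e} years")  # on the order of 10^51 years
```

For comparison, the universe is roughly 14 billion (10^10) years old, which is the sense in which “we can’t” seems like exactly the right phrase here.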

It’s the reverse of the old joke about a physicist, a chemist, and an economist who were stranded on a desert island with no implements and a can of food. The physicist and the chemist each devised an ingenious mechanism for getting the can open; the economist merely said, “Assume we have a can opener!” Insisting that it’s wrong to say that we can’t do something, and that we’re simply lacking some instruments or knowledge which might exist under the laws of physics but are well beyond our current level of technology, is analogous to that. Can openers don’t exist yet, so is it wrong to say we “can’t” open the cans, and right to say only that we don’t know how, because humanity might one day develop a can opener? That’s both technically true and utterly misleading - because implicit in the question is getting the cans open before we starve, not whether humanity might in the far future be able to open them.

You’re right that I know only a layman’s amount about software, which is why I have been asking these types of questions. “Do we have the ability to develop a self-driving car?” is a question that depends a lot on software capabilities - which are determined both by our knowledge of how to create software and by the hardware we run that software on. Both are always improving, but there’s only so fast and so far they will go in any particular time period. We’re not going to have quantum super-computing by next Wednesday (for example).

If machine learning programs can’t learn to drive, it might still be possible that some other type of computer that isn’t even close to existing yet might be able to do it. But that means that it’s more correct to say we can’t build a self-driving car until we have a fundamental breakthrough - not that we “can, but don’t know how.”

For reasons that I apparently am unable to explain to you. Sorry. I really am giving up now. Please stop it. Nothing you say amounts to anything more than that you don’t understand, which adds nothing to anybody’s understanding.

-IGU-

Fair enough - I suspect that you don’t understand it either, which means that neither of us is getting much from this conversation.

In the end, you can download new software, but you can’t download more MIPS or more RAM. And I suspect that the current hardware in ANY regular commercially sold vehicle will not be able to do true full self driving.

Yep. Barring some unforeseen discovery, this isn’t happening in my lifetime (~20 years, maybe 30 if I’m lucky). Not full AV without human supervision. No way. It’s far too complex a task, even if most of us make it look easy.

Seriously, you have no clue. Might happen next year, might happen in twenty. The nature of the “discoveries” needed means you have no idea.

And of course you’ll just say that that wasn’t foreseen. That’s not true at all. It’s truly easy to see that in a field as new, undeveloped, and wide open as AI, there will be many breakthroughs every month. The pace is such that researchers can’t even keep up with the relevant literature.

-IGU-

A lot depends on how much of the work left is dependent on discovery and how much is just new/difficult engineering work.
There is not a lot of discovery needed to identify another dozen or hundred edge cases for Tesla’s FSD or any other autonomous vehicle platform. (And does this get you to some arbitrary level of safety, like 99.9999%?) The question really is whether the “fix” needed for these is merely to add some good test cases to the training dataset, turn the crank, and do a new release, or whether a new AI model is needed that can accept all the old and new test cases and run on the same HW within the same time constraints.
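
As a sketch of the first, cheaper path (purely illustrative; the function and parameter names are hypothetical stand-ins, not anyone’s actual pipeline):

```python
# Hypothetical "turn the crank" loop: fold newly found edge cases into the
# existing training set, retrain the same architecture, and check against a
# safety target. Illustrative only; train_fn and evaluate_fn are stand-ins
# the caller would supply, and the 99.9999% target is just the number from
# the post above.

def turn_the_crank(train_fn, evaluate_fn, dataset, new_edge_cases, target=0.999999):
    """Returns the retrained model if it clears the target, else None
    (which would suggest the second path: a new model that still has to
    fit the same hardware and latency budget)."""
    dataset = dataset + new_edge_cases   # add the newly discovered failures
    model = train_fn(dataset)            # retrain the existing architecture
    return model if evaluate_fn(model) >= target else None
```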

Mike

I’m happy to assume that it requires new discoveries and complete reengineering and training. The evidence is that for Tesla, that will take no more than a couple of years.

So that’s the worst case from when the new discoveries are made. Personally, my guess is that they’ve already been made, but not yet found/tried by Tesla’s FSD team. After all, it’s been only five years since transformers (Transformer (machine-learning model) - Wikipedia) appeared, and that eventually changed everything that Tesla was doing (to some degree).

-IGU-

And neither do you. But the two present paths (Big Data and Machine Learning) aren’t going to come up to a human driver’s ability in 5 years. I’m not picking on Tesla (I know you get defensive).

You should really watch the program to get a dose of reality. Tesla -specifically- has made amazing strides, but autonomous driving without human intervention is not really on the horizon without some significant revelation. The program featured researchers who ONLY do this. It went into great detail about many of the problems any system faces, as well as the short-cuts that our brains take that the machines don’t seem to be able to master. As crappy as we humans are at driving, we’re still better than any AI.

Which doesn’t mean we won’t have self-driving within 5 years, even if that’s true. We just won’t have Level 5 self-driving within that time. Waymo and GM Cruise are already offering Level 4 driverless taxis in very limited areas (geofenced portions of Phoenix and SF, Waymo getting ready to expand to LA). Where we end up might be a situation where instead of cars that can drive themselves anywhere and any time, we have “zones” where cars are allowed to switch to self-driving. That might end up being most of the land area of most “newer” cities (where the streets are wide and on a grid, and pedestrians are few) but not perhaps the rabbit-warrens of irregular streets based on colonial cart paths swarming with pedestrians like the central business districts and historic areas of old cities.

It is important for developers not to get carried away with what they are able to do with machines. Humans are much better at some things, but we just get bored, distracted, or tired and convince ourselves we are OK. Machines don’t get distracted or tired, but are still not good at medium- to long-range anticipation of events that might happen, just as one example. So, although they have a much quicker reaction time, machines appear slow to react since they don’t (yet?) predict possible problems.

For example, if you see a car ahead changing lanes into what might be another car’s blind spot, you might take your foot off the accelerator, slowing slightly but also being more ready to hit the brakes. Autonomous cars don’t do this - maybe because they can’t make that prediction yet, but also because they don’t need to be more ready to hit the brakes; they can hit them faster than a human anyway. The overall experience in the car makes it seem like the AV isn’t paying close attention, because that anticipation is what we’d expect from a human driver.

Mike

Yes, the program mentioned weather also (with regard to sensors being blinded). I see Waymo vehicles in some areas of Phoenix. I even saw one at a charger at the Chandler Mall. When they can see the lines, they do well. I was referring to Level 5 (i.e. no human supervision). We are already at Level 4 in some special circumstances, as you say.

The AIs in the program were mostly machine-learning systems, and they had difficulty differentiating pedestrians from other objects in several circumstances (e.g. a delivery driver carrying a pizza box). What they could do was still impressive, but their deficiencies were very apparent in what they couldn’t.

You may recall I had visions of an end to truckers as AI took over, followed by personal vehicles as it was refined. Seeing the NOVA program changed my mind. The problem seems easy, but per the researchers on the program, it is extremely difficult. We don’t realize all that our brains are doing in the background that make driving easy for us.

Look, it’s nice that seeing a TV program changed your mind. But you started off knowing effectively nothing about machine learning and AI. And things are changing so quickly that anything more than a few months old is seriously out of date.

So you don’t know, I don’t know, even the experts don’t know which particular improvements will be the ones that matter. They, and certainly we, have no idea how hard the problems will turn out to be and how long they will take to solve.

So making statements about how long it will take requires hubris and ignorance.

All the cool kids today are using transformers in their ML programming. Here’s the seminal paper, all of five years old. Give it a read and see if you know any more when you’re done. There are lots of explainers out there if you find it too full of jargon.
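
If it helps make “transformer” concrete, the core operation in that paper is scaled dot-product attention. Here’s a minimal sketch (illustrative only; real models add learned projections, multiple heads, masking, and positional encodings):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V, the core operation of the 2017
    transformer paper. Q, K, V are (sequence_length, dimension) arrays."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # how strongly each token attends to each other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V                               # weighted mixture of the values

# Toy self-attention over 3 tokens with 4-dimensional embeddings.
x = np.random.randn(3, 4)
print(scaled_dot_product_attention(x, x, x).shape)   # (3, 4)
```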

-IGU-

The program aired 3 years ago. Yes, in terms of technology, that is forever. I do get that. I don’t know how long it will take. I’m just saying longer than I’ll be on this Earth. Our daughter might see it. Maybe. Or it may never work with silicon processors, and would require something completely new. Dunno.

It’s that last bit that’s the tough part. Getting where we are today was easy, relatively speaking. Interesting that the article talked about parallelism. That’s how our brains work, in general terms. I’m still betting I’ll never see it (true level 5) in my lifetime.

The first sentence is true.

If it happens before I die, I’ll submit a post that you were right and I was wrong. I’m betting I won’t have to submit that post. (Assuming TMF is still here in 20 years.)