Technology readiness levels

Waymo keeps running into things and now NHTSA is investigating. Fred proposes a graduated licensing process for autonomous vehicles. 60 Minutes Australia compiles a list of clips of Elon Musk claiming self-driving will be ready next year, but that next year never comes… maybe he's just a salesman? Tesla's automation software has been involved in 467 crashes, 13 of them fatal… and that is just in a 15-month period. Keep your hands on the wheel and your eyes on the road.

This week's links:

Subscribe using your favorite podcast service:

Transcript

note: this is a machine-generated transcript and may not be completely accurate. This is provided for convenience and should not be used for attribution.

Anthony: You’re listening to their auto be a law, the center for auto safety podcast with executive director, Michael Brooks, chief engineer, Fred Perkins, and hosted by me, Anthony Cimino for over 50 years, the center for auto safety has worked to make cars safer.

Good morning. Hey, good morning, everybody. Waymo. Let's start off with Waymo. Waymo we've talked about a lot on the show. They're an autonomous driving taxi service funded out of Google slash Alphabet. And we've talked about them and said, hey, they're better in ways than the other, lesser autonomous robotaxi companies.

And the reason we've said they're better is because they've been more open about their data, to an extent. They're a little more forthright in saying, hey, this is the data we've collected. And it turns out they're just as crappy as everyone else. From the Associated Press: in the past month, NHTSA has opened at least four investigations of vehicles that can either drive themselves or take on some driving functions, as it appears to be getting more aggressive in regulating the devices.

In this probe of Waymo, which was once Google's self-driving vehicle unit, the agency said it had reports of 17 crashes and five other reports of possible traffic violations. No injuries were reported, and these things can still stay on the road. Whereas if I had one traffic violation, I'd have to sit through an eight-hour driver safety course.

Can we make them sit through driver safety course?

Michael: No, there’s really no way to do that in federal law. Perhaps, the states have, which have some Regulation could put them through a safety course, but that’s probably gonna take a lot of movement in Arizona or California. And this investigation, you know, there are no, life threatening or severe injury crashes involved here.

It is a look at a lot of different scenarios. Most look like collisions with objects that aren't moving: gates and chains are cited, parked vehicles. Some of the even more concerning things: maybe they're not really obeying traffic signals appropriately. It's not really clear whether that's stoplights or stop signs.

And what we saw, we pointed out or found a post a couple of weeks ago that identified a Waymo driving the wrong way down the road. Yes, this is the unicyclist, right? It appears that caught the attention of the agency before issuing this, because they cite vehicles driving in opposing lanes with nearby oncoming traffic as one of the incidents that's involved here that wasn't reported by Waymo to NHTSA under the standing general order.

We’re going to see what happens here. It’s appears lately to have been, spurred to really look into what these companies are doing, when we talked about cruise back in October and all the things that went on there, I think that was a bit of a red flag planted in the ground to regulators at the state and federal level that they need to be doing a little more due diligence and ensuring that these companies are accurately reporting things and identifying safety hazards appropriately.

And maybe this is a part of that. This is NHTSA now broadening its investigations; we may come back to this later. They're looking into Zoox and they're looking into Waymo, and they have heightened some of their investigation into Tesla as well, on some of its claims to be autonomous even though those vehicles are not.

Anthony: So one of the things that Waymo does a lot, and it always sounds really good, is it says, hey, after 7 million miles that our vehicles have driven, we've only had 22 crashes, and they claim that they're four to five times safer than a human. And all of that sounds, I don't know, that sounds pretty good, right?

Fred, why is this not pretty good?

Fred: The devil’s in the details, as always. And the AVs so far are limited to very safe, Operating circumstances for the most part and very low speeds. So the experience that they’ve had at driving at low speeds and benign circumstances, mostly in Arizona, where you rarely have bad weather and you have carefully grid marked streets is really not relevant to a general population.

And it’s important that we don’t normalize this behavior, which is actually thousands of times more dangerous than human driver human driven cars by saying that this is fine. These are growing pains. This is okay. We’ve got to do this because there’s a wonderful utopia out there.

That’s going to be Jetsons for everybody and Rosie driving the car. But numerically, I guess I should stick to that, they’re much less capable than human drivers in circumstances that they know and incredibly bad in circumstances that have not been programmed and have not been put into the memory banks of the computers driving the cars.

So any assertion that they're X times safer than human beings, Elon Musk would say 200 percent safer than human beings, is just patently false.

Anthony: One of the most cynical arguments I hear is people saying that elevators used to require a full time operator because elevators are so dangerous.

And now we’ve automated elevators. And so cars, just some people are going to die until we have them as safe as elevators. And it strikes me as an incredibly dumb. Dumb thing because elevator operators weren’t there necessarily because it was complex It was because this is something new and scary to people He wanted yeah, look, I know what’s going on here.

And this is a luxury experience. I don't want you to press a button.

Fred: That’s like saying brighter is safe. So I’m gonna go ahead and let you drink snake venom, you know

Michael: This is just crazy. Elevators are moving one direction or two directions, up and down, and they're not dodging other elevators or humans or all other sorts of things, animals, potholes, in the process.

So that’s somewhat silly. On Waymo there’s, there is something important here that kind of extends beyond Waymo, NHTSA received a number of these reports through the standing general order, which is essentially a requirement that manufacturers of level two and above vehicles report incidents, potential crash incidents and other problems to the federal government.

The. And I’ll note here that, we talked about that incident with the unicyclists a couple of weeks ago, and that didn’t come into that didn’t come into the government or to NITS investigators through the standing general order. It is, it happened to be caught on camera and was posted to some Internet site and, the agency obviously stumbled upon it, but, it raises the question, of how many incidents.

are being missed by the data requirements for reporting under the standing general order. Does that order need to be expanded? We already know that it needs to be made permanent. Right now it's not a rule. There wasn't a rulemaking that came out to make it a permanent fixture.

And it’s a the next administration could come in and dump the entire thing out. And then the agency wouldn’t be getting any of these reports that they’ve used in a lot of areas. They’ve used in the Tesla investigation and the cruise investigation, and you’ll see the standing general order data cited in a lot of investigations involving automated vehicles.

So that’s important to note, the standing general orders is very important and yet it’s at the same time subject to political whims and there’s some improvements that can be made to it, to make sure that we’re getting accurate data.

Fred: There’s another problem with the standing general order, which is that it’s essentially a sleight of hand by the industry to say that we are going to normalize the operation of self driving vehicles, and, we’ll tell you how bad they are, but we’re not going to try to assure that they’re safe ahead of time.

Let’s just go ahead and conform to the standing general order, and that’s perfectly fine. That normalizes this behavior. That’s very dangerous. The cars have never been proven to be safe. By any standard the companies routinely refuse to define what safe even is. Normalizing this behavior is really a bad idea.

Anthony: You mentioned Cruise, so we have to give a little update on Cruise from their own blog. Starting this week, Cruise AVs will begin supervised autonomous driving in Phoenix as Cruise takes further steps towards returning to its driverless mission. So for those of you playing the home game, Cruise used to not have supervised drivers in their cars. And then they ran over a human being, dragged him, and lied about it. And then this little guy named Kyle got to resign instead of being fired. We'll get to Kyle in a second.

So this is Cruise saying, hey, we're putting safety drivers back in the car. And remember before, when we said we didn't need safety drivers? Ha! We lied. And we're gonna put them out in Phoenix, and as Fred pointed out, that's a lot of straight lines, wide roads, weather-consistent situations. So hey, good on ya, GM Cruise!

I don’t have anything else to

Michael: add. That’s all you’ve got.

Anthony: We’ll get to Kyle in a second.

Michael: The, they’re essentially starting back up again in Phoenix, and there’s not a lot to say here. Cruz is going to really have to do a lot of work to prove that they’re taking a new path. They’ve come out recently, and And, they’ve installed some new people at the top of the chain over a cruise and they’ve been, touting safety touting, that, once again, asking the public to trust them as they begin operating again this.

It’s good, that they’re going to have supervised autonomous driving rather than turning the vehicles loose on their own after the problems we saw in San Francisco. Having someone in the vehicle that can monitor the situation probably prevents you from, running over fire hoses and interfering with police officers, running into wet cement and, all that list, litany of things that crews encountered in San Francisco, a supervisory driver.

That’s a human doesn’t ensure safety by any means. That human is still subject to some of the same problems that we see in, in, level two vehicles where, you know the driver can become complacent. The autonomy can essentially trick the driver into believing it is more powerful than it is.

And that driver can essentially be taken for a ride in a death trap that they're unaware of. There's not much here to say other than folks in Phoenix need to be on the lookout for the Cruise vehicles and making sure that they're not causing some of the problems that we saw in San Francisco.

Phoenix is familiar with them anyway, because they've had Waymos running around there forever. And so the folks in Phoenix, who are, as Fred says, driving on straight roads in good weather, are used to seeing autonomous vehicles out there, because it's one of the easiest environments in the United States to test in.

They were so,

Fred: Sorry, I lost the thought. Back to you.

Anthony: Okay, this is completely not related to auto safety, but it's just too entertaining. Okay, so Kyle Vogt, the guy who was the CEO of GM Cruise, me and him, best buds. Now, after he got to resign, you know, which is the rich white guy version of being fired, he is starting a home robotics company to take care of those pesky chores that you don't want to do. So I'm expecting Rosie from the Jetsons to eventually kill Astro and for him to lie about it.

But hey, that’s just me.

Michael: Yeah, I’m still trying to figure out what robot I really need around the house.

Fred: Let me, I've got my thought back, let me get back into that, okay? Alright. It's good that they're gonna have humans driving their cars, but there's no path between human experience driving a car and AI capability of handling a situation that it's never been programmed to do.

This is exactly the problem with self-driving vehicles. They are incapable of reasoning and projecting in the same way that a human can from their experience into an unanticipated situation that challenges the safety of the people in the car and the people around the car. Missy Cummings,

who is at George Mason University, talked a lot about this based on her extensive experience inside the Department of Transportation reviewing AI proposals and AI technology as applied to self-driving cars. I really highly commend the presentation that she made to the IEEE in Ireland. It's accessible on the internet.

Everybody should, everybody who’s interested in this sector should really watch that because it’s very revealing about what AI is really all about. The challenges that it faces and the challenges that we face due to over reliance on what’s inappropriately called AI. Because it’s really not AI. It’s really not intelligence.

It’s merely correlation. And there’s a huge difference between correlation of previous experience and projection onto unanticipated circumstances. AI cannot bridge that gap. Through any technology that’s currently out there, generative AI stinks. People have, there’s lots of experiences about generative AI writing stuff that makes absolutely no sense.

and coming up with solutions that simply don't exist and can't be supported by any documentation. This is fine when you're writing a legal brief and a judge can crack your knuckles for presenting a crappy brief, but it's a different situation when you're projecting inappropriately in a situation where you've got a two-ton vehicle approaching pedestrians who are incapable of defending themselves.

Please, everybody, go do that. We left the link on our website, right? Right, Anthony, I think?

Anthony: it’s linked in last week’s episode description, but for you, I will link it in this week’s. Description as well Fred, but based off what you just said, so I take it you are no longer applying for a job as a safety driver for Wayne or cruise in Phoenix.

Fred: I tried, but they think I've got a sad amount of experience, and that disqualifies me.

Anthony: Hey listeners, now's a good time. Take a deep, cleansing breath. And go to autosafety.org and click on Donate. And then after you do that, continue that deep, cleansing breath. I don't know, inhale, exhale in between or something.

I don’t know, is cleansing breath just in or out? I don’t know. Michael’s doing it right now. It’s disturbing me. And share this with your friends. Tell them, hey, guess what I learned today? Artificial intelligence is more artificial. It’s the NutraSuite of intelligence. NutraSuite? I don’t know what I’m babbling about.

Who else doesn’t know what they’re babbling about? Come on. Any guess? Any guess? Anyone? Anyone? Come on. Going with Kyle? No, not going with Kyle. Elon. Yeah, Elon Musk. There we go. 60 Minutes Australia, which is just like the U. S. version of Australia, except the clock runs backwards has done what I’ve always wanted to do, but way too lazy.

They have this great piece on Tesla and their safety problems, and what they did is they compiled all of the clips since 2015 of Elon saying how autonomy is happening. And so I just wrote these down. In 2015, he said self-driving is a solved problem. In 2016: less than two years away from complete autonomy, safer than a human. In 2017: still on track to go from LA to New York by the end of the year, fully autonomous.

In 2018: by the end of next year, autonomy will be at least 100 to 200 percent safer than a human. We're going to come back to that one. In 2020: extremely confident of releasing full autonomy next year. And so they also looked at a 15-month period from 2022 to 2023, and in that period of time, this is what NHTSA has gathered.

In that 15-month period, there were 467 crashes and 13 fatalities involving Tesla's automation software. Ah, fuck that guy. Is that fair to say? So let's just jump back to what he said in 2018: by the end of next year, autonomy will be at least 100 percent to 200 percent safer than a human. That is the statement of an idiot.

That is the statement of someone who's never taken basic math or any sort of engineering class, or finished the third grade. This is the dumbest thing I've ever seen in my life. Keep in mind, he has a degree in economics, which makes me think maybe economists are full of shit too. Gentlemen, am I being overly dramatic?

Fred: I’ve always been puzzled by the absence of correlation between intelligence and wealth. Do you know this? Clearly I should have studied bullshit in school rather than engineering and physics. I’ve really missed a, I really missed a step here.

Anthony: Yeah, I agree.

Michael: That’s the more money than brains phenomenon.

I don’t know how anyone could make a statement like that or a series of statements over five years that are clearly, bullshit. How does he not know that this stuff isn’t working? Especially as far back as 2015, it’s clearly just a sales pitch. It’s meant to attract investors.

Investors are easily led around, as we see by Kyle getting 150 million in startup funding for a robot company with no product yet. So there's a lot of money out there floating around trying to find the next big bang in investment. And it's willing to take risks, and in many cases it's willing to take risks that impact the rest of us, which is exactly what we've seen with Autopilot and now Full Self-Driving: promising a lot, but ultimately luring owners and drivers into believing that line of nonsense and luring them to an early death.

So it’s one reason why we’re really hopeful that this is finally going to wrap its arms around Tesla and squeeze them really tight with this recall query that they announced a couple of weeks back, where they’re really looking into why Tesla didn’t fix autopilot in the recall that they performed and, it’s.

Elon’s not stopping. He’s still saying the same, singing the same old tune. When it comes to full self driving, he’s still continuing to claim that, all of your Teslas out there are going to one day be robo taxis and that’s. That’s complete nonsense. You didn’t hear it here first, but it’s complete nonsense.

And anyone who’s basing their investment strategy on that should pull out now. ARC investments. I’m looking at you because you guys are complete robo taxi and autonomy homers when it comes to Elon Musk. And I keep seeing articles from them that are touting. Tesla’s as robo taxis and Tesla’s being ahead of Waymo and these other groups that are actually trying to solve the real problems with full self driving and it’s disconcerting to see money supporting bullshit that way.

And it’s, it’s America.

Anthony: So I’ve pointed out and I, I should be ashamed to admit it, but I’m not. That, 2015, 2016, 2017, I thought, the auto industry is heavily regulated. This guy must be telling the truth. I fell for the whole thing, because like, how could you get all of this money?

How could you work in such a heavily regulated industry and make up this nonsense? Ford's not going out there saying, hey, our cars run off of dolphin tears, dolphin tear fuel will be ready next year. That would be insane. And essentially, that's what he's saying. Self-driving, in 2015, saying self-driving is a solved problem.

It wasn’t even a defined problem yet. It’s still, there’s so much of it that is undefined. It’s more of self driving is a problem. Not alone in this. An article in Reuters, U. S. prosecutors are examining whether Tesla committed securities or wire fraud by misleading investors and consumers about its electric vehicle’s self driving capabilities.

three people familiar with the matter told Reuters. Tesla's Autopilot and Full Self-Driving systems assist with steering, braking, and lane changes, but are not fully autonomous. While Tesla has warned drivers to stay ready to take over driving, the Justice Department is examining other statements by Tesla and CEO Elon Musk suggesting its cars can drive themselves.

And I gave five examples of him saying that over the years.

Michael: Yeah there’s, this is, this is good news. It’s late, we have been approaching regulators since the mid 2010s, 2016, 2015, and telling them, this is, It’s a line of BS and it’s going to get people killed. And now that it has, we don’t see, the safety authorities really, it’s seven years now.

It’s taken forever to get even to the point where NITS is saying, wait, your recall and autopilot is bad. The securities exchange commission has apparently under the radar here somewhat, we saw a report about a year and a half ago, I think it was as the article notes, October 2022 that there was a criminal investigation into tesla at the time our thoughts were great And this is finally going to levy some civil penalties That are big enough to convince elon to change his mind about all this stuff.

He’s spouting. That didn’t happen That’s not what apparently is happening. It’s the sec targeting tesla And it’s they’re literally trying to do, you know They’re trying to find situations where Tesla’s committed wire fraud or securities fraud on investors, which I would certainly agree that they have.

The interesting part of this to me is that this fundamentally is a safety issue that's killing people, and yet the criminal investigation here involves the securities and the money side of things. Where's the justice for the people who have been killed in or by these vehicles?

That doesn’t seem to be forthcoming from the federal government, not at least at this point.

Anthony: Yeah, so with that, Fred, the more time we talk on this show, the more I try and figure out what the point of self-driving really is, besides the 12-year-old boy in me's fantasy of this is what self-driving should be, and this is how cool it would be.

But let’s get into the TAO today, and let’s discuss the issue of graduated licensing.

Fred: You’ve now entered the DAO threat. Sure. All right many of our listeners, and I think both of you, and have a driver’s license, right? And let me ask a question. When you wanted a driver’s license, did the government say, okay, here’s your license.

You’re welcome to go out and drive on the roads as much as you want in any circumstances, and Yeah, we hope you don’t kill anybody, but we will assess your behavior and look at the number of accidents and collisions you get into after the fact and determine whether or not it was a good idea to let you drive in the first place.

Is that the process that you went through?

Michael: No. I had to wake up in the mornings in the summer and go to high school, which was no fun, and sit in a driver's ed class for four hours, then have lunch. And then in the afternoon go to a trailer down the road where they had all of these little, basically, driving simulators set up, where you had a little movie screen in front of you and you were driving. But it was cool, really. Yeah, we're talking about the 80s here, and that was when I was 14 and a half, because in Mississippi then the driver's license age was 15, and you were able to qualify for your learner's permit, I believe, at 14 and a half, and then get your full license at 15. Even in the small state of Mississippi, we had some fairly significant hurdles to get through before we were able to actually get out on the road in the car.

And I drove a lot. I drove around a lot with my parents in the vehicle during that learner's permit phase before I turned 15 and was able to get my freedom machine and do whatever I wanted, which is probably a little too young, seeing as how I scattered some speeding tickets and other issues during that first year.

Anthony: See, in New York, we had to, you had to, I know this answer from Phil Koopman.

I had to prove that I was human. Ha. I had to show my birth certificate at the DMV, take the written test, which I remember being very easy, and then you had to go sit in a classroom in a strip mall for five hours at night and watch five hours of safety videos, where a guy driving a convertible from the 1960s with the top down would show you how to adjust your mirrors to avoid blind spots.

And I’m like, yeah, you don’t have a top on this there’s no blind spots! What do you mean? This is bullshit! And then you had to take a road test, and it ended with the woman saying, you barely passed, you know what that means? You said, yeah, get out! Ha! And then for the first two years, they wouldn’t, you were not allowed to drive after 8pm or 9pm.

If you did, you had to have another licensed driver who was at least 18 years old in the car with you. There were a whole bunch of restrictions.

Michael: Yeah, there’s some states that have, you can only have, an X number of people in the car at the same time. I think a lot of that’s the data that shows that the more kids, young kids, you’ve gotten a vehicle, the, the more, the higher the chances of a traffic incident.

Fred: So my experience was similar. And I think in general, states require you to go through a graduated licensing program. So typically you start with a learner's permit, some training, you demonstrate to an authority that you're able to do the things that are required to get your learner's permit, and then get your driver's license, and somebody checks you and makes sure that, yes, you are human and you've got the requisite skills. But still, they don't quite trust you to have the requisite skills on the first day.

They say we’re going to give you a provisional license. We’re going to let you do a lot of the things, but there’s some things that are too hazardous. We’re not going to let you do those things until unless and until you prove that you can actually handle this lower set of requirements.

Safely. There’s nothing like that in the world of AVs. What the AVs do is they go to state legislators, for the most part, and NHTSA for the most part, and say, the future is going to be great, but in order to get to the future, we’ve got to just get out on the road and do things. We’re going to get some experience, and this experience is going to make everything wonderful, and in the future, everything will be better.

But as we’ve discussed, and as we’ve learned, there’s no way to do that. The AI Is never going to provide the requisite experience space that’s needed for computers to drive a car. It’s just not going to happen unless and until there’s a graduated program that says, okay, we’re going to give you a limited set of circumstances.

and we'll see how you do. And then if you do that, we will gradually expand the approved operational design domain, or the area in which you can operate. We're going to make sure that you don't operate outside of that, and within it, you're going to be safe. And as you've demonstrated the capability of doing that, we will gradually expand the horizons of what you're allowed to do.

So this is a graduated licensing concept that has applied to virtually every human being who's ever driven a car since, what, 1920 or so, when they started driver's licenses. Michael, I think it's something like that. So about a hundred years, and it's been a successful system. Not perfect, but a successful system.

So it’s a good idea, but how do you get there from here? Okay. What are the steps to doing that? We at the center for auto safety have proposed a graduate licensing program in which a panel of experts would look at progression from. One set of circumstances to another and make sure, as best humanly possible, that the cars are able to handle a limited set of circumstances before you approve them for a more extensive circuit set of circumstances.

For example, if you can prove that you're navigating the streets of Los Angeles appropriately, then maybe you would go onto the streets of Boston, which are a challenge even for safe human drivers, with an experienced driver in the driver's seat for some amount of time before the authorities would say, okay, we've now got the experience

to do a human evaluation of your safety responses in this new circumstance, to determine whether or not it's safe to proceed and to open up that operational design domain. This is humanly possible. It's done in a lot of circumstances. In fact, the Europeans are planning to do exactly this with their, let me look that up, with their Commission Implementing Regulation

laying down rules for the application of the regulation of the European Parliament as regards uniform procedures and technical specifications for the type-approval of the automated driving system of fully automated vehicles. That's a mouthful. We'll put the links up on our website as well, so you can get that.

But the key step here is that they acknowledge that there's a reason for a type approval of the automated driving system. Type approval, for those who aren't familiar with it, means that you look intensively at a given design and a given configuration, and humans determine whether or not it is reasonably safe to put it out in the public domain.

Now this regulation by the Europeans, which is still in draft form, it has not been accepted yet, goes into a lot of detail about the things that you should do, the things that the vehicle should do. It looks an awful lot like the AV Consumer Bill of Rights that we've put in place, excluding the parts about protection of intellectual property, protection of your credit cards, and all that sort of stuff.

So it’s just technical, but for example, it says verification validation by the manufacturer. Of the performance requirements, including the OEDR the HMI, human interface, the respective traffic rules, and the conclusion that the system is designed in such a way that is free from unreasonable risks for the driver, vehicle occupants, and other road users.

That sounds like the bare minimum you would want to have for a vehicle.

They go on to say, as part of the DDT, the ADS shall be able to basically do everything it is supposed to do, including activate relevant vehicle systems when necessary and applicable, activate wipers, open doors, and shall demonstrate anticipatory behavior, meaning that it can look at the situation in front of it and respond with low-dynamic longitudinal behavior, forward and back, lateral behavior, left and right, and risk-minimizing behavior when critical situations could become imminent. That seems like a reasonable thing to do. So go ahead.
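The staged approach Fred describes, approve a narrow operational design domain first and expand it only after an evaluator signs off, can be pictured as a small state machine. The Python sketch below is a hypothetical illustration of that idea only; the stage names, their ordering, and the sign-off rule are assumptions made for this example, not the Center's proposal or the EU draft text.

    # Hypothetical sketch of graduated ODD expansion for an AV operator.
    # The stages and the sign-off rule are illustrative assumptions only.

    ODD_STAGES = [
        "low-speed, fair-weather suburban streets",      # a Phoenix-style starting point
        "dense urban streets with a supervising driver",
        "dense urban streets, driverless",
        "mixed urban and highway driving, driverless",
    ]

    class GraduatedLicense:
        def __init__(self, operator: str):
            self.operator = operator
            self.stage = 0            # index into ODD_STAGES; start with the narrowest ODD
            self.signed_off = False   # evaluator sign-off for the current stage only

        def record_evaluation(self, passed: bool) -> None:
            # An independent evaluator reviews performance within the current ODD.
            self.signed_off = passed

        def expand_odd(self) -> str:
            # Advance to the next, broader ODD only after sign-off in the current one.
            if not self.signed_off:
                raise PermissionError("no evaluator sign-off for the current ODD")
            if self.stage + 1 < len(ODD_STAGES):
                self.stage += 1
                self.signed_off = False   # sign-off must be re-earned in the new ODD
            return ODD_STAGES[self.stage]

    permit = GraduatedLicense("ExampleAV")
    permit.record_evaluation(passed=True)
    print(permit.expand_odd())   # moves from suburban streets to supervised urban driving

The point of the sketch is simply that approval is per stage and non-transferable: doing well in the easy domain buys an evaluation in the next one, never blanket access.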

Anthony: It sounds good to me.

I’m curious. I’m going to jump in real quick. So we started off this episode talking about Waymo and how they were under investigation and they were primarily under investigation for their fifth generation. Software. So I have no idea what generation they have now. So with this graduated licensing, so if I’m, when Fred, you and I, we create our self driving car company in Florida, cause there’s no rules in Florida.

we come out and our Generation 1 gets this graduated license, passes it, and then we come out with Generation 2. Will that need to be certified again?

Fred: Yes, because it's a new driver, right? I agree, it's a new driver, and every new driver needs to be certified, needs to be certified to be reasonably safe and efficient.

So with this,

Anthony: I’m sorry, one more question related. So would this prevent these these companies from saying, Hey, we just released an over the air update. No, we didn’t check it. We’re just sending it to you. Cause in this case, they’d have to be like, Hey, we have an over the air update and it’s gotta go through this testing process to make sure it can drive.

Fred: One of the things that is included in this European regulation, again, in draft form, but it has a lot of great stuff in it, is a section on verification and validation. It says verification and validation by the manufacturer, including, and I've already read this actually, the et cetera, et cetera, et cetera, of the object and event detection

Michael: And response. Object and event detection and response.

Fred: So that is an overarching metric that describes the behavior of the system, including the computers, including the sensors, including everything that is involved in whether or not the vehicle responds appropriately to the circumstances that it senses.

So every time you change the software, you are potentially changing the OEDR, because that's the heart of what makes the OEDR characteristics observable. So if you go from generation X to generation Y, you need to requalify that; you need to do the verification and validation. Now, there are ways of doing that incrementally, of course; that's been around for a long time.

But I don’t think that these are unreasonable in any sense. And I think that there, that the. Automobile companies are quite familiar with them. In fact, there’s, there are parallel circumstances or parallel observations called technology readiness levels. And at each of the moments when you are looking at the ability of the car to be operated safely, You should also look at the technology readiness level to say, okay, is this something that’s just pie in the sky or is this something that’s ready to go?

So I’m going to give you 2 guys a quiz there. The general accountability office of the US government has come up with guidelines for technology readiness levels. And by the way, when these were developed, they had a participant from Waymo. They also had several participants from the Department of Transportation.

So it’s, nobody can say that these are unknown to the industry. Nobody can say that they’re unknown to the Department of Transportation.

Anthony: Now, this is an unfair question, but how many of the participants from the Department of Transportation showed up with their resumes asking for jobs at Waymo?

Fred: That I don’t know, but we do have their individual names.

So we could find that out if we had the energy to do that. But that's not where I'm headed right now. So with the technology readiness levels, there are nine levels, starting from number 1, which is basic principles, going to number 9, where the technology is refined and adopted, meaning it's been accepted, it works, and we're just updating it, so that's fine.

Now, especially given what we talked about earlier, the number of collisions and accidents and vehicles going the wrong way down one-way streets and the wrong side of divided highways and all that, I'm going to ask you if you think that these characteristics are satisfied. Are the operational environment, user community, physical environment, and input data characteristics, as appropriate, fully known for any of these vehicles? Michael, yes or no? I would say no. Anthony? I'm gonna agree.

No. All right. Was the prototype tested in a realistic and relevant environment outside of the laboratory? Yes or no, Michael?

Michael: Wow, that’s a really tough. How do you find a realistic environment that represents everything a vehicle might encounter on all roads in the world?

Fred: That would actually be an argument for testing it on the roads, I think.

Tested in a realistic and relevant environment. It wouldn’t have to be, but it could be.

Anthony: Is my realistic environment just restricted to Phoenix, where the roads are all perfectly wide and straight and the weather’s always sunny?

Fred: Good question, but it does beg the question of whether or not the manufacturers should be required to specify the appropriate environment before they go out on the road, doesn't it?

Yes. Next question. Does the prototype satisfy all operational requirements when confronted with realistic problems? Thinking of the cars crashing into fire trucks and going the wrong way on various streets, would you say this is a yes or a no? No, that's a no. That's a no. What you've just done is you've said that these vehicles are being deployed by the manufacturers, with the approval of various governments, when they have not reached TRL 6, or Technology Readiness Level 6, as specified in the GAO guidance for the Department of Transportation technology readiness levels.

Again, there are nine levels. Six is the prototype system demonstrated in a relevant environment. We would say, based on what we know now, that it's only at level five, which is integrated components demonstrated in a laboratory environment. I think it's safe to say they did that. Are the external and internal systems documented? Yes.

Are target and minimum operational requirements developed? I'd say yes. Is component integration demonstrated in a laboratory environment, i.e. a fully controlled setting? I'd say yes. Really, all of these that are on the street now are only Technology Readiness Level 5 based on the Department of Transportation technology readiness levels.

I want to emphasize again that Waymo was a major participant in the development of these standards, and none of the other automobile companies were, based on the document itself, and there were five people from the Department of Transportation who also made a significant contribution to this. My opinion is that if you're not going to have a graduated licensing program that uses technically qualified experts to determine whether or not these systems are offering reasonable safety, then you should rely on the technology readiness levels that have been developed and used and are in fact specified for the Department of Transportation in your determination of whether or not these vehicles are safe to release on the highways.

And at Technology Readiness Level 5, they are not. Now, at Level 6, there's a question of whether they should be, but Level 7 is the prototype demonstrated in an operational environment. What the companies are doing is they're jumping over TRL 6, they're going right to Technology Readiness Level 7, while claiming that they are qualified to be at Technology Readiness Level 8.

Now, technology readiness level eight says, are all system components form, fit and function compatible with each other and with the operational environment? I think clearly not, but that’s the claim that they make when they say they never drink, they never lose their attention. They never masturbate. I don’t know what the hell they never do.

Is the technology proven in an operational environment? Now they say, sure it is, because here we are driving down the streets of Los Angeles and we’re killing hardly anybody, so that’s fine. Was a rigorous test and evaluation process completed successfully? Oh yeah, we’ve proven that they never drink.

We’ve proven that they never lose attention. Everything’s great. Does the technology meet its stated purpose and functionality as designed? Sure! We have these, we have them out, we’re delivering pieces, we’ve got X millions of miles, everything is great. I, again, I just want to make the point that there is, in the world, a rational process for evaluating systems that are going to be used in the public domain.

They’ve been approved by the Department of Transportation, accepted by the General Accountability Office. They were 1 of the manufacturers, 1 of the major AV manufacturers that participated in the development. But they are not being followed, and they’re, the fact is, based upon reported information, that they’re at a technology level readiness of five versus what they would need to be technology readiness level seven, and there is no process in place in any legislature that I’m aware of to hold their feet to the fire and say, look, you guys got to do what you know how to do.

You’ve got to develop this in a manner that’s safe, that’s been established, that is supportable, and that is in conformity with. Development standards for every piece of technical equipment that’s been produced for use by the US government or related agencies for the last 35 or 40 years. It’s not happening.

We really need a graduated licensing program, and it needs to be tied to the technology readiness levels. And this should be implemented at both the state and the federal levels before approval is granted for these dangerous vehicles to be put out on the highway.
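To make the quiz above concrete, here is a minimal, hypothetical sketch of the readiness scale as Fred paraphrases it, with a deployment gate at the TRL 7 threshold he argues for. The level wording abbreviates this episode's discussion rather than quoting the GAO Technology Readiness Assessment Guide, and the scoring rule is an illustrative assumption, not anyone's official method.

    # Hypothetical TRL gate. Level descriptions paraphrase the episode's discussion
    # of the GAO scale; see the GAO Technology Readiness Assessment Guide for the
    # authoritative wording. The scoring rule below is an illustrative assumption.

    TRL_DESCRIPTIONS = {
        1: "Basic principles observed",
        5: "Integrated components demonstrated in a laboratory environment",
        6: "Prototype system demonstrated in a relevant environment",
        7: "Prototype demonstrated in an operational environment",
        8: "Components form, fit, and function compatible with the operational environment",
        9: "Technology refined and adopted",
    }

    # The readiness questions walked through above, answered for a hypothetical fleet.
    FIELD_QUESTIONS = [
        "operational environment and input data characteristics fully known",
        "prototype tested in a realistic, relevant environment outside the lab",
        "satisfies all operational requirements against realistic problems",
    ]
    LAB_QUESTION = "components integrated and demonstrated in a fully controlled lab setting"

    assessment = {
        FIELD_QUESTIONS[0]: False,
        FIELD_QUESTIONS[1]: False,
        FIELD_QUESTIONS[2]: False,
        LAB_QUESTION: True,
    }

    def assessed_trl(answers: dict) -> int:
        # Crude illustrative rule: lab-level evidence alone only supports TRL 5;
        # all three field-readiness questions must pass before claiming TRL 6 or above.
        if all(answers[q] for q in FIELD_QUESTIONS):
            return 7
        return 5 if answers[LAB_QUESTION] else 1

    level = assessed_trl(assessment)
    print(level, TRL_DESCRIPTIONS[level])            # 5, short of the TRL 7 argued for above
    print("cleared for public roads:", level >= 7)   # False under this sketch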

Michael: I’ll add to that, that last part by Fred is really important, more so than you might think, because, right now, graduate, graduated licensing, as it implies to humans, is sheerly a state based phenomenon.

Every state has its own rules. There are better states, there are much worse states, and when it comes to autonomous vehicles, it could be argued that California has a mini graduated licensing system. We've talked about how the Public Utilities Commission there and the DMV

sort of work together to allow Waymo or GM Cruise to expand their territory over time. And we see inevitably what happens when one of those manufacturers, Cruise, gets into trouble with those authorities: it immediately leaves that state for Phoenix, or some companies might choose Texas.

So if you’re going to have an effective system here, it seems like it needs to be implemented from top down from the federal level. Even though that’s not what the feds are used to doing, right? The federal level, they don’t do driver’s license. That’s state thing, but in the case of autonomous vehicles, if you don’t have a top down approach, then you’re going to have.

migration of these companies to states where there aren't any rules. So that's just, I think, an important note at the end of that segment.

Fred: And I think it’s important to note here also that senators have recently sent a letter to NHTSA saying, look, you need to do a vision test. You need to do better.

What they failed to do, though, is they failed to say that you need to tie your approval of these AVs to accepted, known, standard programs for the development of technology that endangers life or limb of the public or of the people who are using it. We would like to see them head in that direction.

Michael, do we have any gold bars in the CAS inventory that we can give to senators to get them motivated? That seems to be what people are doing these days.

Michael: I’ll have to check. We have some really heavy file cabinets that I haven’t looked into lately. So I’ll get back to you on that.

Fred: Worth checking.

Worth checking.

Anthony: Probably got those secondhand from the GAO. Ah, those file cabinets. So I've got to ask: these technology readiness levels sound like a great thing. It sounds like a smart approach, but if there's nothing being done with them, what was the point of it?

Fred: They’re being done excuse me, they’re being used extensively throughout the government for development of, yeah.

Anthony: In this specific case, for the AVs, why did Waymo even show up? Is it just a PR thing?

Michael: The GAO is typically more about doing research and background looks at how the government works and what federal agencies are doing. They don't have any particular rulemaking authority that could force the industry to comply with the readiness assessment, but they can

devote time and effort to producing a document that lays out what a technology readiness assessment would look like and how it functions, which is essentially what the GAO has done here.

Anthony: Okay. Hey guys, I think I’ve had enough talk about AVs today.

We could keep talking about AVs, because we've got more than enough information, we could keep going, but let's change gears a little bit. Let's take another deep cleansing breath, and let's talk about an article from the National Insurance Crime Bureau, which, up until five minutes ago, I didn't know was a thing.

And it’s my new favorite place to go. That’s right, the National Insurance Crime Bureau. And they have a report, the 2023 most stolen vehicles. Bum! And, if you’re a fan of this show, if you’re a listener to this show, or if you’ve just turned on the news you can probably guess who makes the top ten.

three spots. Have both of you looked at this list already? Can you guys play the home game? I know myself that I've read it.

Michael: Have you looked at it? I've definitely looked at it.

Fred: Do they get a group discount for having three in the top 10?

Michael: Oh, they’ve got six in the top 10, actually.

Anthony: So this was the TikTok fan favorite, the Hyundai Elantra: 48,000-plus thefts in 2023. Followed by the Hyundai Sonata. Followed by the Kia Optima. And those three are all affected by that TikTok-generation hack, right? The Kia is affected by that as well?

Yeah.

Michael: And then the byline of this article is sedans bump full-size pickups from top spot. It's not sedans bumping full-size pickups from the top spot, it's Kia sedans bumping the full-size pickups from the top spots. And this is what we expected for last year. There was really no question that Kia and Hyundai models were driving a giant increase in vehicle thefts across the country, and thefts are still higher than normal numbers in that area.

Because, frankly, not that many of those vehicles have been fixed yet. They haven't been repaired with the software patch, or there's even a hardware patch that's been released for vehicles that can't accommodate the software patch. And getting customers in to get those fixes applied has proven to be difficult for Hyundai and Kia, because they're not conducting this as a recall, which is something we continue to believe they should do. And the National Insurance Crime Bureau, Anthony, I would suggest, they shouldn't be on the top of your list.

I was looking at their staff the other day, and they are almost all former FBI agents there. So they're down there, and they know what they're doing over there. They've got it going on.

Anthony: I once had a visit from an FBI agent. And he had a much more ethnic Italian name than I did.

And first, he showed me his badge, and I was like, I don't know what these things look like. And then I saw his name on the badge. And I said, huh, when did they start letting Italians into this thing? He didn't find that funny at all. I still think that's hilarious.

Michael: They’re not known for their sense of humor when questioning suspects.

Anthony: I was not a suspect. I was not a suspect. Oh sure. No, I was a reference. No, we believe that. I was a reference. I was a reference and yeah, I was, the questions he asked me, I’m like, I think I violated all of these things. I can’t be an FBI agent. Anyway, what was surprising to me was number four on the list.

The Chevy’s. Silverado 1500, and it surprised me because I’ve never seen one of these on the road. Yes, I know I live in Manhattan, but I’ve seen F 150s, I’ve seen some Dodge things. I’ve never seen one of these things, and they don’t sell as well as the, oh, look at that, Ford F 150 is number nine on the list.

Is there some defect in the Silverado that makes it easy to steal? Like, the TikTok kids aren't stealing them. Or is it just for parts?

Michael: I don't know. It may be that they're a little easier to steal for some reason than the Ford pickups, but pickups have dominated this list for years. And there's really no obvious answer to that question.

Anthony: I understand there’s a Honda Accord and Honda Civic on the list. They’re that they’ve been stolen a lot because they’ve stripped them down.

Michael: There are a lot of Chevys still around us on the road. I think if you even compare it to many of the Kia models that are way at the top of the list, they're selling a lot more Chevy Silverado 1500s to the public every year than, I would think, any of those Kia models. Pickups are such a dramatically successful part of the market these days.

Anthony: With that, I think we have time for a couple of recalls. Let’s go to Recall

Michael: Roundup.

Anthony: Speak of the devil. Honda: 187,290 vehicles. This is the 2020 to 2024 Honda Ridgeline.

In low-temperature environments where road salt is present, the electric wiring of the rearview camera, of course it's the rearview camera, the tailgate wire harness, may fatigue and break in the presence of freezing water and salt with repeated opening and closing of the tailgate. These rearview cameras, this is just one of those things where, oh yeah, people are gonna use a truck in snowy conditions.

Fred: Yeah, you gotta recognize, this is an absolutely terrible environment for electronics. You've got, sure, basically molecular salt that's drifting around and covering everything. And yeah, they should do a better job, absolutely. But it's still a lousy place to put electronics, and I'm glad they put them there, but hopefully they'll seal them up better in the future with hermetic seals.

Anthony: Yeah. My frustration with this is that it's not a new problem. They've been building these things, cars, let's just say cars, for a long time. They know this.

Michael: I don't know. The one thing I'd note here: they continually note throughout their report that this is primarily a cold-weather issue.

And with global warming, it's not going to be a problem. That might be the case. But another clue is in the owner notification dates they provide. They say they're going to start reaching out to customers July 1st, but there's a range over which they're saying that's going to occur.

Usually when you see an owner notification date, it's one day; it would be July 1st, 2024. In this case, they've staggered it, it appears, from July 1st to October 1st, 2024.

That suggests to me that they're conducting this recall from north to south, or that they're going to start making these parts available in colder-weather states first, and owners in Texas and Mississippi and Florida are going to be waiting until October before they receive notification. So that's just a guess, because I don't see anything about that in the Part 573, but it appears that's the way they're going to conduct the recall.

Anthony: Well, Ridgeline owners, keep checking your mail.

Next one we have is from Winnebago Industries: 15,000-plus vehicles, the 2017 to 2025 Winnebago Revel, the 2022 to 2023 Winnebago Adventure Van, I didn't know they made vans, and the 2024 to 2025 Winnebago Ekko. That's spelled with three Ks. That's not problematic, right?

Michael: Yeah, that's what's in there. I cannot imagine they called it that.

Anthony: The Winnebago Vita.

Michael: It should be two K’s in there, the E K O. That was a very disturbing slip of the hand by whoever submitted that 573. Yeah. But this kind of goes back to our You know, what we noted a few weeks ago, RV follies and some of the decisions that are made when they’re building these things like they have routed the wires under your driver and passenger seat that, control your airbag system in a manner that when you are adjusting your seat, those wires can be damaged.

Does that sound like good engineering to anyone here?

Anthony: How much money did it save?

Michael: Who knows, but it's just a terrible way to go about building a vehicle. And it's something that we see frequently in the RV world, where they aren't really subject to as many quality checks as you might see at a typical vehicle manufacturer.

And they’re essentially building a vehicle on top of a chassis that’s been designed by and, put through some of the safety Stuff by another manufacturer. So it’s, again, it’s a warning to RV owners that you need to be really careful when you’re buying an RV and, maybe to do a little more than your due diligence to ensure that your RV is safe.

Fred: This is actually an example of the conflict between engineering and capital. As the saying goes, there's never enough money to do it right the first time, but there's always enough money to do it over.

Anthony: I think that’s Waymo’s motto now. Or, that was what got GM Cruz in trouble. Anyway, I think that’s that’s the wrap of our show. And what did we learn today, folks? Did we learn that Elon Musk is a car salesman with really expensive hair plugs? No! We’ve known that for quite some time. We’ve learned that, maybe AVs shouldn’t be on the road.

Maybe they need some sort of graduated licensing. I like that. Did you guys learn anything today?

Fred: I did. I didn’t learn, I, I learned that you were an FBI suspect. God damn it, that’s not true at all.

Anthony: Okay, look, you were the one chased down by a tank growing marijuana. No, what we learned today is that in the state of Mississippi, there was there’s trailers with video game systems that a 14 year old Michael Brooks got to play with.

You

Michael: It was fun, but it was, it was like summer school, but it was, it was interesting. You could learn a lot as a kid, in Driver’s Ed, maybe not about driving, but it was it seems like a relatively young age to, to start putting kids on the road at 14 and a half in retrospect.

I was all for it.

Anthony: They’re adjusting it for the average life expectancy of a Mississippian. That’s rude.

With that, thanks for listening to our show. Tell all your friends. Five stars, that's the deal.

Fred: Yeah. Bye bye. Bye everyone.

Michael: For more information, visit www.autosafety.org.

 
