Self-Driving Cars Explained

In this episode, guests Dr. Jeff Wishart and Phil Koopman explain the history of autonomous vehicles, why safety is really, really hard, why human drivers are better than they get credit for, and why it is pure speculation whether a self-driving car will ever be safer than a human.


Transcript

note: this is a machine-generated transcript and may not be completely accurate. It is provided for convenience and should not be used for attribution.

Anthony: You are listening to There Auto Be A Law, the Center for Auto Safety podcast, with executive director Michael Brooks, chief engineer Fred Perkins, and hosted by me, Anthony Cimino. For over 50 years, the Center for Auto Safety has worked to make cars safer.

External Video: Thank you for taking Johnny Cab. Thank you. Hope you enjoyed the ride.

Kit, there’s no one inside here, Michael. That’s impossible. A car doesn’t drive by itself, does it?

Anthony: Whether it’s Johnny Cab from the movie Total Recall, Knight Rider and KITT, or Herbie the Love Bug, for decades we’ve been hearing about self-driving cars and how they’re the next big thing.

And I’ve honestly been expecting it for quite some time. It doesn’t help when this guy starts speaking:

External Video: but eventually all cars will be self-driving.

Anthony: But we’re not gonna talk about Teslas, ’cause I hate to break it to you: they don’t actually make full self-driving cars. They don’t make autonomous vehicles.

They make adaptive cruise control just like everybody else. I know some of you are very disappointed, but in today’s episode I’m gonna dive in and find out: what really are autonomous vehicles? How long have they been around? Where are we now, and what might the future look like? Sit back and enjoy.

Dr. Jeff Wishart: Hi Anthony. This is Jeff Wishart. I’m with the Arizona Commerce Authority and Science Foundation Arizona, the state economic development agency and a nonprofit research organization under the ACA. I’m also adjunct faculty at Arizona State University in the automotive systems concentration.

Anthony: So can you walk us through the history of AVs? ’Cause it goes back much further than the early two thousands.

Dr. Jeff Wishart: Oh, yeah, yeah. Automated vehicles have been with us really since ancient times in some ways. Sailboats, you could argue, were the first automated vehicles, with the auto-tiller: a weather vane connected to the tiller with ropes to keep the sailboat traveling on a single course even if the wind shifts.

So you could consider that maybe the origin story of the automated vehicle. But the idea of ground vehicle automation really came about in comic books and science fiction novels starting around 1935. Norman Bel Geddes presented his vision of automated vehicles in GM’s Futurama ride at the New York World’s Fair in 1939.

His vision included speed and collision control, similar to railroads, and trenches to keep vehicles in their lanes. Obviously the computing of the time made this kind of fantastical and not really feasible, but the benefits of automated vehicles were known even then, right?

Reduced need for downtown parking, reduced collisions, higher efficiency and road density, reduced congestion, and even more family time. There’s a picture they published of a family playing a board game in their car as it drives down the road. So the concept has been around since the thirties.

And so for quite some time. The development of the digital computer in the 1950s and sixties really began to allow the technology to catch up with the ambition. The first efforts allowed cockroach-like motion: sensing, processing, and reacting. Sensing and reacting were possible with the technology of those times, but the machine intelligence of the processing step wasn’t there, so we really couldn’t do everything we needed in terms of perception and path planning. You could sense that something was there, but deciding what it was and what to do about it was not there.

Then Stanford comes around. They were one of the pioneers of robotics in the sixties and seventies, and one of the things they built was the Stanford artificial intelligence cart, a little robot cart that could sense and do a lot of the automated vehicle functions to a reasonable degree.

But when it comes to an automated vehicle on the road, the pioneer, I don’t know if I’d call him the inventor, but the pioneer is probably Ernst Dickmanns of Germany. Starting in 1980, he had a Mercedes van packed with computers, able to travel hundreds of miles of automated driving over in Europe.

And he developed a few of these vehicles from 1980 into the nineties. Coming over to the U.S., I think the first one I’ve seen was the Carnegie Mellon Navlab 5 team’s automated vehicle. It traveled from Pittsburgh to San Diego, averaging about 60 miles per hour, and ninety-eight percent of the journey, which was almost 3,000 miles, was completed via automated driving.

So you can see they were able to accomplish a lot, even back in the nineties before a lot of the machine intelligence really took off. But it’s that last 2% that’s very difficult. So those are the kind of beginnings of the automated vehicle industry.

Anthony: That final 2% that Jeff just mentioned, that’s a recurring theme when you talk to people in the autonomous vehicle industry.

It’s that little technical hurdle. It sounds little, but it’s seemingly insurmountable. And to find out a little bit more about that final little hurdle, we’re gonna turn to Philip Koopman,

because coming into this, I remember 10 years ago telling my kid, I was like, your generation won’t even have to learn how to drive these things. I believed the hype and whatnot. And then,

Philip Koopman: oh man, you drank, you drank the whole gallon of Kool-Aid, didn’t you?

Anthony: I totally did.

Philip Koopman: I’m Phil Koopman. I’m a professor at Carnegie Mellon University.

I’ve been working on self-driving car safety for more than twenty-five years. That’s before the DARPA Grand Challenge. Before all that stuff, there was a project called the Automated Highway System back in the nineties, and I was the CMU safety guy on that project.

Anthony: Yeah, I was like, this is coming, of course. Why couldn’t they drive themselves? And then the more time I’ve spent on this... a lot of the argument on this is you can just get into your car, fall asleep, read a book, watch a movie, wake up, and you’re at your destination. And in my mind, I’m like: that’s a bus.

Philip Koopman: You can actually do that today.

The problem... people feel justified in doing that today, and we’ve seen videos of people doing that today. The problem is you don’t always wake up alive at your destination. That’s the issue, right?

Anthony: Let’s go back before I’m falling asleep in my car and hopefully waking up at my destination.

So from the nineties, you went into the early two thousands with something called the DARPA Grand Challenge. Can you talk a little bit about that?

Philip Koopman: That was like a next generation of researchers and they had to relearn a lot of the same lessons. So we knew back in the nineties that if you gave the driver automated steering, they were gonna tune out.

And Waymo apparently had to learn that for themselves and said, oh, who knew? It’s like, we were trying to tell you and you didn’t listen to us. That’s not literally it, but it’s almost true; you get the idea. We won’t go into the conversations, but we knew all these things back in the nineties and they had to be rediscovered. Now, to be sure, there was a big change with the DARPA Grand Challenge and Urban Challenge: the technology had progressed, the computing had progressed.

So the very first Navlab: when I was a grad student, I was walking through the park on my way to teach my eight A.M. recitation as a TA, a teaching assistant. And I assure you the students were not happy. But I would walk past Navlab 1, which was this big bread truck full of computers with roaring fans, creeping down the sidewalk at one mile an hour.

So things have changed dramatically. And a lot of it’s just more compute, but also the whole advent of machine learning really made a difference. It made a lot of things possible that they hadn’t figured out. But machine learning and statistics are great at ninety-nine point nine percent. They have a lot of trouble with 0.00000, I’m not gonna count the right number of zeros, 1 percent. Statistics aren’t terribly great at that, but that’s where safety lives.

Anthony: Okay. So that 0.0001 event...

Philip Koopman: not near enough zeros. Yeah.

Okay, good.

Anthony: Okay. There’s a limited amount of time for me to count out zeros here. We’re just having fun.

Philip Koopman: We’re just having fun. So these rare events that humans seem to be okay with... humans are remarkably good. They’re not perfect, but they’re remarkably good.
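[Editor’s note: to put some very rough numbers on Phil’s point about zeros, here is a minimal back-of-the-envelope sketch. The 99.9% figure and the roughly one-fatality-per-100-million-miles benchmark come from this conversation; the 12,000 miles per year and the idea of counting "serious failures" per mile are editorial assumptions for illustration only.]

```python
# Rough arithmetic behind "not near enough zeros" (editorial illustration).
MILES_PER_YEAR = 12_000                 # assumed typical annual mileage for one driver

# A system that handles 99.9% of miles without a serious failure sounds good...
p_fail_per_mile = 1 - 0.999
print(p_fail_per_mile * MILES_PER_YEAR)       # ~12 serious failures per driver-year

# ...but human drivers average very roughly one fatal crash per 100,000,000 miles.
p_fatal_per_mile = 1 / 100_000_000
print(p_fatal_per_mile * MILES_PER_YEAR)      # ~0.00012 fatal crashes per driver-year
# That gap of several orders of magnitude is "where safety lives."
```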

Anthony: Okay. So what’s an example, like a common example, where these AVs, these computer systems, fail?

Philip Koopman: There’s a story, and the stories I can give you have mostly been fixed, ’cause I’ve told the stories and people have heard me and fixed them. But there’s this story that when Waymo was starting, they said, we are not gonna worry about construction zones because they’re rare. Okay? And for an individual driver, for any particular hour of driving, unless you’re on certain stretches of interstate in July during road destruction season, right?

Unless you’re in those places, construction is really rare. Accident scenes, crash scenes, are really rare. But ninety-nine percent doesn’t even get you in the ballpark of stuff that every driver has seen many times. So the first hurdle is getting past the stuff that is rare on an hourly basis, but not rare on a driver-year basis.

Okay. One driver in one year can expect to see a lot of stuff. And the car companies are basically there: not driving into wet concrete, stopping at school buses, being able to handle a police stop without running away from the police officer. And I’m calling back to several events that have been in the news.

Okay? Yeah. Or, tragically, hitting a pedestrian, arguably through no fault of their own. There’s an interesting discussion about defensive driving, but let’s just say when Cruise hit that poor woman, it wasn’t their fault. Let’s just say that, okay, ’cause that’s mostly true.

Sure. And then losing track that there’s a person trapped under your car and deciding to move the car.

Anthony: I gotta interrupt Phil here for a second and explain a little bit more about what he’s talking about. What he’s talking about is an incident that happened in San Francisco where an autonomous vehicle dragged a woman.

And to start that story off, we have to meet this guy. This is Kyle Vogt.

Kyle Vogt: Thanks for having me, Emily. Fortunately, our vision remains unchanged. We’re still working on bringing driverless cars to deployment at scale, and it’s been a really exciting year for us. Just in the last few months, we did our first driverless rides on public roads in a major US city.

It’s the first for any company. We’ve got really ambitious plans to scale from here to be in more locations, more cities, and make a really great product with people.

Anthony: Kyle recorded this about four months ago. Two months after that, he was out of a job. Quoting from the Washington Post on October 2nd, 2023:

A pedestrian crossing a busy intersection was struck by a regular car Monday night and hurled beneath a Cruise autonomous vehicle, where she was trapped for several minutes. The Cruise vehicle then dragged this woman for 20 feet. The problem is GM Cruise didn’t share their footage and the full details of what happened with investigators.

Later, investigators found out, and this is what led to GM Cruise’s demise. Now, back to my conversation with Philip Koopman.

Philip Koopman: That’s a different problem that may not have happened to anyone who’s listening to this. I hope not, ’cause I don’t wanna be that driver. Okay. But many people listening to this probably have been involved in a pedestrian collision.

’Cause that does happen. And the first time you’re involved in a pedestrian collision, as an ordinary person... now, people will just freak out and do something crazy, and that happens with a small likelihood, I get that, ’cause people are people, right? But most people will stop the car and say, I just hit someone.

What? They’ll say, I’m gonna get out and take a look before I decide to move my car out of traffic. And the Cruise vehicle didn’t do that. So that’s an inhuman mistake. And that’s the kind of thing that had never happened to any of those cars before. They have hit people, actually there was a fatality in Tempe, Arizona, so there have been a few, but this is the first time one decided to move even though it had hit someone, because it decided it was a side hit instead of a front hit.

And, because reasons. But it’s, what in the world are you doing moving your car without actually being sure where the person is? And so you could say they’ve never seen it before, but a person would probably get that right. People are pretty good at reasoning about novelty, and machine learning is all about, did I train on the data?

No, I don’t know what to do.

Anthony: Yeah. So for situations like that, these edge cases, how much training needs to happen there? Because any sixteen-year-old with a learner’s permit would think, hey, I’m gonna stop the car. I think I just hit something. I’m gonna go see what it is I hit.

Philip Koopman: Stopping the car sounds like a real good idea.

Yeah,

Anthony: But as a normal sixteen-year-old, even someone younger than that, would do that as a human. How much training would a machine learning system need for it to understand these things?

Philip Koopman: It depends how you go at it, but the naive approach, the straightforward approach, is if there’s a situation you want it to handle, you find a bunch of examples of that situation.

Like thousands of them, maybe fewer, it depends. There’s a lot of "it depends": if it’s seen stuff close to that, maybe not as many, but it’s not one or two. It’s lots. You show it lots of examples and it pulls out statistical numbers about what to do in those situations. And statistics means lots, right? You don’t do statistics off one thing. Okay, there’s a thing called one-shot learning, there’s a bunch of cool things you can apply, so I’m going with the baseline: you show it a whole bunch, hundreds, thousands of examples, and it learns. But where do you get hundreds of thousands of examples of collisions with people?

You have to use a simulator, and now you have to make sure what it’s learning from the simulator is what it should have learned if it were real, and whether it learned off some weirdness or artifact or simplification of the simulator that will translate poorly to the real world. Now you have a testing obligation.

It’s really hard to do this. So stuff they see all the time, this is why the 98% is easy, ’cause you see it all the time; you’ve got plenty of examples. The rare things that are high consequence are harder, because you don’t have that many examples. And so what do they do? They spend a lot of resources going to test tracks.

Waymo in particular, years ago, had this big effort where they would go collect data, they’d have a closed-course track, they’d recreate things, and then they’d use that as the basis for a big simulation campaign. That’s what’s involved to do this.

And the other companies have since started doing things like that. But someone has to think about it. If no one ever thought of it and it’s not in the simulator, you’re not gonna learn it. There are a lot of aspects of safety, but ultimately the limit to the safety of this technology is that if no one ever thought of it and you never trained on it, it’s probably gonna get it wrong when it happens for real.

Whereas people are astoundingly good at most of the time getting it right, even though it’s a complete surprise.

Anthony: So with what we’re talking about, these are what SAE would call level four, level five autonomous vehicles, where the human’s not in control ever and the machine’s driving the whole time.

The levels have so many problems. Just at a very high level...

I did it. I said the thing that no one wants to hear: levels. So I’m gonna turn to Fred Perkins to explain the SAE levels.

Fred: Hi, my name is Fred Perkins. I’m the chief engineer at the Center for Auto Safety.

Anthony: Help me understand what these SAE levels are.

Fred: Alright. SAE J3016, which is a report, not a standard, lays out different levels of automation for vehicles, just to make it convenient so people have a common understanding of what they are. There are six levels. Level zero is basically any car or truck made before 1980, plus any later cars that include things like ABS, automatic emergency braking, or blind spot detection: things that give you information or only momentary help but do not provide sustained control of the car, so they’re not considered automated controls.

Level one is a vehicle that has exactly one such feature, either automatic steering or adaptive cruise control, for example, controlling either the speed of the car or the direction the car is headed. Level two is any vehicle that has more than one, so it would have adaptive cruise control and lane keeping assistance, for example. And because they’re combined, that moves it up from level one to level two.

Anthony: Okay, so real quick. That’s what most modern cars are; they’re level two. Like, my car’s not a fancy car. It has lane keeping assist and adaptive cruise control.

Fred: Yeah, that’s level two. Essentially any car you’re gonna buy today would be a level two car. There may be exceptions, but it’s very common.

Anthony: Okay. And this is essentially what Tesla sells. I just wanna clear that up for people, ’cause they call it Full Self-Driving, but it’s just lane keeping and adaptive cruise control.

Fred: According to the lawyers anyway, sorta.

But the thing is, Tesla pushes the edge of this, because these features are not intended to be self-driving. After a couple of seconds in my car, I get a warning that says, get your hands back on the wheel, or do something. Tesla, though, will allow people to go for a long time, minutes or miles, essentially having the car control itself, which was not really the intent of level two; that was really the intent of level three. So they’re kind of hedging things on that. And again, SAE J3016 is a report, not a standard, so nobody has to conform to it. It’s just information.

Okay. So level three is self-driving capability designed for extended-duration control of both speed and steering of the car. But it has a requirement that if it ever fails, or if you ever get a notice from the car, you, the human being, have to take over immediately. This is a huge problem, as you can imagine. If you were distracted because you’ve been going along for miles, reading a book, picking your nose, whatever you’re doing, and all of a sudden the car has a failure, you’re supposed to jump in and figure out what’s going on. That’s very difficult. People do a poor job of this. It’s a very difficult thing to do.

Anthony: But if you have enough money, you can get that in a Mercedes right now.

Fred: You can get that in a Mercedes, and I think BMW’s got that too, but it’s very restricted. It’s only for basically stop-and-go traffic on limited-access highways. It is available, but you won’t go far, and you won’t go far fast. So level four is self-driving, meaning you’ve got both speed control and steering control, but within a specific operational design domain, which is the combination of geography, traffic conditions, time of day, and all those kinds of things for which the car is designed.

Now, you may need human control outside of the ODD. For example, let’s say the ODD covers interstate highways. You may need a human to drive the car to the interstate highway, but once it’s on the interstate at level four, there’s no need for additional human control. It’s designed to have emergency fallback provisions that will stop the car safely or put it into a safe state, so the human can in fact read a book at that point, or pick their nose as long as they want.

So that’ll be just fine at level four.

Anthony: And this is what we see as robotaxis out in Arizona and San Francisco.

Fred: Yeah, that’s right. But they’re considered experimental still, I want to point out. Yeah, level four, but no individual person can go out and buy a level four car right now.

Just wanna make that point. So these are strictly experimental vehicles, owned and operated by the companies that are developing them: the Waymos, you might say Tesla, who knows about that one, Cruise, and I think there are several others out there, but those are the major ones.

So level five is Nirvana; it’s unattainable. No human control is needed: self-driving with an unlimited operational design domain, unlimited conditions, unlimited time of day. That will probably never happen. People are designing for that in certain circumstances, but it’s like an extended level four where they just figure out what to do about the steering wheel and the brakes.

Anthony: And that’s what we see in the movies, like Minority Report, or My Mother

Fred: the Car, all those kinds of things. Yeah. Great stuff, but it’ll probably never happen.

External Video: Levels. Levels. Yeah. I’m getting rid of all my furniture, all of it. And I’m gonna build these different levels, with steps, and it’ll all be carpeted.

Anthony: Alright. Do we all understand levels now? Great. Let’s go back to Phil.

Philip Koopman: But yes, we’ll take that as read. Also level three, because at level three the person’s been told they don’t have to watch the road when it’s activated; that’s the traffic jam pilot stuff. If the driver has been told you don’t have to watch the road...

It’s all the same as far as I’m concerned. Either the computer’s driving, or the person’s driving and watching. If the person’s not watching, guess what? It’s all on the computer.

Anthony: My question around this is: level two means a wide range of things, but my simple car has level two systems.

It has lane keeping assist, it has adaptive cruise control, it has automatic emergency braking, and it works okay, pretty good. And what I’m wondering is, no one has really nailed automatic emergency braking; it hasn’t been defined, it seems, and it doesn’t work in all cases.

Shouldn’t things like that, or even just lane keeping assist, like my car gets confused when I’m on the highway and an exit’s coming up ’cause it loses sight lines for a bit, shouldn’t those pieces be solved before taking the leap to complete autonomy, humans not involved at all?

Philip Koopman: If you wanna spend money funding a level four robotaxi company, our capitalist society lets you do that. So the reality is:

What’s your objective? If your objective is to make money running a cheaper taxi service, then robotaxis are what you invest in, and that’s okay. I don’t have a problem with that. If your objective is to improve road safety, robotaxis seem like an awfully expensive way of going about it, given that we already have proven ways of improving road safety that you don’t have to invent new technology for.

So it depends what your objective is. Okay, got it. And the thing is, if it’s their money, they get to spend it; we’re back to capitalism. They’re not there to improve road safety. They’re there to make money. And so telling them, you should spend your money on road safety instead of making money, that’s a non-starter in our society. And I’m also okay with that. People say, we should spend this money on X. It’s their money. Don’t tell them how to spend it. That’s not how it works in this country.

Anthony: But a lot of these companies, they make the argument that these are safer than people.

So they’re selling it by saying, hey, this is safer, better.

Philip Koopman: Okay. So that’s a whole different thing: you spending your money versus what are you selling. And they’re not in this business, raising billions of dollars, to save lives. They’re raising billions of dollars to make money.

And that’s okay. Again, that’s okay; that’s our society. Now, they’re saying, you should let us take these risks as we develop the technology, because we promise to save lives. Okay. Now, that’s interesting, because there’s no data to show that they will save lives. In fact, the very latest report

from Waymo, the 7 million mile report, which by the way, their technical stuff is fine, their public relations messaging I have an issue with, but their technical stuff is fine, their report very clearly says 7 million miles is not enough data to know how fatalities are gonna turn out. And that’s absolutely right.

’Cause at a hundred million miles between fatalities, it’s 7 million down, 93 more million to go before we have any idea how this turns out. And that’s where we are. So if you wanna say, we want to save lives, so you should give us money to spend developing this technology, that’s great. If you wanna say, we know that our technology... and to be clear, this company is not saying this, but it’s something one could say:

you should let us kill people now, ’cause someday we’re gonna save lives. The problem is that the someday-we’re-gonna-save-lives part is completely speculative and hypothetical. So the more nuanced message that they’re actually saying is, you should look the other way if we disrupt traffic and a bunch of other things.

You should look the other way and put up with inconveniences, and with interfering with emergency responders, and all the things that have been happening. You should put up with all that stuff ’cause we’re saving lives. Really it’s: you should put up with the stuff because they hope that they will save lives.

And the distinction matters from a public policy point of view. Now, do they actually believe they’re gonna save lives? They really believe it, and they have a lot of arguments for why they think that will turn out, but it hasn’t turned out yet. So we don’t know.

So how do you see the best systems, these best AV systems compare to responsible human drivers today?

So we take out all the drunks, we take out all the texters.

You’re up at what, 200, 250 million miles per fatality now? Yeah, it’s very hard to do. Those are approximate numbers. It’s very hard to do that math, as it turns out. So that’s not an accurate number, but okay, it’s a lot better.

We don’t know, because we’ve only got one company with 7 million miles; 243 million more to go before we have something like 70% confidence. We don’t know how it’s gonna turn out now. In fact, I was just having this conversation with someone just before this.

Okay. There’s a balance: if you teach the computer to drive in a safer, saner, better way, that will improve outcomes. But if the computer driver has software anomalies, or hits things it can’t handle too often, or just does something bizarre, or there’s a bad software update and 10 cars crash that day because of it, and we’ve just seen bad updates be in the news...

That’s how computers fail: they fail a whole bunch at once. You can have a lot of good days, but if you have one bad day, it’s gonna take a whole bunch more good days to make up for that, and nobody knows how that balance is gonna turn out. Are they safer? They have the potential to be safer.

And in the small, on an everyday basis, there’s gathering data showing that for low-severity stuff they can be safer. But the jury’s still out, and that’s not what they’re selling. They’re selling, we’re gonna reduce fatalities; they’re selling, look at all the fatalities. If you wanna judge them on the thing they’re selling, the answer is nobody knows.
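[Editor’s note: here is a minimal sketch of the kind of statistics behind the miles-and-confidence numbers Phil mentions above, assuming fatal crashes arrive as a Poisson process and using the rough 200-250 million miles per fatality benchmark from this conversation. It is an editorial illustration, not the calculation Waymo or Phil actually performed, and the exact outputs depend on the assumed benchmark.]

```python
import math

# Poisson "zero events observed" argument: after M fatality-free miles, we can
# claim with confidence C that the true rate is at or below a benchmark rate r
# only when  exp(-M * r) <= 1 - C,  i.e.  M >= -ln(1 - C) / r.

BENCHMARK_MILES_PER_FATALITY = 200_000_000   # low end of the "200, 250 million" range above
rate = 1 / BENCHMARK_MILES_PER_FATALITY      # benchmark fatalities per mile

for confidence in (0.70, 0.95):
    miles_needed = -math.log(1 - confidence) / rate
    print(f"{confidence:.0%} confidence needs ~{miles_needed / 1e6:.0f} million fatality-free miles")

# Roughly 241 million miles for 70% confidence and about 600 million for 95%,
# versus the ~7 million driverless miles in the report discussed above,
# which is why the honest answer is still "nobody knows."
```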

Anthony: Okay. So from Jeff we’ve learned how far autonomous vehicles have come, and Phil gave us a lot more insight into that last hard bit: how they’ve developed and how they’re operating today. But if they can’t prove the thing that they’re claiming, hey, we’re gonna be better than humans, then how are these things getting on the road?

I don’t understand it. So let’s go back to Fred Perkins. Right now, as a human, to get a driver’s license, or, a better example for robotaxis, to become a bus driver, you need training, you need a commercial driver’s license. They don’t just grab anybody randomly off the street and say, you’re now a bus driver responsible for these 60 passengers.

Fred: That’s correct.

Anthony: Yeah. But if I wanna be an autonomous taxi company, a robo taxi company, I don’t have to prove anything, do I?

Fred: Yeah, that’s right. What you have to do is go to the state authorities in whatever state you wanna operate in and make a case that your car is safe enough to go, and somebody will look at that and say yea or nay.

And if they say yea, then you’re good to go. The problem is these are very complex, and there’s no framework for the states to audit these vehicles, to audit the safety of these vehicles. It’s an extremely difficult engineering challenge to go in and figure out if they in fact do have enough safety.

The standards that have been written are typically very loose. They will often say something like: safety means the absence of unreasonable risk. Period. What does that mean? What the hell, right? What’s reasonable and what’s not reasonable? What they’re really doing is throwing the whole problem over to the courts to decide, after the fact, whether an injury was caused by an unreasonable amount of risk or a reasonable amount of risk. So it’s a real problem from that perspective.

Anthony: Unreasonable risk. I don’t know. I think it’s time for me to call in the big guns, to call on the lawyer, Michael Brooks.

Michael: I am Michael Brooks. I’m the executive director at the Center for Auto Safety.

Anthony: Michael, I was just talking to Fred and I asked him how it is these things are getting on the road. And you’re a lawyer.

From that policy point of view, how is it possible that my kid had to take a written driver’s test, has to take a physical driver’s test, prove all sorts of things, and get insurance, but I can say, hey, I’m a robotaxi service, let me go free?

Michael: There’s not a lot out there that’s gonna keep out a company that wants to put these vehicles on the road.

There are some state laws that require them to have insurance, basically requiring them to prove that if there’s a big problem or a crash, they have the money to, you know, assist victims, that type of thing. They’re basically just making sure that this is an actual legitimate company that’s coming onto the scene with this fancy new vehicle that drives itself.

Other than that, there are no federal laws that prohibit the deployment of autonomous vehicles. Passenger vehicles do have to meet the Federal Motor Vehicle Safety Standards, none of which address automated vehicles or any of the systems involved in the driving that computers will be doing in those vehicles.

So it’s basically the Wild West at this point. California has taken somewhat limited steps to require some crash reporting of sorts, and the federal government has done the same thing with its standing general order, so they’re keeping tabs on incidents.

None of that’s gonna prevent you from building one of these things today, putting it right out on the road, and offering it as a service. There’s just not a lot of real high bars. You don’t have to prove without a doubt that your autonomous vehicle is safe before you put it on the road.

Anthony: But right now, to drive a car, I imagine the federal guidelines say, hey, there has to be a driver in the driver’s seat. Are they getting some exemption saying, no, we’re not gonna have a driver?

Michael: While federal regulations are written with the assumption that there’s a human driver in the driver’s seat, there’s no requirement for such a thing.

Anthony: Oh no, that’s disturbing and frightening.

Michael: It sounds so obvious. I guess it was so obvious when they were drafting these things that it seemed like something you don’t need to require. Of course a car’s gonna be driven by a human.

Anthony: But some of these companies have wanted to put out cars that don’t have steering wheels and brake pedals and gas pedals. They would need an exemption for that, right?

Michael: That’s where, if you’re going to do something that’s contrary to what’s required by the Federal Motor Vehicle Safety Standards, you’re going to have to try to get an exemption, and no one has really attempted to use those provisions. We’ve seen a couple of exemption petitions filed and withdrawn for vehicles; I believe the latest was the GM Origin, or the Cruise Origin, and that plan fell apart in the wake of what happened with Cruise. There have not been a lot of manufacturers coming forward and saying they want to build their own vehicle from scratch.

A lot of these systems are being put into vehicles that are already built to meet the motor vehicle safety standards: GM Cruise using GM vehicles that already meet the safety standards, and Waymo using a variety of vehicles, some Chrysler vehicles, some Jaguars.

Basically, a lot of these companies are just using vehicles that are already meant to be driven by a human and putting an interface over that that makes it an autonomous vehicle.

Anthony: So we’ve learned that it is the Wild West. There are no real standards around how these things should operate or what they should do. What is the Center for Auto Safety doing to try and make it so there’s some base-level requirements?

Michael: Basically, what we’re doing: we see a vehicle being deployed by Cruise that completely ignores a big chunk of safety, which is post-crash safety. And I think what our role is in that process is just making sure that the public, that policymakers, that folks in the government are aware of some of these gaping safety issues that exist in the autonomous vehicle industry.

They make a lot of promises. Not just GM Cruise, but Waymo has been saying this is coming around the corner since 2015, 2016 or so. When I got an opportunity to ride in one, they said they would be operating in 2017.

At this point, Waymo is still operating in certain very small areas of the country, and they plan to expand. But this robotaxi revolution is not really happening. And I think one of our roles in this has been to push back against all of the aspirational thinking that goes into building these massive corporations centered on a technology that may not be mature yet.

There’s a lot of hype, a lot of soundbites, a lot of folks with a lot of money putting that money behind advertising, behind promoting these vehicles, behind lobbying to change state and federal laws to make it easier to introduce them. And we’ve found that our best role in the process is pushing back on a lot of that misinformation.

These promises to communities like the disabled community, that driverless cars or robotaxis are going to save them: we don’t believe that’s a hundred percent genuine. We think the companies have been using a lot of those arguments as a front to cover up some of their deficiencies while getting favorable looks from government and from the people who are potentially going to regulate them.

Anthony: One of the big complaints we hear from the AV industry is that anytime there’s an accident with one of them, they say the media blows it out of proportion. Oh my God, yes, a robotaxi smashed into a fire truck or drove into wet cement, and the media has a field day with it.

In my mind, that’s like when there’s a plane crash. It’s a very rare event, it shouldn’t happen, it’s unexpected, and that’s why we get surprised. So with AVs, when that happens, ’cause they’re sold as being better than humans, this is what’s happening here: we shouldn’t be so surprised.

But what’s surprising to me is that NHTSA is not like the FAA, the Federal Aeronautics Administration; they actually have engineering guidelines.

Michael: I think that’s the Federal Aviation Administration, right? Am I wrong?

Anthony: What?

Maybe. Whatever. The Federal Aviation Administration actually has engineering guidelines, or regulations, saying this is how your plane should work and operate in these conditions.

Yeah. Whereas NHTSA doesn’t have that with cars.

Michael: You know, it’s way more specific than that. The FAA has type certification, which basically means they are in the room looking at all the plans for every single part on that airplane to make sure it is meeting FAA standards. It’s essentially a co-certification process between the regulator and the manufacturer.

You always have that regulator looking over your shoulder, or you should, as we found out with some of the 737 MAX issues. You should have a regulator looking over you all the time to make sure that you’ve got the appropriate certifications and that the aircraft is gonna be safe. With cars,

you’ve got none of that. You’ve got a self-certification program that manufacturers have been bound to since the late sixties, when NHTSA was created, which basically means you are responsible for conducting tests, making sure that your vehicles meet federal standards, and then signing off on that.

The government at NHTSA may do some limited auditing to make sure that you are actually doing your homework on the back end and doing those certifications. But ultimately, there’s not a collaborative process where you have a government agency making sure that you’re building safety into all of your vehicle components.

And at the start of that, you said how AV manufacturers tend to suggest that the media is overreacting. I would suggest they’ve set that trap for themselves. No one else is out there promising that these cars are going to be better than humans.

It’s just the guys trying to sell them. And so when a car ends up in cement, or runs into the back of a bus, or can’t tell what a fire hose is and interrupts firefighter operations, or drags a pedestrian that’s already been struck by another car and ended up under the vehicle, when those things happen, most of us look at them and go, I would never do that.

That wouldn’t happen to me. Why is this car that’s supposed to be better than a human doing that? That’s why it generates headlines: because there’s such a clear contradiction between those events and the results that are continually promised to us by the AV manufacturers.

Anthony: And now I’m just at a loss. I’m struggling, like, how do I wrap this episode up?

Michael: It’s hard to wrap something like this up in a nice pretty bow. There are just far too many questions left to be answered, and a lot of it depends on how consumers and the public accept these vehicles. We’ve seen that they’re not really willing to accept what Cruise was offering in San Francisco; we saw a lot of problems there. Waymo’s having a few of those similar problems, and we’re seeing a little pushback on them.

But there’s a lot that’s going to continue to occur in this space, and it’s pretty clear right now that even the major players don’t have all the answers. It’s a very speculative industry, and I don’t know what kind of music you’d wrap the episode up with. It would be something perplexing, some style of music that leaves the listener hanging and doesn’t really give them any answers,

’cause we don’t have those yet.

Anthony: I’d like to thank today’s guests: Dr. Jeffrey Wishart, Philip Koopman, Fred Perkins, and Michael Brooks. Hopefully this helped you understand a little bit more about autonomous vehicles. If you’d like to find out more information, visit Autosafety.org.