Part 2 with Philip Koopman and William H. Widen

Our guests stuck around for another episode. In this one Phil explains why “driving modes” make much more sense than SAE Levels, and I have to agree. Levels are meaningless since every OEM seems to make them mean whatever they want. William explains more about liability and how the laws are not keeping up because there has not been a pressing issue from the public. Actually, the way he says it is much better, but you’re going to have to listen to find out.

Subscribe using your favorite podcast service:


note: this is a machine-generated transcript and may not be completely accurate. It is provided for convenience and should not be used for attribution.

Anthony: You are listening to There Auto Be A Law, the Center for Auto Safety Podcast with Executive Director Michael Brooks, Chief Engineer Fred Perkins, and hosted by me, Anthony Cimino. For over 50 years, the Center for Auto Safety has worked to make cars safer.

Philip Koopman: Hey,

William H. Widen: listen. Good morning. Good morning.

Anthony: Traditional welcome, welcome back. Joining us again, we have Professor William Widen from the University of Miami School of Law and Philip Koopman, Associate Professor at Carnegie Mellon University and a man who will not call a Tesla recall a recall. He wants to call it something else.

Philip Koopman: It is a notification of safety defect. I like that, and then the Tesla fans can’t hassle you about saying it’s not a recall. Of course it is. It’s a recall. The recall is the administrative process and the over-the-air update is the remedy, and this is like the only show I ever hear that gets that right.

But yes, we’re on the same page there.

Anthony: Excellent. I love that. I wanna try and start using that. I wanna follow up on what we were talking about last episode. So let’s jump into a little bit more on duty of care. I think, Michael, you had some specific questions there.

Michael: We talked in the last episode about the SAE standard and the levels one through five and how those have been written to apply there.

We use them a lot. Sometimes it’s hard to refer to these different systems without reference to the SAE levels, particularly when we’re talking to someone who’s not that familiar with these things, but,


Philip Koopman: You know, the SAE levels: if all the other kids jumped off a cliff, would you jump off a cliff?

Michael: No, I would not. And that’s one reason why, when I first read your book, Phil, I was really attracted to the driving modes idea. I think it’s much more explainable to people than using numbers to denote these things. You’ve got the testing mode, you’ve got the other descriptors that actually capture what you mean, versus saying level two, which is essentially meaningless.

Philip Koopman: And nobody even knows what level three is, really. Every time I hear it described, just about everyone gets it wrong compared to the document. Yeah, I agree. So why don’t I dive in and just do the four modes to kick us off, since they’re basically my fault, so I might as well be the one to do that.

All right, so I have this proposal that’s been kicking around for a while, and it shows up in interesting places. The levels are for engineers, but we need something that works for regulation and works for ordinary folks, and people get so confused about the levels, especially two and three.

And “level two plus” isn’t even in the standard. It’s not illegal, it’s invalid, and who knows what that means. So instead I took a bunch of bites at this apple and I finally got the one I like, and it has four modes. One mode is testing. If you’re testing, then you need a qualified test driver, somebody like an employee or a contractor.

If it’s testing, it should not be in the hands of the general public. Done there. Okay. Then there’s three other modes. One, it’s conventional. It’s a car. It might have adaptive cruise control, and I hope it has anti-lock brakes and automatic emergency braking and stuff like this.

Okay, that’s a conventional car. Then there’s the autonomous vehicle, think robot taxi, robot truck, and an autonomous vehicle requires no human attention to drive. It just drives itself. There’s no standby driver, there’s nothing else. Which leaves the messy bit, the awkward middle. We have a whole paper calling it the Awkward Middle.

And what I did, in essence: J3016 is constructed in a way that’s so messy that what I’m gonna say is almost true, but not quite. It’s levels two and three. It could be level one, but no one actually does that. So it’s levels two and three put together, because in the end there’s supposed to be a driver in the seat working with the automation, and they have some jobs, and it’s all a continuum. And breaking it between level two and level three is completely arbitrary from a consumer and regulatory point of view. So anything where you’re taking credit for a human being around to help the automation is supervisory mode.

The human is the supervisor. The car isn’t helping the person drive; the car is driving and the person’s helping. Even when you have automated steering, even on a level two system, it’s the car that’s driving. Don’t tell me you’re driving. You’re not even holding the wheel, you’re glancing away from the road.

You’re playing with your phone, right? You’re not driving. The car’s driving and you’re trying to pay attention to supervise it, no matter where you are on the level two plus to level three scale. So we call that whole thing supervisory. So you have testing, which is self-explanatory. You have conventional.

And the distinction we make is: if the car is not doing sustained steering, you’re conventional. Once the car does sustained steering, then you’re supervisory, if there has to be a person somehow involved. And when there’s literally no person responsible for driving, now you’re up at autonomous.

Anthony: It’s four modes.

What do you mean by sustained steering? What’s the length of time?

Philip Koopman: So that’s a J3016 concept, ’cause I tried to be compatible. Okay, so sustained steering. The useful distinction is there’s a thing called lane keeping assistance and a thing called lane centering. For lane keeping assistance, the image I like is when you take little kids bowling and you put the bumpers in the gutters, and the ball bounces off the bumpers.

When you’re going down the highway, it’s steering you back into the lane when you hit the dotted line. Most of the time it does that anyway. No one would confuse that with good driving. So that’s not sustained steering. It’s momentary intervention that’s bouncing you around, right?

But as soon as the car’s actually doing the steering, that’s lane centering. So lane keeping assistance is bouncing you around; lane centering is what it says, it’s keeping you in the lane for more than a count of three or a count of five. The border is fuzzy, but think of it this way. Setting aside warnings, right, if you can take your hand off the wheel for five seconds, even on a curve, and the car’s gonna track the lane, that’s sustained steering. Got it. And lane centering is the typical feature you hear about that has that capability.

Anthony: Okay. So the next time it’s enabled in my car, I should just tell my wife, no, I’m supervising the car.

It’s not driving itself.

Philip Koopman: No, it is driving and you’re supervising. Okay? To the degree it’s steering, what do you call it? You’ve got gas, you’ve got steering. If it’s doing both, I call that driving. Now, technically it’s vehicle motion control, but I think it’s a better mental model to say you’re supervising the automation than to say the automation is supervising you.

’cause that’s just not true.

Anthony: Okay. Either way, it’s just a tough sell for me.

Philip Koopman: Yeah, but I call it supervisory. You’re supervising the operation of this automation. To be clear, you retain some or all responsibility for safety, but we should acknowledge that people are terrible at that, and the regulation should accommodate that reality and not set people up for failure.

That’s a separate thread of discussion, though. The idea is: if you’re testing, you had better be a trained test driver, because the car is gonna have defects, and no retail customers should be testing. That’s just a really bad idea. If it’s conventional, you’re driving.

Okay, you’re driving. If it’s a robot taxi, it’s fine to fall asleep in the back, that’s the point. And if it’s supervisory, you are responsible for providing some aspect of safety, and depending on the car, that can vary wildly, from it’s okay to watch a movie but be there when the bell rings, to jump in every 10 seconds,

’cause this thing just knows about lane centering, doesn’t know about obstacles, or anything in between. Any attempt to divide it more finely than that falls apart, so we’re just gonna call it supervisory. And the good news is you can actually create rules that accommodate that whole span, a very simple set of rules.

Anthony: Okay. And that supervisory mode, we’ve talked about that in a separate episode, where humans are really bad at saying, okay, the car’s in control for the next 10 minutes, and then, oh my God, I have to take over.

Philip Koopman: In supervisory, what you want is the car is responsible for managing the driver’s attention.

So if the car’s concept is that it’s okay to watch a movie, then the car has to be able to deal with the reality that people take a long time to disengage from watching a movie. If the car says, I expect you to notice the disabled fire truck in your lane with a three-second warning, then the car had better be doing gaze tracking and making sure that your eyes are on the fire truck, if you’re supposed to notice and react to it.

Michael: So when it comes to the duty of care in this situation, functionally, you are driving yourself down the road and, say, you enable Autopilot or whatever the system is that is a supervisory system. There’s a kind of a handoff. You’re not just handing off control of the

Philip Koopman: vehicle. Yeah. It gets tricky.

You’re handing off,

Michael: off duty of care. You’re handing off a legal concept to your car anyway.

Philip Koopman: Right. And so it isn’t about who’s driving from a regulatory and legal point of view. It’s about the handoff of the duty of care. That’s exactly right. And that follows the who’s-driving idea.

Lemme do the three easy ones first. In testing, the manufacturer should be responsible for making sure the test drivers are qualified and behaving responsibly. You can’t pin it on the poor test driver. The manufacturer has to have a lot of skin in the game for testing, ’cause they’re putting potentially defective software out there.

And who knows if the test driver can even counteract the defect. So for testing, the manufacturer has to be front and center. For conventional, guess what, it’s the human driver, same as it always was. And for a robotaxi, there should be a stated duty of care so people don’t have to fight their way through the court system to get that worked out.

It’s inevitable it will end up there, and we should not put obstacles in the way of people getting there. We should just say: for a robotaxi, the duty of care is the manufacturer’s. Done. Let’s move on to the hard one, which is the messy middle.

Michael: And so at that point, the car has to make sure that the human is capable of taking over, right? In order for a duty of care to transfer. There’s a lot that goes on in the

Philip Koopman: handoffs. Yes. Yeah, go ahead, Bill.

William H. Widen: I would say that you have to understand what the strategy of the companies is when a human is in the loop.

Okay. In a level two, Tesla’s owner’s manual says you have to be ready to take over at all times. And I think at one point, maybe it still does, you were supposed to keep a hand on the wheel. Okay. We’ve seen ads where Elon Musk himself doesn’t do that.

Philip Koopman: It still wants you to keep a hand on the wheel, even though some testing shows you can go 30 seconds to two minutes with your hand off the wheel before it gets upset.


William H. Widen: But the point is that the human in the level two plus-plus Tesla, assuming you agree that’s what it is, is the receptacle of liability. Tesla has mostly been successful in saying, when there’s an accident, that it’s the human driver’s fault, because the concept is that the human driver is responsible at all times.

It gets a little more complicated with a level three like Mercedes, but even Mercedes contemplates that the human driver is responsible for safety. And so the way those companies approach things, as long as there’s a human in the loop, the duty of care remains with the human.

Philip Koopman: But let me unpack that.

So in any level two, two plus, add as many pluses as you want, right now the duty of care stays a hundred percent with the human. And if there’s a crash, it’s on you to prove a manufacturing or design defect. Good luck with that. That’s really tough. For level three, if you have a plain reading of what level three is supposed to be, the duty of care should transfer to the vehicle when you press the go button, and transfer back 10 seconds after you get the warning. That’s the European ALKS standard. But in fact, when Mercedes says “we assume we’re responsible,” what they mean is you can sue us for product defects, and you actually retain the duty of care even while you’re watching a movie, which is problematic.

William H. Widen: It seems to me it’s almost a deceptive practice to say “we give you your time back” when in fact the price of getting your time back is that you are liable if the machine hits someone. It sounds like they’re setting a trap. Autopilot is on; are they setting a

Michael: trap in a way?

Philip Koopman: No. There’s a name for this.

It’s called the moral crumple zone. The idea is you have a piece of equipment that you know has a design problem, and this is not unique to cars. This happens a lot; I’ve seen this in action in many places. You have a system, equipment, that you know is gonna malfunction and cause a loss, and your plan is to figure out which human is conveniently nearby and blame them instead.

Anthony: So, what we were just talking about, this is what I wanted to ask you about from your article. This is from last August, and I’m gonna quote from it: “Second, there is the thorny question of liability following a takeover request. Does the operator of the level three vehicle potentially have liability for any accident immediately following a takeover request, or only for accidents occurring after a grace period, such as the 10 seconds specified for ALKS?”

So this is

Philip Koopman: the wrong number. Zero’s the wrong number. We saw reports of Teslas having crashes where Autopilot disengaged less than one second before the crash. There was speculation as to intent, which I don’t know; there are valid technical reasons for that to have happened with no malicious intent.

So I don’t know either way. But it’s unreasonable to say, alright, the car drives you over a cliff, and a tenth of a second before it goes over, it says, oh, by the way, you got it, it’s your fault. That makes no sense at all. So it can’t be zero. And the question is, what’s the grace period between the time the alarm buzzes and says, hey, human, jump back in here, and the time the human actually should be held responsible.

Anthony: Because the Mercedes system, that’s restricted to what, traveling below 40 miles per hour only, and there has to be a car you’re following in front of you. So the scenario where you’re gonna get into an accident is relatively

Philip Koopman: minor.

They call it Traffic Jam Pilot.

Anthony: Yeah, so you’re not gonna be traveling at 60 miles per hour plus and hitting some object at 60 miles per hour. If you get into an accident, you’re getting into a fender bender, which hopefully, knock wood, no one gets hurt in.

Philip Koopman: Or the reason there’s a traffic jam is there’s a crash and there’s dazed crash victims walking out into the highway and I hope it sees them.

Anthony: Oh boy. See, I assume the Mercedes would handle that.

Philip Koopman: That’s what you get when you bring the safety guy to the party. It’s always a lot of fun. Yeah. Great.

Anthony: Let’s talk about the weather now.

Philip Koopman: Hey.

Fred: Lemme jump in. I got a question. Several years ago, Elaine Herzberg was killed in Arizona by a self-driving car that at the time was owned by Uber.

They’ve gotten out of it, but she was killed, and there was no duty of care as far as I know. Based on what you’ve been telling me, what was the legislative and legal response in Arizona to that happening? Because it was legal for them to operate the car, and Elaine Herzberg was killed by the car. You would think something would happen, right?

There’d be some correction of the regulations.

So what did happen?

Philip Koopman: The state did a duck and cover, and Bill can talk about what happened to the safety driver.

William H. Widen: Yeah, the first point is that the prosecutors decided not to prosecute the company. Okay. And if one had a theory of a computer driver, one might have said, oh, you could prosecute the company.

Now, this was during testing and what was found was that the test driver apparently was distracted by looking at her phone. And so she was not able to intervene, although,

Philip Koopman: To be clear, Bill, that was disputed, but continue; that is the official story. She disputes it, but the official line is that she was distracted.

William H. Widen: Yep. And that means that she was not doing her job of paying attention at all times.

Philip Koopman: To be fair to her, she says she was distracted by a company-mandated phone as opposed to personal entertainment. That’s her account. But continue; either way, the point was it was alleged that she wasn’t doing her job.

William H. Widen: She was there specifically to prevent the accident, and she was charged with, I forget in Arizona what the technical term is, but essentially negligent homicide.

Philip Koopman: And she copped a plea, is what it boiled down

William H. Widen: to.

Fred: So did the legislature jump in and say, this is an obvious defect in the laws, we need to correct it?

William H. Widen: No, they didn’t do anything.

Philip Koopman: That’s the moral crumple zone in action. Uber doesn’t have to do anything to the car, and if the human is

William H. Widen: responsible, then there’s nothing more to do.

The idea is that the human was there to prevent the accident. The human didn’t prevent the accident. The human was the proximate cause of the death, and it had nothing to do with the car. Now, if this had been in testing mode, okay, under Phil’s four modes, right, one could say that the company has responsibility for accidents that happen while they’re testing

Philip Koopman: For supervising testing.

Bill mentioned J3018 earlier. That SAE standard was heavily revised to incorporate the lessons learned from this Uber ATG fatality. And so any company who’s not following it is risking a replay of that outcome,

William H. Widen: Risking a replay?

Fred: Physically, but not necessarily legally.

There’s still,

Philip Koopman: The replay was pretty favorable to Uber legally. They had other issues, but legal repercussions weren’t among them.

Anthony: So under your modes, I’m still a little confused here. When I go from supervisory to fully autonomous, I don’t remember, what exactly did you call that final mode?

Philip Koopman: Oh, autonomous. Yep, autonomous. Hey, so I picked the right word, ’cause you came up with it easily instead of it being obscure. Look at that. You’d be surprised how much work it took to come up with.

Anthony: No, I’m sure that one was probably the hardest part.

Autonomous mode, this is my question: at what point does it separate who’s liable? Is it me, the owner of the vehicle, or is it the computer driver that’s liable? And hey, I just got an over-the-air update yesterday. Do I have a new computer driver?

Like who’s responsible? Or is it just my insurance company?

Philip Koopman: So once you have the modes, then you can hang a liability framework off the modes. And the concept we have is that if it’s autonomous, the manufacturer is presumptively responsible for anything bad that happens, because they’re the only ones who can really do anything about it.

If the software has defects, what’s the owner supposed to do about that?

Anthony: Keep my lidar and cameras clean,

Philip Koopman: So there are issues like, did you clean it, did you maintain it. But you want the manufacturer to be the first stop for recourse, and the manufacturer has two options. One is to say, hey, my car’s not gonna operate if the lidar is too dirty to operate.

That sounds like a good idea, right? My car is not gonna operate if the maintenance hasn’t been done. And then the other thing they can do is, if somebody maliciously circumvents it, the manufacturer could go after the owner. They could, but the victim shouldn’t have to sort that out.

The victim of any crash needs one-stop shopping, and it should be the manufacturer, ’cause they by far have the most control over the outcomes.

Anthony: So I think, based off of that, there’s no way any manufacturer is gonna sell an autonomous car to an individual within my lifetime.

Michael: I don’t think that was gonna happen either way.

Philip Koopman: There are other really good reasons, like leasing, so that they can do fleet maintenance. ’Cause now you’re talking aircraft-style certified technicians maintaining this technology. It’s gonna be that way for a long time. So a fleet owner, a fleet operational model, makes a lot more sense.

Now, you can timeshare a jet, I’m told; I’m certainly not in that income bracket. You can quote-unquote own a jet, but it isn’t like you’re out there changing the oil yourself. And so I think it’s gonna be more like that.

William H. Widen: One of the fallacies the companies argue is they say, look, this technology saves lives, and if you prevent us or slow us down, you’re killing grandma, essentially. And those claims are really not well-founded, based on the number of miles driven, the diversity of the systems, and so forth. Okay. But here’s the fallacy.

They think that if we reduce accidents by 90%, okay, that would certainly be a great thing. But in the 10% of cases where there’s still an accident, then the question is: I would say the manufacturer should be responsible if in those 10% of cases the AV behaved badly.

To an extent that we would hold a human who behaved the same way responsible. What the industry’s essentially saying is, no, we’re such good guys because we saved all these lives that we get a free hit in this particular case, and we shouldn’t have any liability. That’s effectively the position they’re taking.

In other words, they shouldn’t have liability for a discrete accident case, because sure, that’s gonna happen, but you should credit us with all the lives we saved, and so we shouldn’t have liability in that individual accident case.

Anthony: How do you get over that? ’cause everyone makes that assumption that humans are horrible drivers, computers are great, computers are perfect at everything.

Philip Koopman: Humans are surprisingly good, especially if they’re not drunk. And there are ways other than robotaxis to mitigate drunk drivers; ask pretty much any developed country except ours, and they’ll tell you how you can do that. But the issue is that there’s only evidence that computers seem to be getting better at low-severity crashes.

There’s no evidence whatsoever about high-severity crashes and fatalities. The latest Waymo report comes out and the jury’s still out, right? But there’s more to life than net safety. There’s redistribution of harm. There’s whether you are acting in a negligent way when you cause harm.

As Bill said, let me try and make this very concrete. If a person runs through a red light, and they have a million miles without a single traffic ticket, not a single thing, and they run through a red light and hit a pedestrian, they don’t get a free hit. They’re just as guilty of hitting the pedestrian as someone who has a bunch of tickets. Now, maybe the outcome’s different. Maybe the judge gives them some mercy and some leniency. But guilty is guilty, right? And the car companies don’t want to partake of that. They want to be not guilty ’cause they had the millions of good miles.

That’s a double standard. And what we’re saying is they should be held to the same standard as humans. Humans don’t get free hits; cars shouldn’t either. Now, if they really are 10 times safer and have 10 times fewer crashes, then their insurance costs will be 10% of what they’d otherwise be, and they’re gonna win that way.


Fred: There’s that, but that brings up a question. You guys are out there talking about this, and we’re really glad that you do, because I think duty of care in relation to AVs is a hugely important subject. But it’s only you two guys, and maybe us on the alternate Wednesdays, but,

Philip Koopman: And we appreciate you doing that, by the way.

Fred: Why aren’t the insurance companies screaming about this? It seems like this is an issue that goes to the actual core of the insurance liability business. And what Michael brought up a couple weeks ago is that the money they’re bringing in for premiums right now is no longer covering their outflow for liability damages. Isn’t that what you came up with, Michael? Have you guys had any conversations with insurance companies, or any speculation on why they seem to be asleep at the switch?

William H. Widen: I would say that the insurance companies, I think, are taking a standoff position right now. They’re gonna adjust premiums if their current loss experience shows they’re able to pay within a frame. I don’t think they’re terribly worried about the precision of did you improve, did you not improve.

I suspect if you looked at the rates, you’d actually pay more to insure something like a Tesla. The area of concern that I’ve heard raised in hearings is mostly by trial lawyers, who push a duty of care because they understand that otherwise you won’t be able to get recoveries for small accidents.

Philip Koopman: Yeah. If you have to do product defects, the entire judicial system will fall apart under the weight of those cases, in all aspects. There are not enough experts, not enough judges, not enough lawyers who know the technology. It’s just not gonna work. So the trial lawyers, my understanding is, like duty of care because it takes a case that doesn’t have to have a lot of technical expertise and puts it in a venue that everyone knows how to deal with, instead of gratuitously introducing technical considerations where they’re truly not that relevant to the outcome. If you ran a red light, you ran a red light. Done.

William H. Widen: Yeah, that’s the real reason. And the loss experience in the courts for Tesla is that, I don’t wanna say without exception, but it seems close to without exception, they’ve avoided liability. The way they

Philip Koopman: blame the driver. They sell the narrative: hey, guess what, the retail customer is cosplaying being a tester. So even though our thing does the wrong thing at the worst time, our cosplaying test driver was supposed to catch it, so it’s their fault. And the jury’s apparently just eating that up. Yeah, that’s

William H. Widen: the trap. Yeah, the bigger problem is gonna come, I suspect, when you start having accidents with heavy trucks.

I never

Philip Koopman: wanna go on the road again. So from the insurance industry’s point of view, to get back to Fred’s question, what I’ve seen is there aren’t that many vehicles on the road compared to human-driven cars. It’s just not a lot of vehicles. We’re not talking Teslas; we’re talking hardcore robotaxis and robot trucks now.

And the dynamic I saw was insurance companies getting in line to buy market share, waiting for the market to increase. And they’ll say things like, I’ve seen this publicly in discussions at conferences, we have a $10 million policy limit, and so our downside is limited.

The company eats the rest, and there just aren’t that many cars. So for them, putting 10 million on the betting table to see what happens, to buy part of a huge lucrative market, is a reasonable business decision, ’cause they’re all about the odds.

Insurance doesn’t actually get you safety. It puts pressure for some safety. But just ’cause you can buy insurance does not mean you’re as safe as normal folks would want you to be.

Fred: So is the legislative problem just that the right person hasn’t been killed yet?

William H. Widen: Yes. I would say that’s a big part of it.

Philip Koopman: And to put that in context, I’m not gonna disagree with that. The reason we have regulatory agencies or product bans is ’cause the wrong person was killed, or the wrong set of people were killed, or the wrong optics. It always comes back to that. Look at all the regulatory and safety agencies, or building codes.

I teach a class where I make my students do short stories of historical things. We got commercial building codes because of the Great Molasses Flood, where a huge tank of molasses broke loose and a bunch of people were hit by scalding molasses. And the outcome was we got commercial building codes.

And we go story after story: this is why we have this regulation, this is why we have that regulation. So it should be no surprise that we have a new technology, and we’re in the U.S., where it’s go do what you want, and it may become illegal later, but it’s not illegal if we didn’t say anything, different than Europe.

It usually takes the wrong sort of mishap to institute the regulation, not just the few everyday ones.

William H. Widen: In aviation, Knute Rockne, the famous football coach, was killed. And my understanding of the history is that was a big motivator to get safety regulation for commercial aviation.

Philip Koopman: And we got car safety regulations because of the famous Pinto case. Or no, was it the Pinto case? No, it was the book. You guys know that history. Unsafe at Any

William H. Widen: Speed. That’s your history. And a good example of that is,

Michael: Yeah, the Ford-Firestone rollovers that were occurring around the year 2000, where we got a massive law, the TREAD Act, put in place because of a big event.

And that tracks with the history of auto industry safety concerns. It’s not until some large-scale recall or large-scale defect happens that sheds light on how the industry’s been misbehaving. It’s very difficult to get laws passed to regulate their behavior.

Anthony: are there other products that ever go out to market that.

They basically use the public as on-witting, Guinea pigs, like I was trying to think. Other examples like the drug industry can’t do that.

Philip Koopman: Food supplements, for one.

William H. Widen: look at the train. Look at railroads. For a while, the railroads, they had the legal system in their pocket, but they used to kill thousands of people.

Okay? And they avoided regulation. The industry opposed the air brake. They opposed the automatic coupler. They opposed steel rail cars. And all of those things ended up getting imposed from the outside, because the public was finally outraged at all the accidents: the head-on collisions, the “cornfield meets,” I think they called them sometimes.

And finally the public had to do something. They were partly motivated ’cause people were already upset with the railroads, which were price gouging the farmers at the time, so you had another reason to regulate them, and the two came together. And it wasn’t until the Pennsylvania Railroad decided, in the early 1900s, that they could sell safety that you actually started to get public investigations of accidents, because the Pennsy would tell people what happened.

So the AV industry looks to me like a pre-Pennsylvania-Railroad industry situation, almost to a T.

Philip Koopman: History repeating a hundred years later. Yeah.

Anthony: So when you guys testify in front of Congress and state legislatures, I’m gonna make the naive assumption that at least a couple of those people have gone through tort classes and are familiar with why regulations come about.

Do they understand what you’re saying when you say: hey, look, you’re saying the computer’s liable, so there’s no one liable; or, we need regulations because people are going to die or get maimed or injured? Or do they just look at you blankly?

Philip Koopman: Some understand.

Anthony: Would that be the minority or the majority?

Philip Koopman: Some. I’ll give a shout-out to Representative Shelley Kloba from Washington, who sent me this nice email saying she read my book, and I’m just amazed and flattered that she took the time to do that. So there are some good people out there who really care. Bill, I’m sorry, you were saying?

William H. Widen: I was gonna say, the other thing that has happened, and I think I wrote this on my own, but maybe, Phil, we co-wrote it, is that one of the things people do in their own minds is they wanna create a sense of an emergency.

And if you have an emergency, then people understand why you might suspend the rules. Okay.

Philip Koopman: And so part of the narrative is urgency. Bill, do you mean because “safety is urgent,” registered trademark, or trademark at least, right? It actually is a slogan of one of the companies. “Safety is urgent,” right?

William H. Widen: And so the idea is: we have a crisis on our highways, we’re losing 40,000 people a year, that’s an emergency, we have to act. And the second claimed emergency is: we’re gonna lose to China, and boy, that would be terrible. And so the combination is: yeah, we hear you on the tort stuff and everything, but come on, guys, this is an emergency.

We have to act, people are dying, we’ve gotta save lives, and we’ve gotta beat China. And in their own minds, either for good reasons or ill, they use that to justify why it is that they can cut corners on safety. That’s at least what some of them have said; I’ve heard them say it.

Philip Koopman: Michael, do you think China will beat us?

Michael: No, I think that’s a complete invention. Now, they might beat us by hacking into all of our systems, but that’s not what the whole China scaremongering is about. If you listen to folks on Capitol Hill, they’re worried about money; they’re worried about competition.

It’s continuing to repeat itself. Every time Congress puts out a bill, you’ve got this contingent of folks who seem to think that we’re in this high-stakes battle with China over the future of autonomous vehicles, and if we lose, the American industry is doomed forever. I’ve always found that to be facetious, and I don’t understand where they’re getting it from, but apparently it is a pretty good motivator to get votes for their bills.

William H. Widen: One risk that I think we haven’t really touched on, one risk that we’re seeing with Cruise and the accident, is that public opinion can turn against a technology in a way that actually might not be beneficial for the country. And one of the things that has concerned me greatly is the way that we test automated vehicles.

I am very concerned that a company would disproportionately test in neighborhoods of persistent poverty or neighborhoods that have been historically disadvantaged, and that you would have a high-profile incident where a protected class of person or persons were killed, and then it would come out.

That, yeah, we were testing in the low-income neighborhoods because if we kill somebody, the judgment is less. Okay? Imagine the kind of headlines and social strife and backlash against the industry that kind of accident could create. And so from my perspective, one of the things I’ve been pushing on the side is that when there’s testing in a state, it’s a no-brainer to have a regulation that says you have to have an equitable testing plan, one that assures you’re distributing the risk of testing throughout your community, to stop an allegation that your testing was done in a way that could harm a protected class. And nobody is interested in that, and yet it seems to me very protective of the industry to do something, so we don’t have a high-profile social event over autonomous vehicle companies. They’re already on the margin with the lying, the apparent lying, that Cruise did with their accident.

Anthony: I’ll say it: Cruise was lying. ’Cause I’m not a lawyer, I don’t care. They were lying.

Philip Koopman: To be clear, this is not hypothetical. An enterprising reporter I chatted with while he was prepping a story did a look at all the crashes in San Francisco, and found that a double-digit percentage were in or near the Tenderloin district, which is a famously economically and socially disadvantaged area.

And it was even worse, because that’s one of the most active emergency response areas in the country. And it should be no real surprise that those go together, because a lot of the responses are for drug overdoses and things like that, is my understanding. And so the Tenderloin was really bearing a disproportionate brunt of the various incidents reported by police and fire departments, and the crashes.

A lot of the bad stuff was happening in the Tenderloin, which is exactly what Bill’s concerned about. That’s not hypothetical; that played out in California.

Anthony: A surprise to me is that Waymo, which is Google, they’re based in Mountain View, and there’s no robotaxi service in Mountain View. There’s no robotaxi service in Sunnyvale or Palo Alto or Atherton, or wherever the executives live, where the companies are headquartered.

Philip Koopman: Didn’t they apply for that, though? Did they get it, or is that still up in the air?

Michael: They were also applying to, I believe, travel at speeds of 60 miles per hour.

Philip Koopman: They were applying for Los Angeles, and they were applying for highway speeds, ’cause they want to get to the airport. Not at surface roads. And in Phoenix, I think they started, or they’re about to start, highway speeds to the airport. I’m not sure exactly.

Michael: That’s where I think things get even really interesting, because there’s less time to deal with a crash. I know the driving is probably not as difficult as it is in a city.

Philip Koopman: It’s usually not as difficult, except when it’s not and things go south. But also, pedestrian fatalities are dramatically increased at higher speeds, above 20 or 25 miles an hour. The Vision Zero folks will show you this graph that shows the lethality goes up dramatically above city speeds. So that’s a real concern.
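The graph Koopman references here is typically built from impact-speed fatality-risk estimates. As a rough illustration only (the anchor points below are approximate figures from the AAA Foundation’s 2011 study by Tefft, often cited by Vision Zero advocates, not numbers from this conversation), the jump in lethality above city speeds looks like this:

```python
# Illustrative only: approximate pedestrian risk-of-death anchor points from
# the AAA Foundation study (Tefft, 2011). Impact speed in mph -> probability
# that a struck pedestrian is killed.
RISK_POINTS = [(23, 0.10), (32, 0.25), (42, 0.50), (50, 0.75), (58, 0.90)]

def fatality_risk(speed_mph: float) -> float:
    """Linearly interpolate risk of death at a given impact speed."""
    if speed_mph <= RISK_POINTS[0][0]:
        # Below the first anchor point, scale down toward zero.
        return RISK_POINTS[0][1] * speed_mph / RISK_POINTS[0][0]
    for (s0, r0), (s1, r1) in zip(RISK_POINTS, RISK_POINTS[1:]):
        if speed_mph <= s1:
            return r0 + (r1 - r0) * (speed_mph - s0) / (s1 - s0)
    return RISK_POINTS[-1][1]  # cap at the last anchor point

for mph in (20, 25, 40, 60):
    print(f"{mph} mph: ~{fatality_risk(mph):.0%} risk of death")
```

The point of the curve is the steep rise between roughly 25 and 45 mph, which is why miles driven at surface-street speeds and miles driven at highway speeds are not comparable risk exposures.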

Michael: Yeah, that was one reason: when Cruise and these other manufacturers tout how many millions of driverless miles they’ve driven, it’s somewhat meaningless, from the respect that they’re driving them all at twenty-five miles per hour and under,


Philip Koopman: Right, where the lethality is expected to be low even if you do bad things. But just so we’re complete about that: if you take out the drunks, I dunno, Michael, can we use 250 million miles between fatalities if you get rid of the drunks and the bad actors? It would be remarkably hard to get that number.

Michael: Fred and I have tried to figure out numbers like that, and it’s really hard.

Philip Koopman: Really hard. But let’s ballpark it at two-fifty, ’cause it’s a little below a hundred, depending on miles per crash versus miles per fatality, but a little under a hundred million miles per fatality including all the drunks. And let’s call it two-fifty million without all the bad actors. So, two-fifty million miles: with about seven million driven, they have two-hundred forty-three million miles to go before we get ballpark 70 percent statistical confidence.

William H. Widen: Yeah, but that’s not even right, because the systems keep changing. Every time they do an over-the-air update, it should invalidate the miles driven under the prior system.

Philip Koopman: That’s correct. And you can see what happens when Bill hangs out with a safety engineer and I hang out with a lawyer: we start doing each other’s talking points. Thanks, Bill. You’re absolutely right.
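The back-of-the-envelope statistics in this exchange follow from a simple zero-failure exponential model. Here is a quick sketch (the 250-million miles-per-fatality baseline is the ballpark number from the conversation, not an official statistic):

```python
import math

# Ballpark figure from the conversation, not an official statistic:
# roughly 250 million vehicle-miles per fatality once drunks and other
# bad actors are excluded from the human-driver baseline.
BASELINE_MILES_PER_FATALITY = 250e6

def confidence_better_than_baseline(miles_driven: float,
                                    baseline: float = BASELINE_MILES_PER_FATALITY) -> float:
    """Confidence that the true fatality rate beats the baseline, given
    miles_driven with ZERO fatalities (Poisson/exponential zero-failure model)."""
    return 1.0 - math.exp(-miles_driven / baseline)

def miles_needed(confidence: float,
                 baseline: float = BASELINE_MILES_PER_FATALITY) -> float:
    """Fatality-free miles needed to reach the given confidence level."""
    return baseline * math.log(1.0 / (1.0 - confidence))

# Driving the full 250M miles fatality-free gives roughly 63% confidence;
# "ballpark 70%" takes about 300M fatality-free miles.
print(f"250M fatality-free miles: {confidence_better_than_baseline(250e6):.0%} confidence")
print(f"70% confidence needs about {miles_needed(0.70) / 1e6:.0f}M miles")
```

And as Widen points out, each over-the-air update arguably resets `miles_driven` to zero, because the system whose fatality rate is being estimated has changed.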

Anthony: But that’s a valid question that people don’t talk about. So, Bill, you mentioned they’re saying this is an emergency and there’s 40,000 deaths a year. We’ve talked about that a lot on this program. But it sounded like you were saying that legislators are saying: we have to act, and so our solution is to get rid of the humans.

William H. Widen: The solution is that AV technology is so important that it merits loose or no regulation.

Anthony: But it’s almost like saying witchcraft is so important.

William H. Widen: I’ll give you an example from the railroads. With the railroads, the judges at the time were very worried that regulation through tort liability would hamper them. And the railroads were doing a very useful thing, knitting the whole country together as an economic unit, although they weren’t paying their freight.

And the legal system developed a series of crazy rules under a doctrine of proximate cause that would do things like this: the sparks from a train lit a house on fire near the railroad track, and then a second house caught fire and burned. They would say, oh no, the second house wasn’t “caused”; only the first house was “caused,” because we’re going to so limit what counts as causation that the liability of the company for an accident will be reduced.

And that was just a conscious thing that happened in the legal system. The judges did it, and I think it was wrong, but it’s not unheard of that the legal system in order to promote a technology either through the judges or the regulation, lets things go. Then the question is, are you able to export the costs onto third parties or are you forced to internalize the costs?

And they’re saying: we don’t wanna internalize the costs, because we wanna promote the development of the product. That’s why they don’t want liability, or they only want product liability, ’cause they want a functional liability shield.

Fred: I think there’s an argument that the internet went through this same evolution twenty-five years ago, when the federal government basically said: you can do whatever the hell you want, with no consequences, because this is important.

And of course, when that happened, those regulations became cast in bronze, and we’re still living with them today and seeing the consequences. I suspect that the AV industry is pushing hard on that model, to say: look, we’re in the same position the internet was back in 1990.

We really need to push this forward because of all the lives we’re gonna save and the jillions of dollars people are gonna make, and yada yada yada. But the sub-agenda is that they’re really trying to get the rules in place before people have a full appreciation of what those rules really mean. And once you build a business around those rules, later on it becomes very hard to change any of them. Do you think that’s a tactic they’re consciously employing, or is this just something that’s happening accidentally?


Philip Koopman: They’re absolutely doing this on purpose, state by state. Bill and I have testified in several states. They have a model bill that has all the worst things you can imagine, and they’re ramming it through all the states they can, ’cause it’s much harder to undo later. Now, there’s a difference, though, between autonomous vehicles and the internet. The internet doesn’t involve thousand-pound machines going down city streets accelerating into pedestrians, like we’ve seen some of these vehicles do.

Anthony: So when you guys show up, are you the only two showing up, pointing out these safety implications?

William H. Widen: In Washington, the last hearing that I testified at and Phil testified at, there were quite a number of people from different avenues that testified against the bill is Washington. The trial lawyers is in Washington state.

The trial lawyers had people that testified against it. For one, there were obviously you sometimes get unions that are concerned. They have a off to the side concern because they’re interested in union jobs being displaced by vehicle autonomy. Those are two that we’ve seen multiple times show up.

And then there’s other community-based organizations. That are worried about hindering first responders. There were testimony against what that would do to first responders in Washington cities. And so I would say probably Phil, that it was maybe 70 30 of opposition to the bill in Washington, which is the first time we’ve seen that.

Philip Koopman: Yeah, the tenor has changed dramatically since last summer. Was it the Cruise accident? Yeah, the Cruise mishap had a whole lot to do with it. But not even just the Cruise mishap: the whole mess we saw play out in San Francisco certainly got people in a negative mood. And then the pedestrian-dragging mishap was just the thing that cinched it.

Anthony: I take it all back. Kyle, thank you so much. Sorry.

Michael: One of the things that I think about a lot is, they keep saying, oh, we’re gonna save all of these lives.

Philip Koopman: They say we’re gonna save them, but maybe; we don’t know how that’s gonna turn out.

Michael: And the way they’re doing it is by putting them in cars that go the speed limit, and that don’t have a drunk driver, and these other things.

Philip Koopman: You could do that without robots, Michael. You don’t need that.

Michael: Yeah. We have active intelligent speed assistance, as opposed to passive, which keeps cars at the speed limit, but Americans are completely opposed to that. It’s a very hard thing to get through politically, because people somehow think they have a freedom or a constitutional right to speed and do whatever they want.

But that’s not the case. So why do they think people are gonna want to get into these slow vehicles when they have the option to speed in an automobile? It doesn’t really make sense to me.

Philip Koopman: Let me give you two responses. One is, if you had a fixed amount of money to spend on road safety, robotaxis are probably not the best bang for the buck. But it’s complicated, ’cause it’s investor money rather than public money, and it’s a complicated discussion. But claiming that robotaxis are the only solution is just ridiculous, ’cause other countries have shown us that’s not really the case. There’s lots of stuff you can do, except people have to want to put up with it.

William H. Widen: Yeah. One of the things, and this is a specific industry strategy from some of the companies: they want to turn the interior of the car into something more like your living room, so that it’ll make you an espresso, you can listen to music, you can watch a movie, who knows, right?

And so it’ll be a pleasant experience, and you won’t mind if it takes a little longer to get where you’re going. Of course, that raises all kinds of cybersecurity issues, and I’ll just mention something that worries me. Nvidia has a chip, they already have one, but there’s a new one coming out, that they advertise as being able to run both your infotainment and your critical navigation.

And the idea is that it can do both on the same chip; you use one chip and you’ll save money. Now, that may be true, but one of the things people worry about when you’re running two systems together on the same hardware is the same thing people worried about in airplanes, where someone could use the infotainment system on the airplane to somehow jump over and hack into the navigation. There are questions as to whether that really happened or not, but it was certainly a concern, right? And you could hack into the infotainment system if you wanted to, and if you got it to run hot, because you put a very intensive video game on a loop, the heat could affect the operation of the side that was for mission-critical stuff. And so people just haven’t thought through what it means to meld a travel vehicle with a mobile living room. But they actually said: we wanna have a mobile living room.

Fred: think there’s a, there was a TV documentary about that living room, wasn’t it called the Jetsons?

I think I’ve

William H. Widen: seen that.

Philip Koopman: We, I’ve got a, I’ve got a Roomba, so Rosie’s taking care of, at least the vacuuming part. And that’s about as far as we got.

Fred: Hey, I’ve got a question that hasn’t come up yet. Let’s assume duty of care has been implemented, right? It’s been legislated, and people follow it. How does cybersecurity fit into that? Because if your car is hacked, and somebody else has taken over control of your car, does the duty of care extend to a responsibility for the manufacturer to protect against a cyber attacker, or is that a wholly different discussion?


Philip Koopman: If the robotaxi’s driving, the manufacturer’s gotta be where the buck stops, whatever it is. It doesn’t matter what it is, including cybersecurity, ’cause who else is gonna deal with it?

William H. Widen: Yeah. Unless you can,

Michael: One of the things that NHTSA is relying on right now in not acting on the Hyundai and Kia thefts that are occurring everywhere, which in my opinion is a cybersecurity issue: they are saying that an intervening criminal act, basically, because someone is breaking into the vehicle and using a USB cable to start it up and drive away, absolves Hyundai and Kia of any responsibility under the Safety Act to conduct a recall, or to really fix the problem at all.

So I wonder if the same legal principle would apply even if you have a duty of care as a manufacturer. If you can show that someone from Russia is hacking into the system, does that absolve you of the duty of care?

Philip Koopman: That may toss you back into a product defect.

William H. Widen: Yeah, I think that if you did not use the best, or state-of-the-art, methods to secure your system, then that could be considered a product defect.

If you had a system that was easy to hack into. But if you had done everything that was possible, I could see, and I would do it by legislation, an excuse: if despite all your best efforts to stop a cyberattack one happened, you perhaps could treat that like the person who at the last minute jumps in front of the AV and gets hurt. We don’t expect the AV to be able to counter every single action, and in that case there would’ve been an action by a third party doing a malicious thing. You’re probably gonna end up having some excuses; the law won’t tolerate it if you don’t have some excuses. And in fact, I’d be perfectly fine: if they had state-of-the-art measures that they’d implemented, I’d give ’em a pass on that.

Anthony: Some of this sounds like what we were talking about before, where, Phil, I think you gave this example: if the robotaxi runs the red light, is it the software? Do we look and see what happened? It doesn’t matter; the computer driver made the mistake, so the computer driver’s at fault. So again, if the computer driver’s hacked, it’s like being in a cab with a human cab driver who’s on drugs or something like that. It seems like it’s a distraction to ask whether it was a cybersecurity issue or some regression error in the software. It doesn’t matter.

Philip Koopman: Well, it’s a little tricky. By the way, to Michael’s earlier question, why do people want this? What they want is their own private limousine. And the congestion, it’s beyond today’s discussion, but there’s gonna be a lot of negative consequences for cities if everyone gets their own private limousine, which is what the car companies are trying to sell.

But the reason I go there is, Anthony, what you’re saying makes sense, but let’s consider if it was a human-driven taxi, and the human taxi driver decides to go crazy. That’s not the car company’s fault, right? So I could see this going both ways; it’s a little tricky. But as Bill said, there’s now a cybersecurity standard from ISO and SAE. I haven’t memorized the number well enough, I’ll get it wrong if I try, but it’s 21-something, right? 21434, maybe, some number like that. And if you followed that standard and you followed industry practices, it’s hard to say that you were negligent.

Okay. What’s going on?

William H. Widen: There’s one thing I would say. I commented on the NIST proposed Framework 2.0 for cybersecurity, which I think needed a lot of help, particularly on the corporate governance side, because a lot of the measures that a company needs to take for its products to be secure really have to come from the top down.

And NIST is getting some religion: between Framework 1.0 and Framework 2.0, they actually added a governance function. So they’re starting to recognize that you really need good corporate governance and oversight in order to develop an appropriate cybersecurity strategy.

Anthony: But that’s those damn regulators. They’re just slowing down innovation.

William H. Widen: But see, they do say that if you had complied with the NIST standard, that might be a defense. I guess what I’m saying is, I’m not real happy with the current NIST standard. So if you complied with it, I don’t know how great that would be.

Philip Koopman: Although they’re not. They’re not complying with anything right now, or at least they won’t say they are. And I know the reality is not black and white, but it’s not where you want it to be. All right.

Anthony: Gentlemen, I think we should wrap up here.

I don’t wanna take all of your time, though I think we possibly could, and we could sit here for hours and hours. But we’d definitely love to have the two of you back. Personally, I have to sit on a plane for 11 hours, and I’m gonna see if I can get from the entertainment system to the navigation system.

No, I’m kidding. Please let me onto that plane. So thank you again to William Widen, professor at the University of Miami School of Law, and Philip Koopman, associate professor at Carnegie Mellon University. We’ve got a bunch of links in the description. Thank you to Fred. Thank you to Michael. Thank you, listener.

Bye-bye.

Philip Koopman: Thanks, Anthony. Thanks for having us.

William H. Widen: Thank you.

Fred: Thank you, Anthony. Bye-bye.

Philip Koopman: For more information, visit