Start your own robo-taxi company with Phil Koopman
We continue our discussion with Phil Koopman from Carnegie Mellon University. What is safety? How easy is it to start your own robo-taxi company? Why Waymo's safety claims are short about a billion miles of data, and more.
Like guests like Phil? Donate so we can continue getting more great guests.
This week's links:
Subscribe using your favorite podcast service:
Transcript
Note: this is a machine-generated transcript and may not be completely accurate. It is provided for convenience and should not be used for attribution.
[00:00:00] Introduction and Welcome
[00:00:00] Anthony: You're listening to There Auto Be A Law, the Center for Auto Safety podcast with executive director Michael Brooks, chief engineer Fred Perkins, and hosted by me, Anthony Cimino. For over 50 years, the Center for Auto Safety has worked to make cars safer.
Hey, listeners, welcome back to another episode of There Auto Be A Law. This week we continue our conversation with a professor at Carnegie Mellon University, Phil Koopman. Welcome back, Phil. Okay. Thanks. Happy to be here. And for our listeners, we're just playing a game: we didn't stop recording from last week.
There you go. So if there’s any, if there’s any giant news, we’ve missed it. Sorry.
[00:00:49] Waymo’s Safety Claims
[00:00:49] Anthony: So we were going to continue with unregulated cybercabs, but before we even get into that: as we've pointed out over the last six weeks, [00:01:00] Waymo has been on a real big PR kick, putting out a bunch of articles, as much as possible, talking about how "we are safer than every human possible."
We’re the safest thing, and they have this Swiss reinsurance company saying, yes, they are safer. We had on the show a couple weeks back, Missy Cummings, who pointed out, all that PR stuff’s really just for the Alphabet executives. It’s not for people who work in the industry. It’s not for consumers. It’s not for regulators.
It’s for the people at Alphabet to keep the money spigot on. The average person will read these things and go okay, well, humans bad. Waymo good. Phil, why are humans so bad? I mean, well, what’s this game they’re playing with, with these stats?
[00:01:50] Phil Koopman: All right. So there’s two games. Well, there’s three games. One is insurance.
Insurance and risk management is not safety, right? They're different things, but I think we're going to come back to that. [00:02:00] Okay. Another is that they don't have enough miles to know how it's going to turn out. So if you go to the Waymo safety page, last time I checked, and it's been a few weeks at this point, it says, in effect, and I've talked to journalists, had them read it and confirm that's how they interpret it, it says we're already saving lives.
The sentence is a little more complex and compound, but the takeaway is clearly intended to be: we are already saving lives. But if you go to their technical publications, the peer-reviewed technical publications, they say there's not enough data to know if we're saving lives. The technical publication is correct.
There's not enough data to know if they're saving lives. But when they say safer, they say, well, yeah, we're having fewer small crashes, but that's not the headline. The headline is saving lives, and they don't know. So we don't know how this is going to turn out. As simple as that. You know, safer as in we're having fewer minor crashes.
Well, what do you mean by safe? Saving lives. Well, guess what? There's a disconnect there. And I can make it with really simple numbers.
[00:02:55] Statistical Confidence in AV Safety
[00:02:55] Phil Koopman: Last I checked, they were at 25 or 30 million miles or something like that, which is getting to be pretty impressive. [00:03:00] But an average fatality rate is more like one per 100 million miles.
So they're only a third of the way to the first expected fatality. When they get to 100 million miles, they still don't have high confidence; it's like 60 or 70 percent confidence. They need like 300 million miles to have 95 percent confidence, and a fatality the next day throws that all out the window. And by the way, going 300 million miles without a fatality is improbable, even if they are acceptably safe.
So we’re really not going to know for about a billion miles. That’s it. That’s it. It’s simple math. You’re not going to know for a billion miles how it turns out. And if they said, we don’t know, but we’re looking good, but we don’t have the final answer, that would be honest. But that’s not what they’re saying.
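A quick way to sanity-check the numbers Phil is citing: below is a minimal sketch, assuming fatalities arrive as a Poisson process and taking a human benchmark of roughly one fatality per 100 million miles. The rate constant and function names are illustrative assumptions, not anything Waymo publishes.

```python
import math

# Assumed benchmark: roughly 1 fatality per 100 million miles of human driving.
HUMAN_RATE = 1 / 100_000_000  # fatalities per mile (illustrative)

def confidence_no_worse_than_human(fatality_free_miles: float) -> float:
    """Confidence that the true rate is at or below the human rate,
    given zero fatalities over the stated mileage (Poisson model)."""
    # P(zero fatalities | human rate) = exp(-rate * miles)
    return 1 - math.exp(-HUMAN_RATE * fatality_free_miles)

def miles_needed(confidence: float) -> float:
    """Fatality-free miles required to reach the stated confidence."""
    return -math.log(1 - confidence) / HUMAN_RATE

print(f"{confidence_no_worse_than_human(100e6):.0%}")   # ~63% after 100 million miles
print(f"{miles_needed(0.95) / 1e6:.0f} million miles")  # ~300 million miles for 95%
```

Under these assumptions, 100 million fatality-free miles gets you only to about 63 percent confidence, and roughly 300 million miles (100 million times ln 20) are needed to reach 95 percent, which lines up with the figures in the conversation.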
But
[00:03:40] Fred: isn’t that just a lower limit? I mean, they actually have to do much more because the version of software they’re using to run The vehicles is constantly changing, and unless you’ve got a very, very disciplined and very hard to understand technique for consolidating all those different statistical bases you’re really, [00:04:00] it really gets even squishier.
It really has to be much, much higher to develop that kind of statistical confidence. So for
[00:04:08] Phil Koopman: the 300 million miles without a fatality, you're absolutely right, Fred. But, you know, it could be that they just got lucky, and yesterday they made a change that degraded them to be one tenth as safe as a human; there's no way to know.
So you're right, the 300 million miles with no fatality to prove 95 percent confidence is under extremely unrealistic and idealistic assumptions that are definitely not true. I would say if you have a billion miles, and so now you've seen a handful or two of fatalities across a billion miles, which, you know, hey, human drivers have that too, that's just how the roads are.
But if you have enough adverse outcomes, you can build statistical confidence not on the projection of how often, but just, you know, we've had 10, we've had 20, we've had 50 mishaps, and we've been changing software all along, and you could make an [00:05:00] argument that those 10 or 20 or 50 mishaps bake in their change control processes and everything else.
You could make that argument, and it might be a credible argument. But we are so far from having a billion miles where you can start to make those kinds of arguments that it's all moot. You know, the answer is nobody knows how safe these things are going to be. And even the math that gets them to the earliest answer is unrealistically optimistic.
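And once there are observed mishaps over a large exposure, the same kind of bound can be computed directly from the counts. Another minimal sketch, using Phil's hypothetical of a handful of fatalities across a billion miles; the exact (Garwood) Poisson interval via the chi-square distribution is one standard way to do it, and the specific numbers are illustrative only.

```python
from scipy.stats import chi2

def poisson_rate_ci(events: int, miles: float, conf: float = 0.95):
    """Exact (Garwood) confidence interval for an event rate per mile,
    given `events` observed over `miles` of exposure."""
    alpha = 1 - conf
    lower = chi2.ppf(alpha / 2, 2 * events) / (2 * miles) if events > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / (2 * miles)
    return lower, upper

# Hypothetical: 10 fatalities observed over 1 billion miles ("a handful or two").
low, high = poisson_rate_ci(10, 1_000_000_000)
print(f"{low * 1e8:.2f} to {high * 1e8:.2f} fatalities per 100 million miles")  # roughly 0.5 to 1.8
```

With ten fatalities in a billion miles, the interval still straddles the one-per-100-million human benchmark, which is the point: even at that scale the comparison is only beginning to become meaningful, and it says nothing about whether yesterday's software change made things worse.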
So
[00:05:25] Fred: let me, let me divert just a little bit.
[00:05:28] SubSafe Program and Lessons for AVs
[00:05:28] Fred: I've looked into nuclear submarines, and you've got experience in nuclear submarines. There've been about 250 built since the Thresher went down. Okay. And the SubSafe program does a great job of basically quantifying risk at the component level and forcing people to use TRLs for appropriate risk management, and then assembling those into an overall vehicle, which again has a TRL level associated with it because all of the [00:06:00] components do.
And they're running an extensive series of sea trials, and I think that most people would argue that the submarines are pretty safe, despite, you know, the fact that they're ferrying around really explosive stuff. And the evidence so far is that even on this very small statistical base of 250 submarines, they're
well, certainly safe enough for the intended purpose, because the sailors keep going on and using them. So, is there some equivalent approach that could be used or recommended for vehicles to achieve the same level of perceived safety?
[00:06:38] Phil Koopman: So, Fred, you left out the most important part, which is the maintenance and operational practices.
Sorry, my bad. Okay. So, that matters. Okay. So, let me back up to SubSafe. The USS Thresher went down, was submerged, and basically imploded with the loss of all hands back well before [00:07:00] I was in the submarine service. I served on a submarine that was a sister ship; it was the exact same design as USS Thresher. And this is simply based on the declassified info that I've heard; by the way, there's an excellent video that came out a couple of years ago when they declassified the mishap.
I have a webpage of software safety lesson stories that you should know. You can put a pointer out to it, and the Thresher entry points to a video by my old executive officer on my boat, where he FOIA'd all the documents and went through and explained it all. It's well worth listening to.
And the bottom line was that they had brittle pipes, because they were brazing, which is kind of like soldering, instead of welding. And the pipes burst, but they also had single valve isolation, so that if a valve failed, you had a problem. And they had a bunch of engineering techniques that were designed for boats that only went three or four hundred feet [00:08:00] deep, and they were going much deeper.
I think they admit to seven or eight or nine hundred feet, and the Wikipedia page says thirteen hundred feet. That's really deep. There's a lot of pressure that deep. And so the boats just weren't built for the pressure. They carried over legacy design and construction techniques from shallower depths, and they didn't work at deep depths.
So they went back and refitted all the boats, changing how they did the joints, putting in a second layer of isolation valves in case a valve failed, and adding a bunch of procedures on how you run the reactor plant to make sure you can use the reactor to push you up to the surface even if your emergency ballast tank blows don't work, and a bunch of other things.
And that takes you through the sea trials you mentioned, Fred. But the other thing was, any time you touched one of those safety-critical valves, you were pulling out a procedure, and only certified mechanics looked [00:09:00] at it. There had to be someone reading and checking. And for a while, I was the damage control officer.
I'd come by and check and make sure they were following the procedures. You know, you don't just take something like that anywhere. It's like taking your car down to Joe's Bait Shop and Submarine Repair Stand. You don't do that. It's comparable to what they do in aviation, where you have to be a certified mechanic to fix these things, and it ends up in the same place.
And they do have an admirable safety record, but it's not just that. It is also a very, very safety-conscious mindset, extremely safety conscious, and again, in aviation they try to get there, but SubSafe is sort of the leader in all that kind of stuff. It's an impressive program, but you have to really be prepared to spend a lot of time, attention, training, and organizational effort
to get there. It's a really special thing. It's very hard to do that when you don't have that particular culture and resources. And, you know, by the way, the people maintaining this go [00:10:00] down in the submarine, and, like we used to say, you button up, you go down a few hundred feet.
And if you want to freak out a visitor, you just look up and say, you know, there’s a lot of water up there.
[00:10:14] Fred: I watched the World War II movies like a lot of people, and those movies are absolutely terrifying. Absolutely terrifying to be in a submarine with depth charges dropping on you. I can't imagine.
[00:10:27] Phil Koopman: Never, never had that adventure, but The Hunt for Red October is a pretty realistic movie, and I'll leave it at that.
[00:10:32] Anthony: It’s actually a realistic movie and I want to go watch it now again.
[00:10:36] Phil Koopman: One of the guys on it was a shipmate of mine. We spent hundreds of hours taking turns at a periscope. Yeah, it's pretty realistic. No, no, sure. I mean, not the plot, obviously, and not Sean Connery's English accent, not that stuff, but rather the grit and the details and how people act and what they say, you know.
It’s very realistic.
[00:10:58] Public Perception of AV Safety
[00:10:58] Michael: So, kind of bringing us [00:11:00] back to cars here. Even if we have this basis of statistical information that says these cars are "safe," in quotes, does that matter ultimately toward the real issue, which is whether these cars are going to be accepted by normal humans?
I mean, is someone more likely to look at the statistics and have that assuage their fears? It seems like they'd be looking at, you know, the one crash that happened that year; that would be much more persuasive to the average human as to whether these vehicles are safe.
[00:11:44] Phil Koopman: So there's two paths here, Michael. And this is a choose-your-adventure turning point. Okay. One path is, if you want to be technically safe, there's a whole bunch more you need to do beyond statistical safety, things like negligence and risk transfer. Right. So you have to [00:12:00] get all that right.
And the other thing is, ordinary folks don't know, and as a practical matter don't care, about any of that, right? They care about other things. So let's do the second one first, and we can circle back to the first one if we want to. But the second one I think is more here and now, sort of in our face, right?
Because for most of the other parts of safety, until you're at a billion miles or so, you're really not going to know how they turned out anyway. You know, it's all theoretical. What I think is that when you see something in the news, you're going to ask yourself: I'm a human driver. Would I have made that mistake?
And the answer is no. You're going to think the robotaxi is a bad driver. I think it's as simple as that. Now, that is a terrifying statement. Let me unpack why it's terrifying, but it's human nature. You know, a Cruise robotaxi had a woman underneath it. Right? And that's a whole different story, I'm not going to go into details, but a woman was trapped underneath it, [00:13:00] it was at a stop, it decided on its own to start driving with her under the car, despite ample bumps and lurches, and knowing it had just hit her, and it dragged her down the road, and then it stopped because its wheels were spinning too much, and eventually stopped 20 feet later, and even the Cruise-commissioned external report said no human driver would have done that.
And so the scary thing about that mishap, and there are a bunch of issues about allegedly not being straight with the regulators and all those sorts of things, but from a "do people think things are safe" standpoint: if you say, what do you mean, you knew you had just hit someone, and there was good reason to believe they were under the car because your car wasn't even level, you're up on top of them with one wheel, and you decide to drive down the street?
What are you, are you kidding me, right? Right. And saying, but we're seven times statistically safer for low-severity mishaps, so we're safe, you have to give us a pass on this one. [00:14:00] That's not going to work. And every time they see a computer driver doing something that a human never would have done, that's what they're going to focus on.
And I think that’s human nature. Trying to deny that is just not going to make it go away.
[00:14:14] Anthony: Right. So, Phil, the question I have for you is can you define what safety is? Can you define what safety means? It’s not a
[00:14:22] Phil Koopman: Oh, that's actually the other branch, with the seven things. So there's the technical definition, which we can get to, but let me finish up with this public perception version of safety, right?
They're going to say, on a crash-by-crash basis, is that something I, as a typical human driver, would have done? And if the answer is no, then they're going to have a safety concern. Now there are some problems. When I said that was terrifying, why? Well, most drivers think they're above average, right?
We all know this. And that's actually, I looked up a scientific paper at one point, they have data on it. Most people think they're above average. [00:15:00] So I didn't say safer than average, I said safer than they think they are, part one. And part two, Monday-morning quarterbacking is a thing. So it's not only going to be what they would have done, it's going to be what they thought they would have done, and people are very optimistic about that. You know, see last week, where we talked about how people have trouble performing in a crisis because your brain's just fritzing out on you, right?
They're not going to be in the crisis. They're going to say, well, what do you mean you did this? Of course you're supposed to know that the emergency release handle is underneath the mat, and why didn't you pull it up? It's like, well, you've obviously never been in a crisis if you say that, right?
But again, that's from last week. For here: if the robotaxi does something that a human, thinking in a calm moment reading social media, knowing that they're a nearly perfect driver even though they're not, thinks they would have gotten right, then the robotaxi company gets dinged for not being safe.
I'm not saying that's a fair standard; it's [00:16:00] not an objectively reasonable engineering standard. But if you try to deny how people are going to behave, why would you be surprised when they behave that way? It makes no sense to be surprised; this is how people are going to behave.
And if you want to succeed, you have to address that. You can't just say, well, those people are irrational, so I'm just going to ignore them.
[00:16:26] NTSB and Root Cause Analysis
[00:16:26] Fred: Sorry, isn’t that exactly the approach that’s behind the NTSB investigation of aviation accidents? They take the approach that any aviation accident that kills people is unusual and unacceptable.
And, you know, then they do forensic analysis, and that has really contributed to a very safe aviation system. Don't we really need to kind of switch the appreciation of statistics around and say, we're not really interested in the deaths per 100,000,000 miles, we're really interested in the deaths today?[00:17:00]
And we want to understand those thoroughly, so that we can make sure that systematic changes have been made to keep them out of the development system.
[00:17:12] Phil Koopman: Well, so this is just my opinion but I think the self driving car companies are making a huge mistake and have been making it for years, selling on reducing fatality rates.
Because there's only downside, there's no upside. The upside is: wait a billion miles and see how it turns out. The downside is every single headline where people say, I would never have made that mistake. And so selling on saving lives is the stupidest thing they could possibly do, but they keep doubling, tripling, quadrupling down on it.
What you're saying, Fred, I think is the smart move: to say, hey, you know what? We're responsible road users and we're going to track the metrics, but it's going to be a long time before we know about fatalities. So we're going to track the metrics and, look, check it out, our metrics for minor crashes are looking good.
So, so, you know, there’s some basis for trust, but their bigger basis for trust is [00:18:00] when something bad happens, we’re transparent about, yeah, we were 20 percent responsible as opposed, well, the other guy’s 50%, so we’re 0%, right? We were 20 percent responsible and. Here’s why it went wrong, and here’s what we’re doing to fix it, and, and here’s why you should believe us when you, when we say it’ll never happen again, instead of, oh, that’ll never happen again, because go away, stop bothering us, right?
And so they could build a dialogue of public trust by being more transparent about not only the shortcomings but also the fixes, so we believe it. Because what does NTSB do? They have a mandate; I think they're required to investigate, or at least consider, every aviation crash. The automotive crashes are discretionary, but what do they do?
They take a look and they say, is there a lesson to be learned here? So they take car crashes because it’s a lesson, not because it’s a car crash, right? What’s different about this crash than all the other crashes? And if there is nothing different, they’ll, they either, they won’t do much or they’ll say, you know, see lessons one, you know, 73 and 54 from last [00:19:00] year, guess what?
Here they are again. But if they do an investigation, they find a root cause, and some people say, well, they blame the pilot, and they're not reading the reports closely enough, because the root cause isn't the point. The point is: what's the recommendation? If you say, and I'll just make one up that applies generally, the driver was not paying attention because they were watching a movie.
The recommendation is to put in a driver monitor system. So they’re not actually blaming the driver, they’re blaming the company for not having an adequate driver monitoring system by saying that’s what you should be doing. So we talked before about, about self driving cars need to enforce their operational limitations, their operational design domain, and they need to enforce driver attention.
So NTSB might blame the driver for, for not paying attention. Blaming a driver doesn’t fix anything. What fixes something is putting in a decent driver monitoring system. So you have to look at what went wrong. And I don’t, I [00:20:00] kind of don’t care about the blame. It’s not about the blame. How do you prevent the next crash?
That’s really the only thing that matters. You have an autonomous
[00:20:05] Anthony: vehicle. How do you blame the driver?
[00:20:07] Phil Koopman: Like, what do you? Well, they blame the safety driver. Uber ATG blamed the safety driver when they killed that poor pedestrian, right? Or they blame a bicyclist for hiding behind a bus.
Oh, he hid behind the bus. They always find a way. Or, the one that was just most laughable, they blamed the tow truck driver for not putting the pickup he was towing on straight, and therefore it was his fault for improperly towing a pickup truck. Are you kidding me? You know.
And this goes back to the principle. So, okay, this is true. There's a tow truck going down the street with a pickup behind it, towed at a slightly unusual angle, an unusual presentation. Not one but two Waymo robotaxis hit it on the same day, on the same trip. Okay? And they say, well, it's because this guy was not towing it the way we expected.
Well, how is that his fault? [00:21:00] And I'm reading it and I'm like, a human driver would never have said, that tow truck's towing something I don't recognize, so I guess I'm going to hit it, right? I mean, a human driver is just going to find that laughable. Now, that's not quite their reasoning; I'm making a little bit of fun of it, but it's richly deserved in this case, because they should have said: oh, we didn't train on that.
We're going to look at all towed objects of all sorts and make sure we never hit anything being towed again. And we've learned our lesson. And no one was hurt, so, you know, fine. But they didn't do that. They're like, oh, the guy was towing it a weird way, so it's his fault.
[00:21:31] Fred: Well, the other thing I’ve noticed along this vein is that the NTSB takes, oh, two years between the time of an incident and the time it issues a report, after they’ve gone to the trouble of developing root cause or finding root cause, developing recommendations, all the things that you’ve just mentioned.
And yet I see the AV companies turning around a software update in, oh, an hour and a half and saying, well, we fixed that problem, let's move on. Do you have any [00:22:00] knowledge about any of these companies actually getting down to root cause investigations and developing systematic solutions rather than just quick fixes?
[00:22:11] Phil Koopman: Well, that's tricky, because no one really knows if they're not inside. Clearly, if you turn around a software fix in a week, you didn't do a thorough NTSB-type root cause analysis. And maybe that's okay, right? The "maybe that's okay" part is, if there's something that was clearly wrong behavior and you find it and you fix it, okay, fine.
You can't stop there. In safety, if there's a defect that gets out and causes a loss event, you've got two problems. The first is you have to fix that defect so it doesn't do it anymore. Okay, fine. That you can do in a week, maybe, with some testing and whatever. The other question is, well, how did that defect get out there?
What's wrong with our safety engineering process that we let it escape? We have to go back and revisit our safety engineering process, which may go all the way back to the requirements. So, [00:23:00] again, for the Cruise robotaxi, it's easier to talk about them because they're not around anymore. It turns out that that kind of mishap can be fatal to a company, which is another reason companies should pay attention.
They had a blind spot, so to speak, in that they were not capable of modeling a pedestrian being hit by another vehicle. They saw the other vehicle, they saw the pedestrian, the paths intersected, but they did not realize that intersection is an event worthy of notice or modeling. It was just, literally, not a concept, okay?
So after that, sure, after a crash you want to make sure the car doesn't start moving again until a human remote operator checks in to find out what happened. They could say they fixed that pretty quickly, right? But the root cause analysis should find: oh, we don't model pedestrians being hit by other vehicles, and now we have to take as credible that that can result in a person being thrown in front of us. Which I've actually seen happen.
This is a thing. It's not a [00:24:00] one-off. It happens all the time. All right. And so now we have to fix our system so it takes that into account, which is a big engineering effort. Which is what you're talking about, Fred. So you can do a patch quickly. That's not the question. The question is, in the background, are they doing the engineering heavy lifting to go after all the things that contributed to needing that patch?
And we have no idea, because they all say: oh, it's a software patch, it's fixed, go away, we're not going to talk anymore. We don't know what happens after that. Now, companies like Waymo will say they're good actors, and they'll say in general, oh yeah, we go back to our test track and we do a bunch of scenarios, and we do some modeling and so on. They say that generally, but you don't hear them saying it specifically for any particular case.
The other thing
[00:24:40] Fred: I've noticed is that several companies now say they have a safety case analysis, and that's fine, and, you know, they'll wash their hands of it there. You have to ask them what they mean by safety. Well, how are you going to evaluate these?
And what’s the independent authority for, you know, passing and, you know, how’s that going to work? [00:25:00] It’s not enough to say we’re going to build the Statue of Liberty. You’ve got to get into the details, saying where it’s going to be and how it’s going to get done and, you know, who’s going to be in charge.
Well,
[00:25:11] Phil Koopman: fortunately there’s a document that addresses all that stuff. It’s called ANSI UL 4600, as you well know, Fred, because we have talks about the, we’re on, on that committee too. And, and what they’ll say is, well, we have a safety case. But what few, not, not none, that’s changing, few of them will say is it’s a UL 4600 conformant safety case.
And if you can’t say that, we have no idea what you mean. Say, well, it sort of does 4600, well, what does that mean? Who knows, right? And one of the things about having conforming UL 4600 safety case is it requires independent assessment and it talks about what independence means. And so if somebody says I have a safety case, ask them, is it UL 4600 conformant?
Yes. It’s a yes, no question. None of this, well, kind of, sort of. Yes, no. And the answer is yes. Then all those questions you asked are, are covered [00:26:00] interesting discussion, how deeply and how rigorously, but, but if the answer is no, you’re kind of done with the discussion. It means they’re just sort of waving their hands and who knows.
It seems like
[00:26:06] Anthony: they’re just lacking a safety culture in the engineering departments of these companies.
[00:26:13] Phil Koopman: Well, there are good people who are trying really hard to do safety. But can
[00:26:17] Anthony: we read into them, like, before they even get there? Like, you look at a company like Airbus, for example. They’re not just like, hey, we’re just going to throw things out there and try it.
Yeah.
[00:26:28] Phil Koopman: Well, that's where I'm going. So all these companies have good, well-meaning safety people. Some of them are career retreads, but you can get skills that way. So it's hard to say, right? They all have good, well-meaning safety people. The question is whether they're allowed to do their job.
Whether they get overridden, you know, if you get to a company where, where according to public documents, you know, Elon Musk makes the final decision on safety and he can override people who say it’s not safe, you know, that’s going to be a problem. And other companies, you know, so then the other companies have independent [00:27:00] safety advisory boards.
And the question to ask is, so you have an independent safety advisory board. Cool. If they say no, do you still ship? Those kinds of questions don’t get asked and don’t get answered, right?
[00:27:11] Anthony: It's concerning. Is there a, I don't know, before I even ask, is there a way out of this? Because it seems like it's primarily going to be driven by a capital decision if this happens.
So there’ll be a bad enough crash, something large enough where these guys go, Oh, wait a second. Let’s step back. Our approach is probably wrong. We need to focus more on safety. I’m going to go back to that question I asked you.
[00:27:33] Defining Safety in AVs
[00:27:33] Anthony: Can you tell me what safety is?
[00:27:38] Phil Koopman: Yes. Let me answer the implicit question, then we can dive for a while into the other one. Sadly, our regulatory agencies are mostly created in response to having that one bad crash. Like the Grand Canyon crash, I think, was NTSB, and there's always the one high-profile crash, usually involving [00:28:00] someone important politically, or something. For trains, some of the train regulatory reform a hundred years ago was because, if I'm remembering correctly, there was a church picnic where a bunch of people got killed in an unsafe rail car, right?
There's always the galvanizing adverse news event that causes regulatory reform. And sadly, I'm concerned, and I'd prefer it weren't this way, that we're on a path of waiting for that event to happen for the self-driving car industry. Sadly, we've seen some events kill individual companies. And it's a question of whether each company is going to have their event and not survive it, or whether there's going to be something so bad the whole industry goes down.
I would prefer the industry get its act together and avoid that event. But the funding and the lack of regulatory incentives are sending them down a path where it's going to be company by company, or the one bad thing. That's the path we're on. I would love to see that change, but
I haven't seen a [00:29:00] change yet. So that's sad, but that's not just self-driving cars. That's just how technical system safety has always worked, and, you know, it can change. The aircraft industry got it together. They have the FAA and the NTSB, and they actually take safety really seriously.
And when we see the crashes, or we see the Boeing 737 MAX fiasco, it's bad. But what's remarkable is that it's the one thing; it contrasts with the general safety culture of the rest of the industry, as opposed to, well, of course, that's just what happens, right? Which is what we see in automotive, sadly.
The wonderful world of AVs. Yeah.
[00:29:46] Anthony: So, what does safety mean? I’ve asked this. And the response I get is some lawyer gobbledygook bullshit. So hopefully you not being a lawyer.
[00:29:54] Phil Koopman: Well, all
[00:29:58] Anthony: right. So we’re going to
[00:29:59] Phil Koopman: [00:30:00] stay the heck away from the absence of unreasonable risk. Wait,
[00:30:02] Fred: wait, wait, Michael, we love you. Just wanted to make that clear.
[00:30:09] Phil Koopman: I'm going to stay away from that. So, Michael, you can just head out to the picnic and go pick up a package of absinthe of unreasonable risk while I do this, okay? I had to get that in there. All right. Thank you. All right. So let me go down the list. I have like a whole hour keynote that folks can watch.
They can watch me do this regularly, so I'm going to do it informally just to get the ideas out. There's a list of like seven things you have to do to be safe. And number eight is, by the way, people have to think they would not have done better. That's number eight, because that's a non-technical thing.
But there are several technical things. So I’m going to go down to seven, then we can revisit, we can do whatever. I’ll just read them off. One is positive risk balance, which is safer than the human. Number two is avoiding risk transfer. And that’s like dumping your risk onto somebody else, right? Number [00:31:00] three is avoiding negligent driving.
Reckless driving, things like this. Number four is you need to conform to safety standards, otherwise you don’t actually know how safe you are. UL 4600 we just mentioned, that’s what we’re, that’s what I’m talking about, right? Another one is you have to avoid recalls, which are completely different risk and safety model than all the rest.
Recalls are very different. Number six is a bunch of ethical and equity concerns, and number seven is, I think you need sustainable trust. And sustainable trust does sort of get back to whether people believe you're safe; that's where all the squishy stuff ends up.
So how do you want to go at this? I just read off seven things, and a lot of them I think are pretty straightforward,
[00:31:42] Anthony: For our listeners, what do you mean specifically about the "avoid recalls"? Because you said that and Michael's nodding aggressively, and I'm like, I'm lost.
[00:31:50] Phil Koopman: So, you know, let me go down the list so we don’t get too chaotic, but then we’ll stop and talk and then move on. So positive risk balance is safer than human driver, and the problem is, what do you mean by that? And [00:32:00] what are the statistics? And you don’t have a billion miles yet, so, so you’re just guessing and, you know, we already had that discussion, but that’s, and, but if you say, well, we killed twice as many people as a human driven car, I kind of think that’s a non starter.
So, so you have to be as safe as a human on the statistics or else it will catch up with you and be a problem, but it won’t be a problem today. Because it takes so long to find out how that turns out. Well, actually, if it’s really bad, it’ll be a problem sooner rather than later. But it’s going to be a long time until you can prove you’re okay.
So it’s not, they’re selling on that, but it’s actually, you know, it’s all downside, no upside. Number two is risk transfer. Alright, so I’m going to give you a hypothetical, and I stress this is hypothetical, okay? So I’m, people say, but positive risk balance is all you need. What, what do you mean there’s other stuff?
Let me give you a hypothetical to prove that's not true. Let's say that you cut the fatalities by a factor of four, from the 40,000 people who die in the U.S. every year due to car crashes down to 10,000, because every single car is a Level 4 robotaxi, self-driving, [00:33:00] whatever. People are always asleep in the back.
Every single car drives itself. And you go from 40,000 to 10,000. First of all, the 10,000 is still going to be a problem, because why'd they die? But let's set that aside. What if all 10,000 are school kids in a crosswalk with their backpacks, walking to school? That's not going to be okay, right? Because that's more pedestrians than die today, and, by the way, all of them were super vulnerable.
People are going to rebel. Now, I didn’t say it was going to turn out that way. I just illustrated that who dies matters. And so risk transfer is, if half the number of car occupants die, but the number of pedestrians who die doubles, even though the total number of fatalities went down, the fact that pedestrians are paying a proportionate increase in lethality is going to be a problem.
It’s
[00:33:43] Anthony: your A. V. company. I’m going to sell a lot of elephant masks. Because your car will avoid them. Alright, sorry.
[00:33:50] Phil Koopman: We’ll avoid them. Well, no, it’ll just turn over to the human driver who’s not expecting it. And then you can blame the driver. We’re not going there. That would never actually happen, to be clear.
But the idea [00:34:00] is that if you're lowering the total harm but proportionally increasing the harm to a vulnerable population, especially people who are not directly benefiting from the technology, that's going to be a problem. That's another limit on safety. So people ask, well, what's the acceptable risk?
It isn't actually just risk. There's a bunch of constraints, things you're not allowed to do. And when you meet all the things you're not allowed to do, then you can optimize safety further.
[00:34:26] Negligent Driving and Its Consequences
[00:34:26] Phil Koopman: So you’re not allowed to dump risk onto vulnerable populations. That’s just not going to work socially. Number three is avoid negligent driving.
So, again, thought experiment, not saying this would happen, just trying to make the point. Let's say you go from 40,000 fatalities down to 1,000 fatalities. A forty-times increase in safety. Who could not love 40 times safer? But every single fatality is a car blowing through a red light and running down a pedestrian.
Okay? So that's negligent driving, negligence per se. [00:35:00] All right. Negligence per se. If you ran a red light and you killed someone, that's not going to be okay. Breaking the law and harming someone is not going to be okay. And if I'm a human driver, let's say, hypothetically, I drove a billion miles and didn't kill anyone.
Does that give me a free kill? Because, you know, if I kill someone on purpose, I'm still ten times better than average. What's the problem there? And when the robotaxi and self-driving companies, the whole self-driving industry, say, but we're saving lives, why are you so upset at this thing?
The answer is if it’s negligent driving, statistical goodness is not an offsetting consideration. If you broke the law and killed someone, you broke the law and killed someone. I don’t care what you’re driving. I mean, maybe I give you a lighter punishment in recognition. But you’re not allowed to do that on purpose.
Can’t drive negligently. That’s another constraint. So let’s say I can make better time by running stop signs, [00:36:00] but my LIDAR is so good I can make sure that I see everyone so I will never hit anyone running a stop sign. I swear. Then you run a stop sign and hit someone. You don’t get to say, but I saved so many lives, let me off the hook.
It's like, hey dude, you ran a stop sign. We're done. You're not allowed to run stop signs. You're just not. If you don't like it, let's have a nice legislative discussion to change the road rules. But until you do that, you're not allowed to break them. I just love that argument: I've
[00:36:25] Anthony: saved all these lives because I haven’t killed anybody
[00:36:30] Phil Koopman: Yeah. And therefore, and therefore, right? So you can’t, you can’t drive recklessly. You can’t drive negligently.
[00:36:36] Legal Challenges and Accountability
[00:36:36] Phil Koopman: And I know there’s a lot of lawyer stuff there, but we’re not going to, we’re not going down that road. It’s
[00:36:39] Fred: just a question. Isn’t it true that in almost every state where AVs are allowed, there’s no legal mechanism for restricting cars that are going to run a stop sign and kill somebody?
Isn't it? I mean, this isn't just a question of whatever you can get out of a tort litigation, Michael. [00:37:00] I mean, how do you really defend yourself against exactly that, an AV running a stop sign and killing you? What is the defense?
[00:37:11] Phil Koopman: Well, you have to sue whoever is responsible. You have to sue whoever is responsible, which is usually not the manufacturer. Michael, what? And in one state it's the computer itself, which has no resources and which you can't really sue.
[00:37:24] Fred: So the laws are set up basically so that you’re.
You’re shit out of luck.
[00:37:32] Michael: Well, I mean, we, we saw, you know, we’ve seen recently California attempting to address, and we’ve discussed this a couple of times, attempting to address the problem of issuing citations to autonomous vehicles, which they’re trying to establish a way of. Traffic citation approach to enforcement over autonomous vehicles.
So if there is a negligent act, you would assume that the autonomous vehicle would be ticketed or [00:38:00] be shown to be in violation the same way a human driver would be under those circumstances. And that violation would then allow for someone to, you know, attribute negligence to the vehicle. So that’s not something that every state has.
But it’s something that, that certainly needs to be in place,
[00:38:20] Phil Koopman: but that, that accountability is still a mess. And it’s going to take years to work out because all of it, there’s a bunch of laws that make it hard to get at the manufacturer to hold them accountable. And that’s presumably the intent, the intent of having written laws that way.
[00:38:33] Anthony: I think it's suspicious that you're walking through intersections at night. What are you doing, Fred? What are you out there doing, huh? Who are you interacting with? Who are your friends? What are you trying to buy?
[00:38:42] Fred: Did
[00:38:42] Anthony: Harry tell you about avoiding a tank? No. Okay. We’re not going to get in your avoiding a tank story.
All right. Continue. What do we got?
[00:38:51] Safety Standards and Industry Practices
[00:38:51] Phil Koopman: Okay, so safety standards conformance: if you're not following your own voluntary, industry consensus safety standards, what are you even thinking? But, [00:39:00] unlike all the other industries I've worked with, the car industry doesn't have to follow their own computer safety standards. The supplier chain does a pretty good job, but the manufacturers pretty much don't, and some of the self-driving car companies of various types are just outright disdainful of the standards.
It's like, yeah, we took a look, we're a special snowflake, it's not for us. A couple have started saying, no, no, we're going to conform to UL 4600, and I really applaud that. But the industry still has a long way to go on following standards. Now, why is that? Well, if you have to wait a billion miles or 10 billion to see how it turns out,
what do you do? Just do whatever and see how many people we kill, and tell you how it turns out later? I mean, you have to do better than that. Now, the companies will say, well, we follow our own rigorous engineering processes. I don't know what those are. In the past, when I looked at conventional car companies following their engineering processes, the processes were laughable.
So maybe it's better here. It's hard to say, but there's no accountability there. They're going to do [00:40:00] what they're going to do. We're in a hurry, we're out of runway, we're going to make our decisions, we're so smart, it'll be fine, trust us, right? Well, instead of that, you could follow your own safety standards.
Now, it doesn’t make guarantees, and certainly you can cheat and, and, and play games with safety standards. But if you at least say you’re following the safety standards, then it makes you pay attention to things you’re supposed to pay attention to. And it gives more accountability later, where if there is someone who’s harmed, they can say, well, you said you followed your safety standard, and here’s where you didn’t.
Is there a difference? I mean, if you don't follow the safety standards, I don't know how you... Is the difference, when
[00:40:36] Anthony: you see this with companies, is it companies that are more established versus newer startup ones? Because I’m just thinking from my own software experience, like large established companies, like before we kicked off a project, I remember sitting, we had the first day was like, this is how we’re going to name classes.
This is how we’re going to name methods. This is how, I mean, it was an entire day of just like, here’s the instruction manual. Whereas startups are just like, Hey, I did something. [00:41:00]
[00:41:00] Phil Koopman: Typically, the startups go, we've just got to get it working and we're going to do safety later. And there's tremendous financial and deadline pressure.
And I'm not happy with that, but it is what it is. We can talk about the effect of safety drivers; that used to be more of a problem, and today it's not, that's not a fight worth fighting until they take the safety driver out. But when they take the safety driver out, they may be doing it because they're about out of money or they have a deadline, and what they'll say is: we have our own rigorous safety standards, because we have a bunch of smart people.
We read all the safety standards, we cooked up our own version, and it'll be fine. And I don't want to judge that. Let me do this a different way. Okay. You should expect to get what you incentivize. And if you incentivize meeting deadlines over safety, guess what you're going to get. If you incentivize getting on the road before you run out of money, to get the next round of funding, rather than conformance with safety standards being a [00:42:00] gate, you're going to get a safety standard that they in their hearts may think, yeah, that'll probably be good enough, although most of them don't have enough safety experience and training to really know that, and CEOs who are like, yeah, you know, I'm willing to roll the dice.
Right, that, because that’s what you’re incentivizing, so why wouldn’t you get that? Okay, so that’s, you know, they should be following their safety standards. There is varying conformance to safety standards, and the OEMs have the same issue. Their supply chains are usually pretty good at it, but the OEMs don’t necessarily follow them either and so there’s a lot of messiness there.
In Europe, they're just getting around to it. In Europe, they're not actually required to conform to ISO 26262, which is the conventional safety standard; it's been out for more than a decade. They're starting to get there, well, the assessors have to be trained in it, but that's not the same as conformance.
So, so this is an issue for the traditional car industry as well. It’s going to be decades before we get there, but the, the self driving car industry is, is, it’s way worse. Okay.
[00:42:59] The Role of Recalls in Vehicle Safety
[00:42:59] Phil Koopman: So [00:43:00] the next one is recalls, right? So here's the thing. NHTSA, the National Highway Traffic Safety Administration, who's the regulator for ground vehicles, does not, and I'm going to generalize here, right?
You don't see recalls from NHTSA based on "not as safe as a human driver." You don't see that. You don't even see recalls based on a high crash rate. Because if you look at, to the degree insurance data is predictive of safety, which is only weakly predictive, but if you see a car with ten times the crash rate of another car, now some of that could be the driver, you know, it's a hot rod, so drivers are acting unsafely.
I get all that, right? But for sure, not every car is the same safety. For sure! But they don't recall on that. They recall on: here's a dangerous behavior this particular car displays, or here's a component that tends to fail that leads to danger, go fix it. So they're recalling for, like, spikes in the risk profile.
See that [00:44:00] spike? Beat down that spike. That’s a recall. So we’re going to recall because you’re running stop signs on purpose. Not because, or we might recall, kind of, sort of, because we think your drivers aren’t paying attention. But even that one’s squishy, because it has to be something that’s specific, that’s fixable in a cost effective way, that people know how to fix, you know, there’s all these requirements.
So, you could have a car that is saving lives. Every other criterion I mentioned, it aces every single one of them, but, you know, it just keeps running through stop signs, even though it never hits anyone. It keeps running through stop signs, and NHTSA has in the past done a recall for running through stop signs, because they just think that particular behavior presents unreasonable risk.
So all of these are: yeah, you got the average fatality rate down, but you have these lumps and spikes and concentrations of risk. It isn't all flat, there are these bumps, and every one of these [00:45:00] is a different take on: there's a bump, and you're not allowed to have bumps.
So the average has to go down, or at least be no worse, but you're also not allowed to have identifiable clusters of risk, and recalls are one of those. And recalls, it says "safety recall" right on the label, right? Right on the tin, it says safety recall. So to be safe, you have to have very few, if not zero, safety recalls.
And for sure, once there’s a recall, you aren’t safe. By definition, you aren’t safe until you fix it, until the remedy
[00:45:28] Anthony: comes out. Now, I understand that one. That was, I was like, wait, are you turning? Okay.
[00:45:32] Phil Koopman: Got it. But, but you can, you can have a vehicle that is objectively safe at a statistical basis and still full of recalls.
There’s not, there’s no contradiction there, right? Yeah, I think there’s
[00:45:42] Anthony: one company that’s their advertising motto.
[00:45:43] Phil Koopman: We’re not going there.
[00:45:44] Ethical and Equity Concerns in Autonomous Vehicles
[00:45:44] Phil Koopman: Okay, so, number six is ethical and equity concerns. And I'm not talking about the trolley problem. Thank you. We're not going there. Everyone says trolley problem; nope, nope, go away. That's not what I care about. But there are a bunch of ethical and equity [00:46:00] things; some of the risk transfer is equity.
You know, you're saving lives, but it's only the lives of the people who are paying, and you're actually putting other people at more risk, right? But also, the companies are saying, well, you know, you should adopt robotaxis because they'll bring food to food deserts. You know, the ride-hail guys said that.
It didn't really happen, because those people can't afford the services, right? So there are a bunch of promises being made that there's no reason to believe will really come true unless there's regulatory pressure from the PUC, and that's not happening. But there's also: who gets to decide when it's time to go on the road?
It's the people who are worried about making money, not the people who are the test subjects sharing the road with these things. So there's a bunch of ethical and equity questions, and that gets to be a long conversation, but there are some things that don't really look like all the others.
Things like: are you blocking fire trucks? You have negative externalities on your risk. Blocking fire [00:47:00] trucks doesn't show up in any of the typical risk calculations, right, but it's clearly a social issue. So there's a bunch of complexity there. And the last one is sustainable trust. And that is, I guess, when I created this list I hadn't thought of it this way, but this comes back to: if people think you're dangerous, it doesn't matter what the statistics say. If they think it's a problem, you have to get out in front of that and be trustworthy, to the point that, yeah, there was a crash, but we told you there were going to be crashes, and here's the reason it's okay, and we told you all these things before the crash; we're not just making it up to cover ourselves.
So that’s my list. You have to do all seven of those
[00:47:36] Anthony: things if you want to be safe. That's a great list. I like that. I think every single company struggles specifically with that last one, because they're all saying, we're safer. Oh, that thing that went wrong?
[00:47:46] Phil Koopman: Well, they try to define it as narrowly as they can, to make it easy to measure and easy to tell a story.
Look, we're so many times safer. And in the long run that actually matters, to be clear, it matters, but in the short run, it's the least important [00:48:00] metric.
[00:48:00] Anthony: That’s interesting, but hey, I’m going to switch. This is my transition. Isn’t AI just going to solve all of these problems?
[00:48:08] Michael: Well, did we
I was going to ask, are we going to get to using all those seven things? How is it that we can put unregulated cybercabs on the road? Since you promised everyone we were going to talk about that.
[00:48:20] Phil Koopman: Oh, yes. All right. So let me do that one quickly. You know, all these things, it's a long rant on my Substack that people are welcome to read.
So I’ll do the quick one. All right. Okay.
[00:48:32] Unregulated Robo Taxis: A Cautionary Tale
[00:48:32] Phil Koopman: Now, it's not Halloween, but for all the listeners, pretend that I have a flashlight pointed up at my face and I'm telling a scary campfire story. Okay. But this is a way that, if someone wanted to put unregulated robotaxis on the road and evade all regulation, there's a clear and obvious path to do it.
And, and if you want to fault me for telling them how to beat the system, I have to assume anyone who wants to do this is at least as smart as I am [00:49:00] and would have figured it out on their own. So this is more a cautionary tale so that other people, when they see it happening, know.
[00:49:06] Anthony: So,
[00:49:08] Phil Koopman: here’s the first one and this one people say, well, that’s unrealistic, but you have to understand the first step is not a means to the end.
The first step is a step on a path, not the end, right? Okay. The biggest thing keeping robotaxis without steering wheels off the road is the Federal Motor Vehicle Safety Standards, which indirectly, not on purpose, but indirectly require a steering wheel and some other things, right? Okay. Yep. Why?
Because you have to measure how far you are from the steering wheel, and if there’s no steering wheel, you can’t measure it. Alright, so what we’re going to do is, we’re going to replace the steering wheel in the vehicle with a touchscreen, which is already there, right? We’re going to put a steering wheel on the touchscreen, and we’re going to say that’s the steering wheel.
Okay, that’s the steering wheel. There’s a steering wheel, it’s right there, see, you can see it, it’s that round thing on the screen, that’s the steering wheel. Okay? [00:50:00] And none of this has to be 100 percent. Oh, but, oh, but: it doesn’t matter. All they have to do is just assert it as truth.
And now the burden is on regulators to prove that they’re wrong. So they’re flipping the burden. It’s just, we’re going to say it, and they’re going to have to deal with it. So this picture of a steering wheel is on the touchscreen, and you can actually put your finger on the steering wheel and move it.
And the steering wheel moves and it steers the car. So it really is a steering wheel. It really is. Okay. So, there we go. If you have a car that drives itself 99 percent of the time, and you only need to drive it 1 percent of the time in benign situations, maybe that steering wheel is actually viable.
Maybe that is actually a safe car. I don’t know, assuming the other stuff is safe, right? So okay, now we have a car with a picture of a steering wheel. Now you can self-certify that you’ve conformed to FMVSS, because you measured the distance to the steering wheel; it’s right there. You actually can do self-certified FMVSS with no exemption.
So people say, oh, they’re going to do exemptions for robotaxis. [00:51:00] Well, those don’t help a big fleet, because you only get so many exemptions, right? But if you self-certify FMVSS, you don’t need exemptions. You can do as many as you want. Who would self-certify to FMVSS without a steering wheel? It turns out Zoox already did that.
So I’m not making this up; somebody already did this. The touchscreen steering wheel is the unique part. Every other step in here, people have already done, right? So you put the sticker on: we’re certified FMVSS, we’re a robotaxi without a steering wheel. And I don’t know what Zoox’s rationale for doing theirs was.
And that’s being investigated; in however many years, we’re going to find out how that turned out. But I’ll bet if you put a picture of a steering wheel in, it’s pretty straightforward to self-certify. Okay? All right, so now, I’m just pointing out this is a path you could take, right? So you’re self-certified FMVSS.
That means you can go on the road. And by the way, you’re going to add self-driving, okay? You’re going to put in whatever self-driving you already have that works most of the time, but not all the time. Now, to our previous discussion, you want to claim you’re level 2, [00:52:00] so you’re not regulated, right? The first regulatory hurdle is FMVSS, but you’re just going to self-certify to the picture of a steering wheel.
And the second one is, you can’t be level 3 or you’re regulated. No problem, we’re level 2. Why are we level 2? Well, because there’s a remote safety driver. And at first, the remote safety driver actually has a steering wheel, and they have a brake, and they have an accelerator, and they’re watching the entire time for what happens.
Okay? Now, this is literally what they’re doing in China; that’s how the robotaxis work there. But then over time, as you gain confidence, you can say, you know what, he doesn’t need a real steering wheel, he needs a picture of a steering wheel. So you give the remote driver a picture of a steering wheel and a button for brakes and accelerator.
And eventually you say, you know what, that’s it, we’re just going to give him a big red button. He presses the red button, the thing comes to a stop. It’s like, well, you know, boy, that costs a lot. Tell you what, we’re going to give that remote driver two cars to monitor, four cars to monitor, a hundred cars to monitor, and all he has to do is [00:53:00] press the red button when he sees a problem, because we have so much confidence in our technology.
But we’re going to claim it’s level two, because that remote operator is responsible for completing the object and event detection and response subtask, so that makes it level two.
[00:53:13] Anthony: Perfect.
[00:53:14] Phil Koopman: Okay. And when there’s a hundred red buttons, what’s really going on is the car’s going to ask for help almost all the time, and the guy’s just there to take the blame for not pressing the red button when he should.
Alright? So now, if you’re a rider and you get into this, it’s a robotaxi, right? For all practical purposes, you get in and there’s no steering wheel. They took away the picture of the steering wheel, because for certification it’s there, but when it’s under remote control the picture of the steering wheel is shown somewhere else, so, you know, it’s fine.
During FMVSS testing the picture’s there, so you pass the test, and in service you don’t show that picture; you’re showing them a movie or the trip information or whatever, and there’s a little tiny picture of the steering wheel in the corner, so it still meets FMVSS, whatever. Okay, so you’re gaming that. And that [00:54:00] steering wheel had better be disabled when there’s a non-licensed driver in there, but it’s there.
Okay, so, for all practical purposes, you’re running a robotaxi, and there’s a remote operator who’s there to accept blame for not pressing the red button when they should. And probably they won’t. But, you know, if you believe in your heart that your stuff is safe, then you may morally justify to yourself that this is all okay.
And then you put it on a ride-hail service. You put it on an Uber or Lyft, and you say, but it’s a level 2, remotely operated vehicle. What’s the problem? I see no problems. So that’s how you do an unregulated robotaxi without actually violating the letter of the law. You’re badly bending some things, but you’re not violating the letter. And if you’re a company that’s interested in getting an IPO or a near-term stock price, you do this, and it will be years, literally years in our current political environment, maybe four-plus years, before the chickens come home to roost.[00:55:00]
That’s my formula, that’s my formula.
[00:55:03] Anthony: My remote operators are in Pyongyang, so, you know, good luck suing them. They’re outside of the law.
[00:55:10] Phil Koopman: Cuba works, Cuba works, and it’s a shorter haul for the fire department.
[00:55:13] Anthony: Oh yeah, well, that’d be, you know, that’s my 24/7.
[00:55:16] Phil Koopman: Venezuela, you know, pick your favorite non-extradition country. But even if they’re outside the U.S., how do you do a field sobriety test on someone who’s in another time zone, or another country, or a non-extradition country? And what if those drivers aren’t licensed? How do you even check their licenses? And in some states, I’ll bet they don’t have to be licensed.
[00:55:34] Anthony: And with the latency issues, everything will be perfect and safe.
I like the fact that you put in that there’s going to be a big red button, because I think we’ve been asking for the big red button in these cars. I didn’t say that passengers can have access to it. Oh, right. It’s just my guy in Caracas.
[00:55:56] Phil Koopman: So what’s the point of telling this very scary story? It’s that the [00:56:00] robotaxi companies are all saying, oh, we can’t deploy because of regulations. That’s utter nonsense.
They can deploy tomorrow if they want to. They’re just using that as a dodge because they’re not ready yet. But the other side of that same coin is, they can deploy when they think they’re ready, and the regulators really don’t have much to say about it, other than NHTSA can do recalls after the crashes have happened.
So the only checks and balances we have in our system right now are that NHTSA can do recalls, and they’re going to be challenged to do that. In the new administration, it’s going to be even more challenging to make any recall stick. We’ve even seen, they did a recall on Tesla and Tesla just sort of blew them off, right?
We already saw that with Autopilot, and that was under the old administration. So recalls are a pretty blunt weapon. And the other thing you have is lawsuits. And lawsuits take years, and they’re super expensive with this technology, and to the degree they rely on product liability claims, they’re very, very challenging to launch.
Really hard. For an [00:57:00] individual who’s been harmed, a product liability lawsuit is effectively out of reach in almost all cases. So there’s not a lot of checks and balances left. Michael, I invite you to chime in, because I just went into lawyer land.
[00:57:13] Michael: Yeah, it’s something we talked about previously, I believe when you and Bill were on, when we talked about the problem with proving that an AI is at fault.
I mean, what kind of expert witnesses can you get to do that? And is it even possible, sometimes, to prove whether or not the AI is at fault?
[00:57:37] Phil Koopman: And you can prove it behaved badly, but it might be that it’s working exactly as designed and there is no design defect; just once in a while, it does stuff like that.
And that’s the state of the art. And then what do you do with that?
[00:57:47] Fred: Well, there’s another problem, which is that the AI processes are inherently opaque to human beings. So the only thing you can do to figure out the relationship between an A input and a B output [00:58:00] is a statistical test: just run it over and over again, you know, until you develop statistical confidence that it’s operating the way you want it to.
There’s no way to linearly trace the input to the output.
[00:58:16] Phil Koopman: And the argument you’re going to have is: it stops at red lights 99.8 percent of the time. You can’t prove it’s 100, and it’s probably not 100, because it’s statistical. And then the counterargument is, well, people don’t always stop at red lights, so what’s your problem?
And the problem is that we were promised it would be better than people, and it’s going to make the same mistakes people make, just for different reasons.
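To make that statistics argument concrete, here is a minimal sketch in Python (illustrative only; the function names and trial counts are invented for this example, and the red-light numbers are hypothetical, not anyone’s real data). It shows what the run-it-over-and-over testing Fred describes can and cannot support: zero failures in n trials only bounds the failure rate at roughly 3/n, at 95 percent confidence.

import math

def rule_of_three_upper_bound(n_clean_trials):
    # With zero failures observed in n independent trials, an approximate 95%
    # upper confidence bound on the true failure rate is about 3 / n.
    return 3.0 / n_clean_trials

def clean_trials_needed(max_failure_rate, confidence=0.95):
    # Failure-free trials required before you can claim, at the given
    # confidence level, that the failure rate is below max_failure_rate.
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - max_failure_rate))

# 1,000 perfect red-light approaches only support a violation rate below about
# 0.3 percent; that cannot distinguish "99.8 percent reliable" from "perfect."
print(rule_of_three_upper_bound(1_000))   # 0.003
# Claiming "fails less than 1 time in 100,000" takes roughly 300,000
# consecutive clean trials.
print(clean_trials_needed(1e-5))          # 299572

The exact numbers matter less than the scaling: the rarer the event you care about, the more failure-free exposure a purely statistical argument needs.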
[00:58:40] Fred: So welcome to the future, listeners, and in the future, everything will be better. We’re looking forward to that.
[00:58:46] Anthony: But yeah, this is what I was saying: AI is going to solve all of these problems.
I mean, this is what’s going to happen. Actually, I don’t know who it was, it might have been the heads of two different AI companies, saying, hey, we’ve pulled in all the data on the [00:59:00] internet and we ran out of data to train models on. And it just points to the fact that, wow, this is a lot more artificial than intelligent here.
I mean...
[00:59:10] Fred: Well, and then what we’re seeing is they want to cascade the AI solutions, right? So there’s now somebody who wants to put an AI filter in between the cameras and the AI system that’s figuring out how to drive the car. There’s a tire company that wants to put an AI engine in place, you know, and what could possibly go wrong?
And we haven’t really begun to confront the problems that introducing AI introduces into a safety chain or a safety design. And now we’re talking about cascading them three- and four-fold in an individual vehicle as part of the overall operation. So what could possibly go wrong?
[00:59:56] The Challenges of AI in Ensuring Safety
[00:59:56] Phil Koopman: Well, we’re getting towards the end, but there’s one important thing about AI I want to say, which is [01:00:00] that there’s a fundamental reason why this technology is so hard to get safe, right?
It’s very fundamental, and it’s not going away, and it has to do with what Fred was talking about. Anything AI, and by AI I mean machine learning, as opposed to the previous generations of AI I’ve used over the decades, right? Machine learning is statistical analysis. So statistics are really great for typical, usual stuff, even stuff that’s a bit off, as long as it sits in the training data.
Yeah, if there’s a hundred examples in the training data, it’ll figure it out, right? I’m just using a number. Okay. Machine learning, and statistics in particular, are bad at extremely rare events. And you can do things with extremely rare events under a bunch of assumptions that are simply not true of the real world.
You know, Gaussian distributions? Please, that’s not what you’re going to see, okay? And so, in fact, the challenge is long-tail, in particular heavy-tail, distributions, where there are a large number of rare events, but it’s always a different one. You know, it’s always something; it’s not one thing, it’s another, okay?
That’s [01:01:00] the heavy tail, and statistical approaches are bad at it. Okay, so that’s part one. Part two is safety. What’s safety about? Well, if you can’t drive three blocks without hitting something, you’re not even on the road. That’s not safety. I mean, it is safety, but that isn’t what we worry about. Safety with a competently engineered system always boils down to very, very rare events with extremely high consequence.
Okay, so airplane crashes: they basically don’t happen, but we pay attention because maybe a hundred people die, okay? So safety in a well-designed system is always rare events with high consequence. So you have machine learning, which is the worst at rare events, and safety, which only cares about rare events.
And you’re going to smush them together, and that’s going to be a problem. So, in a nutshell, that’s why safety is so hard: safety is all about rare events, and that’s the thing machine learning is worst at.
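As a purely illustrative sketch of that mismatch (invented distributions, not real driving data; the Gaussian and the Pareto here are just stand-ins for light-tailed versus heavy-tailed behavior): the worst case you observe from a light-tailed quantity barely moves as you collect ten times more data, while the worst case from a heavy-tailed one keeps growing, so no finite data set covers the tail.

import random

random.seed(0)

def worst_case(sample, n):
    # Largest value seen across n independent draws from `sample`.
    return max(sample() for _ in range(n))

light_tail = lambda: random.gauss(0.0, 1.0)      # Gaussian: tail falls off fast
heavy_tail = lambda: random.paretovariate(1.1)   # Pareto(1.1): very heavy tail

for n in (1_000, 10_000, 100_000, 1_000_000):
    print(f"n={n:>9}  gaussian max ~ {worst_case(light_tail, n):5.2f}"
          f"   pareto max ~ {worst_case(heavy_tail, n):14.1f}")
# Typical run: the Gaussian maximum creeps from roughly 3 to 5, while the
# Pareto maximum grows by orders of magnitude as n grows.

If the consequences that matter live out in that second kind of tail, piling on more of the same data improves the average behavior long before it says anything trustworthy about the extremes.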
[01:01:57] Anthony: The example you gave earlier of the Waymo not being able to [01:02:00] identify a car being towed, because it didn’t match, you know, something in that model.
[01:02:06] Phil Koopman: It wasn’t in the training data. So, people talk about edge cases. The definition I like for an edge case: something that was not in the training data, but matters. Right? It wasn’t in the training data. It’s a rare event, high consequence, and they hit it. Right. And there’s, for practical purposes, an infinite number of those.
And it is very challenging to get those to happen infrequently enough that you’re willing to suck up the residual risk. That is the story of the self-driving car industry for the last decade, and it’s going to be the story for the next decade.
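One way to put a number on “there is always another one” is the Good-Turing estimate: the chance that the next thing you encounter is a type you have never seen before is roughly the number of types seen exactly once divided by the total number of observations. Here is a minimal sketch under invented assumptions (a synthetic Zipf-like catalog of “situation types,” with made-up sizes and exponents, standing in for the real world):

import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

# Hypothetical world: a million possible situation types with Zipf-like
# frequencies, a few common ones and a vast tail of rare ones.
N_TYPES = 1_000_000
weights = 1.0 / np.arange(1, N_TYPES + 1) ** 1.1
probs = weights / weights.sum()

def prob_next_is_novel(n_observations):
    # Good-Turing estimate: P(next observation is a never-before-seen type)
    # is approximately (# types observed exactly once) / (total observations).
    observed = rng.choice(N_TYPES, size=n_observations, p=probs)
    counts = Counter(observed.tolist())
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / n_observations

for n in (10_000, 100_000, 1_000_000):
    print(f"after {n:>9,} observations, P(next is novel) ~ {prob_next_is_novel(n):.2f}")
# Typical run: the estimate falls only slowly as the data grows; even after a
# million observations, a noticeable fraction of what comes next is new.

The specific numbers are arbitrary, but the shape of the result is the point: more data keeps shrinking the novelty rate, it just never gets it anywhere near zero on a practical budget.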
[01:02:36] Anthony: Yeah, I think their AI approach is just incorrect.
I mean, someone needs to create an actual heuristic model that is similar to the brain.
[01:02:43] Phil Koopman: Well, they think if you pour in enough tens of billions of dollars, you can chase down that heavy tail and get enough of it that you’re okay. And so far they haven’t really gotten there. You know, Waymo says they’re there, but we know from our discussion there’s a lot more to it than that.
So Waymo’s busy chasing down that heavy tail by [01:03:00] brute-force application of money. They think they’re going to get there. I’m skeptical, because that heavy tail’s a real bear. Come back in five years and we’ll know how it turned out.
[01:03:09] Anthony: Yeah, and I think it’s even, like, you could ingest all the data that’s on the internet.
And have all of that, and you still can’t get this right; I think there’s a problem with your approach. Because there’s no human that can come close to that, but we can all identify a car being towed, at any angle, with any light of the sun hitting it.
[01:03:24] Phil Koopman: So machine learning does not impart common sense. A very simple heuristic model.
[01:03:30] Anthony: I know they’ve been talking about that forever, and that’s kind of, Hey, we wanna just do this. But it turns out that’s really, really, really hard.
[01:03:38] Phil Koopman: Machine learning doesn’t give you the ability to have common sense. There’s a whole different discussion there, but it boils down to this: until you have common sense, you’re going to struggle with rare events.
And,
[01:03:48] Anthony: And I want to stress for listeners, a rare event is something that, you know, a toddler can handle on a day-to-day basis.
[01:03:58] Phil Koopman: People aren’t perfect, but they’re [01:04:00] really impressive at rare, unusual events. I have my own list of personal stories; you know, if I weren’t good at handling rare events, I wouldn’t be around.
I’d be dead. Okay. And everyone has their stories, right?
[01:04:11] Closing Thoughts and Nature Stories
[01:04:11] Anthony: Ah, well, hey, Phil, thanks for sitting with us for so long. This has been great. I mean, now I want to bring you back for more in-depth AI stuff. Mainly I just want to talk about submarines, but that has nothing to do with this podcast. Like, are you guys out there cleaning the sub?
Like, is it covered in barnacles and stuff? Anyway, that has nothing to do with this show. Thank you so much for coming. Thank you, listeners. We’ll be back next week.
[01:04:35] Fred: Thank you, Phil. And for everybody who might have found this a little bit grim, I want you to know that right now, as we speak, there’s a bluebird eating at my feeder in the middle of January.
Makes me very happy.
[01:04:49] Michael: There’s a fox sleeping, I’ve got a fox sleeping in my backyard. Who else has nature stories now?
[01:04:56] Phil Koopman: I have a bird feeder. That’s about it. But [01:05:00] thank you so much for having me. I really appreciate it. Bye.
[01:05:01] Fred: Bye. For more information, visit www.autosafety.org.