Automated Vehicle Expert Phillip Koopman

AV expert and Carnegie Mellon professor Phil Koopman joins us to discuss his new book, “How Safe Is Safe Enough?: Measuring and Predicting Autonomous Vehicle Safety.” Plus, the Tao of Fred enlightens us, and puts Anthony to sleep, explaining Minimal Risk Condition.

Subscribe using your favorite podcast service:

Transcript

note: this is a machine-generated transcript and may not be completely accurate. It is provided for convenience and should not be used for attribution.

Anthony: This week we have a special guest, Phil Koopman, author of How Safe Is Safe Enough?

He is a former submarine officer and an internationally recognized expert on autonomous vehicles, with over 25 years of experience in the field, and he is an associate professor of electrical and computer engineering at Carnegie Mellon. I know that's the one-minute introduction of your background.

If there’s anything you want to add, please feel free.

Phillip Koopman: Hi everyone. I think the one thing I’d add more specifically is I’ve been doing self-driving car safety, or autonomous vehicle safety as we call it now, for more than 25 years. So I’ve been at this game a long time. Wow.

Anthony: So we read your book, How Safe Is Safe Enough?, and before you joined us, Michael and I were talking briefly about how it was the first academic-type book that we've read in a long time that was, like, wow, this was really engaging, and it didn't have a stiff academic tone.

It was completely accessible. It was great. It was really fascinating. So we really enjoyed that.

Phillip Koopman: Thanks for that. I cheated a bit. The tone is definitely not academic. It's come for the heavy, weighty text and stay for the snarky footnotes.

But I wanted to make it engaging cuz the audience is not other academics. The audience is a lot of other folks.

Anthony: And so I guess my first question is, I've been told my self-driving car is ready to go, and it's been ready to go for the last decade. That's not true.

Is it? How far off is this?

Phillip Koopman: Nobody really knows. That's the truth. First of all, let me unpack that a little bit. Since I am an academic, I'll sometimes slice apart some words, but it turns out words matter in these things. Self-driving means everything and nothing all at once.

And the marketing has just rubber-banded it around, so it's hard to say what it means; it depends on the context. The book's title uses autonomous vehicles, because that actually has a pretty well-defined meaning in safety standards. And so to begin with, the scope is cars that drive themselves, or as some say, you can go to sleep in the back and wake up every single time still alive, right?

That's right, that's the kind of target. And by the way, when I said snarky, it's stuff like this, right?

Anthony: So that's what I'm looking forward to. I can hail a driverless car, fall asleep in it, wake up hours later at my destination, fully bathed, dressed, shaved, ready.

Phillip Koopman: I don't know about the end parts, but you can sleep and wake up every single time.

And those cars are out there now, they're finally out there, but they're incredibly limited, and people say it takes more people to operate one of them than a regular taxi because of all the prep and the remote support and everything else going on.

So there are very few places in the world where you can get in a car where there's nobody in the front seat and there's nobody remotely driving it using a steering wheel and a cell phone data link, right? It actually is driving itself, but it's in extremely limited geographic areas. It's in very limited conditions. It still struggles a lot. It still needs to phone home for help.

And there are service vehicles, chase cars out there, so if something goes wrong, somebody shows up to fix the problem. So it's a very limited, very early trial-phase kind of thing, and that's where we are after 20 or 25 years. It's been that long since this stuff has been going around in the US. It's taken a long time.

Anthony: We were talking about the example last week with the GM Cruise in San Francisco, where I think it was available for one day only, from the hours of 10:00 PM to 6:00 AM, if the weather conditions were absolutely perfect: there was no fog in San Francisco. Not sure how that's possible.

Phillip Koopman: I've lived in the Bay Area. Maybe at night, but not in the morning, for sure.

Anthony: But it failed because a driver at the intersection across from it decided to juke and go left and then right and whatnot, and the car just gave up on itself. It's absolutely fascinating. I can't understand who would actually hail a driverless cab at this point.

Phillip Koopman: People love the novelty. Yeah, there's that, and okay, fine. And this technology might, I use the word might, this is important, someday improve road safety and provide a lot of equity benefits and all those things. The promises are there, but the technology is not there yet. It's getting to the point where you can have these pilot deployments, you can have these demonstrations.

They're all basically extended demonstrations, proofs of concept, and that's great. But people who think, oh yeah, next year it'll all be solved: it's not gonna happen next year. As I mentioned before, it's been 25 years.

So in 1995, a Carnegie Mellon University vehicle, and this is just before I got involved, right after this, went from Washington, DC to San Diego, 98% hands off the wheel, in 1995, on highways. Okay? And it was basically following the lines in the road. And when there weren't lines, it turned out you could follow the wear marks from previous vehicles in the concrete, or the oil stains or whatever. It was just saying, hey look, there's something that looks like a straight line on the road, let's follow it.

And the times it got into trouble: it was fond of taking off-ramps, cuz the lines on the off-ramps were less worn than the lines on the main road. Okay, that makes sense. And in bright sunlight, when you ran under a bridge, the auto-contrast went nuts and it just didn't know what was going on until it recovered.

Those are the main failure modes, and it was great stuff, but very limited. And just to be fair, it was steering only, but steering's the hard part. Adaptive cruise control is no big deal. It's steering, that's the hard part. But that was in '95, and ever since, we've been working on that last 2%, and it's really hard.

It turns out it's really hard. So when people say, oh yeah, we're still struggling this year, but next year for sure; from one car company we've heard that every year for many years, right? But it's just unrealistic to say you're gonna have a car that does everything, everywhere, all the time, anytime soon. But they're making progress. They for sure are making progress, but there's this catch with safety. Go ahead, Fred.

Fred: I'm sorry. Is this one of those situations where the horizon keeps receding? It's like fusion energy has been like that.

It’s been right around the corner for the last, what, 50 years. And the feeling I’ve got about this is that as you push the horizon, because we’re working on that last 2%, that last half percent, whatever, you run into an infinite series and the horizon just keeps moving forward because the universe of things that people consider just keeps expanding.

Phillip Koopman: I understand and appreciate the analogy. I've thought of that one on occasion. Not that I'm an expert in fusion, but it feels a little different to me, in that they're trying to get to the ability to generate more energy than they put in, and maybe they're there now. But this stuff, the proof of concept: the car was driving itself 25 years ago, so it wasn't a question of, can we even do it at all?

Yeah, we can do it. The issue is that safety is fundamentally different than functionality. So the catch is that if you drive a million miles and have no serious crash, that sounds really impressive, right? Clearly, if you can go a million miles without killing anyone, you know how to drive on a road.

But the bar is set a hundred times, 200 times, 300 times safer than that. Even including all the drunks, it's a fatal crash every hundred million miles. That's a lot of miles. You go 99,999,999 miles without killing anyone, and then the next mile you do. That was a lot of boring miles before the bad thing happened.

So safety is a numbers game about all the rare exceptional cases. So it's different than fusion on that point, Fred, cuz in fusion it's about getting to the point where you can generate, and again I'm not an expert on that, you need to be able to generate the hot mass and hold it for long enough to put out energy, and then you're there.

This one, you can be there for tens of millions of miles and everything looks fine, and then something pops up that you didn't even think about, that wasn't in the model, and that's it. Start again.
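note: to make the arithmetic above concrete, here is a rough back-of-the-envelope sketch in Python. The figures are the round numbers used in the conversation, not precise crash statistics.

```python
# Rough back-of-the-envelope sketch of the numbers discussed above.
# Assumption: roughly one fatal crash per 100 million vehicle miles for
# human drivers, the round figure used in the conversation.

human_fatal_crash_interval_miles = 100_000_000  # ~1 fatal crash per 100M miles
demo_miles_without_fatality = 1_000_000         # an impressive-sounding test total

fraction_of_bar = demo_miles_without_fatality / human_fatal_crash_interval_miles
print(f"A million crash-free miles is only {fraction_of_bar:.0%} of the average "
      f"interval between human-driver fatal crashes.")
# prints: A million crash-free miles is only 1% of the average interval ...
```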

Fred: Sure. There's another sense in which the horizon keeps receding, which is that most of the discussion I've heard is about making the self-driving vehicles, the autonomous vehicles, however you want to refer to them, as safe as a human driver.

Phillip Koopman: Whatever that means.

There's a whole book about that, right?

Fred: But the question that arises is, would we ever be happy if automated machines were killing 40,000 people a year? To me, that seems like a bar that would never be accepted for machinery, and probably shouldn't be accepted for machinery.

So in that sense, as we get closer to the idea or the actuality of autonomous vehicles being as safe as a human driver, again, whatever the hell that means, the standard is likely to change, because somehow the idea of accidents being caused by human beings is acceptable. We all go on about our lives.

All this slaughter on the highways is continuing to happen, but to have that being done automatically by machinery seems like a wholly different approach, and one that would be really unacceptable to everybody, certainly to me.

Phillip Koopman: Yeah, I understand. That makes sense. One of the things I did in the book, I learned a lot writing the book, you always do, is I looked up some European product safety standards and concluded that cars would be considered completely unsafe as consumer products if they were anything other than a car.

That analysis is in the book: just way, way too unsafe for a consumer product. And when I say that, it makes sense, cuz toasters don't kill 40,000 people a year. It's just not what you expect to have happen. And so yeah, I get that. The way I look at it is that people are dying on the roads, and it's tragic.

By the way, human drivers are actually very safe. The 40,000 people is not because people are as unsafe as they're made out to be. It's because there are that many miles and the numbers just catch up with you. People are incredibly good compared to what it's gonna take for a computer to do this stuff.

But if you were to computerize all those deaths, which is the essence of your question, there would be some pushback. But it's hard to say how that's gonna turn out. People might say, yeah, 40,000 die now, but 40,000 used to die, and now we don't have to drive ourselves. Maybe people would be okay with that.

That’s a hard call for a technologist to make. But what I can point out is that the deaths are likely to be different, and that’s where the problem’s gonna be. Maybe we’re willing to accept the same number, but we’re gonna have trouble when there are identifiable patterns in who’s dying.

And they're different than what happens today. As a purely hypothetical example, let's say the number goes from 40,000 to 30,000, but every single death is a pedestrian. That's not gonna work. No one's gonna put up with that, because now you have people dying who wouldn't have died before, and that's gonna be a bigger problem.

Or what if, now, this is put out by some people as fact and it wasn't actually in the paper, so we have to be careful, but for sure, machine-learning-based vision systems can struggle with people with darker skin tones. Now, there was a paper saying self-driving cars had that problem.

That's actually not true. If you read the paper, it just speculates; there's no finding. But it would be no surprise if that's a problem. I would be completely unsurprised, because we know this is a failure mode of these systems if they're not properly trained. So what if all the fatalities had darker skin color? That's gonna be a problem too.

Or all of them had lighter skin color; it doesn't matter. If you pick out people who are identifiable as a demographic group and all of a sudden their mortality rate doubles, no one's gonna be happy about that. And so I think the distribution is probably more of an issue than the total numbers. That's part one.

And part two is they're gonna have different crashes. There are gonna be crashes where a human driver would've never made that mistake. And if you can say a human driver would've made the same mistake, maybe that's palatable, right? But if you say no human would've been that stupid, those crashes are gonna be a problem.

Even if autonomous vehicles are 10 times safer than human drivers, if every single crash is objectively stupid, it's, you gotta be kidding me, it did what? That's still gonna be a problem for the technology.

Anthony: But is the self-driving car applying makeup, eating breakfast, and texting at the same time?

Cause I drove past that woman the other day.

Phillip Koopman: Yeah, and yet you didn't see her in a crash, did you?

Anthony: Oh no, I sped away from her.

Phillip Koopman: So another related topic to all this is that one of the reasons the roads are safe is not because humans are amazingly good drivers. They're really good, but they're not that good, and they misbehave.

But other people compensate for the behavior. So one of the things you don't hear the companies talking about is them compensating for mistakes made by other drivers. What you hear is them blaming human drivers for being imperfect: we had this crash because human drivers are imperfect.

It turns out human drivers are imperfect. Get over it. What's your plan? And humans are really good at compensating for other drivers' mistakes. That's one of the reasons the outcomes are as good as they are. Not that I'm happy with the road fatalities, but if you look at the numbers, it could be a whole bunch worse.

And one of the reasons it's not worse is because humans compensate for each other's mistakes.

Fred: I was thinking of that as I was driving the other day and I was being really attentive to what the other drivers were doing, and I noticed that a lot of times when you come to an intersection, you actually watch the eyes of the other drivers to see if they’re looking at you or if they’re looking away, and you become much more cautious if their eyes are not being cast in your direction.

Phillip Koopman: Yeah, that's a great point.

Fred: I also wanted to say, before we get too far into this, it's a fantastic book. It brings up all kinds of great thoughts. And it is such a great book that I had to notice there was a typo on one of the pages.

Phillip Koopman: Please send it to me and I'll fix it. The wonders of print on demand: I can change the typos on the fly. I do not have a basement full of these books printed already.

Michael: That's great. And I wanna reiterate Fred's point before we get too far into it as well: I recommend this book. If we had a Center for Auto Safety book club, this would be the first book in it at this point, and I wanna recommend it to all of our safety friends and to manufacturers and to folks working in Congress, because it really addresses a lot of the issues that we've been trying to tackle for years now.

And it does so in a very accessible way. It also leaves open the uncertainty that we have about the future of these vehicles, whether they can even achieve what a lot of people have claimed they'll be able to. So I highly recommend the book to every listener out there.

Phillip Koopman: And to be clear, when you're a safety guy, your life is telling everyone why something might not work, right? And I get that. I really wanna see this technology succeed, but if we're not mindful of all the ways it could fail and then work to fix those ways, we're gonna deploy something that does fail and it's gonna set the whole industry back.

So it's important to be mindful of all the problems that can go wrong and fix them. How did I learn all these ways things can go wrong? I've seen them, and I help companies fix them. And that's great, that's what I'm all about, helping fix things. But you can't help fix if you don't know what to look for.

Fred: In section 10.3 you talked about ways of getting this technology into service. In fact, you talk about using standards like UL 4600 as a way of investigating the status of the vehicles and making sure that people respond to the safety cases. And it brought to mind our own position on gated certification, in which we've talked about it simplistically as a requirement that autonomous vehicles go through a learner's permit phase, a driver's license phase, and then a mature driver's license phase.

But really what it's all about is that at each of those levels there would be a critical examination of not only the past performance of the vehicle, but also its conformance to reasonable standards for safety, and expectations that it would be safe as you expand the environment in which the vehicles are allowed to operate.

And in fact, it's very close to what you were talking about in section 10.3. If you want, maybe you could give a few words about that, and also tell our listeners what a safety case is, because I think that may not be familiar to a lot of the people listening.

Phillip Koopman: Sure. Remind me to come back to the safety case.

I'm gonna go at the driver test first. It seems intuitive to say, all right, we're gonna make the car take a driver test, and that'll prove it drives fine. But there's actually a fundamental issue here. Driver tests won't make you safe. Testing does not make you safe. People have known this for decades.

It sounds intuitive: we tested it, so it must be okay. For software it doesn't work, because there are so many weird ways software can fail that testing doesn't do it. Let me give you an everyday example. If you have a cell phone and you make a call on it, and the call goes through, you can say it makes calls.

I tested it, right? And you could open an app, and the app opened and did its thing, so I tested it, it works fine. But have you ever had your cell phone freeze up and had to reboot it? Of course. Okay, can you make it do that on demand? No. So if your life depended on that cell phone never, ever freezing, for a hundred million miles of driving it never gets to freeze, cuz then you die,

that cell phone probably is not up to that task. But there's no testing that's gonna show that to you in any reasonable amount of time, cuz it only happens once in a while in certain circumstances. That's why this technology is so difficult. Any computer technology has this issue when it's life-critical.

So what we've learned is testing doesn't get you to safety. You have to have engineering rigor, you have to follow really good engineering practices to get to safety. And that includes testing, but it's only part of it.

Fred: If I may, we completely agree with that, and that's why we've coupled this whole idea of testing with intensive investigation of the engineering background, at the level of development that's appropriate for the vehicle and the environment in which people propose to operate the vehicle.

Phillip Koopman: That's exactly right. That's why the safety standards, ISO 26262, ISO 21448, UL 4600, those kinds of safety standards are so critical to knowing the engineering rigor was there. And by the way, I just rattled off some numbers.

If you're not an automotive engineer, the fact that I can rattle off these numbers calls into question my life choices sometimes. If you're in the safety business, you just know them. Those are all thick, industry-written stacks of paper. ISO 26262 is like 1,200 pages or something; I stopped counting at a thousand pages.

UL 4600 is around 300. They're really thick, very technical engineering documents, but they tell you, here's how to know you got the software safe. And if you miss anything in these documents, you missed crossing some t's, you missed dotting some i's, now you're putting the public at risk.

This stuff may not be as safe as it needs to be. That's the only way we know how to do safety. Why are airplanes safe? It isn't that Boeing and Airbus fly the airplanes around for a million miles and say, yeah, it looks good. It's because they do rigorous engineering and organizations like the FAA check their work.

And you get mishaps like the 737 MAX because they didn't check their work, right? So that process matters. And in the car business, no one's checking their work. They're not required to follow these standards. So that's really the big safety question here.

Anthony: Yeah. It seems each week I ask both Fred and Michael some simple question, and they always laugh at me and say, because there's no regulation around any of this stuff.

There's no regulation around propellant in airbags and everything else. And it seems with AVs, what they tell me is, cuz I look at it as a consumer going, my car has ADAS and I love it and it's so much fun, and they're just like, yeah, maybe it works, cuz no one tests it.

There's no regulation. I think you alluded to this earlier, and your book definitely mentions it: is that the big missing piece, that there are no FAA-type regulations like those that apply to the airline industry or to train safety? That the auto industry has just said, no, we're gonna self-certify,

trust us, we've never done anything wrong?

Phillip Koopman: They've never done anything wrong? You can read the news about that. Exactly. And to be clear, regulation doesn't prevent them from doing anything wrong. It just reduces the incidence, it makes it much less likely. The checks and balances and transparency are how you get safety.

If you take away the checks and balances, you take away the transparency. In other regimes, like the aviation regulatory regime, the failures happen because they compromised transparency and they compromised the checks and balances, and it would have been working otherwise. Automotive is the only safety-critical industry I know where, in practice, the industry is not required to follow its own safety standards.

And I'm gonna say it again cuz it's shocking: in the automotive industry, there is no requirement whatsoever to follow the safety standards that they themselves wrote. No one's making them do this, and in practice they talk a good game and they say, we do sort of stuff like it.

But the few times I've had the opportunity to pull back the covers and find out what's inside, it's no, they weren't; it's only a few percent, it wasn't anywhere close. Some companies try really hard. I know that some companies do, for practical purposes, follow the standards.

But they don't say that publicly. I know that some companies have good intentions and then run out of time and resources, and it's pretty clear some companies aren't even trying. It's everything in between. And the regulators: the National Highway Traffic Safety Administration, NHTSA, who I'm sure gets talked about on this podcast on occasion, NHTSA does not require the industry to follow their own standards, and NHTSA does not do what in Europe they call type approval, which means NHTSA doesn't check the car before it goes on the road.

The manufacturers do what you said: self-certify and put it on the road. But even self-certify doesn't mean what some might think it means. They don't self-certify safety at all. There's no requirement for computer software safety. I'm gonna call it software safety; that's a misnomer, it's actually computer-based system safety, but I'm gonna say software safety because people are more familiar with that and know what it means.

Okay. So there's no requirement for any software safety at all. That's completely at the manufacturer's discretion. The only thing they self-certify is FMVSS, which is, does your tire low-pressure indicator work and are your headlights bright enough?

It has nothing to do with software safety.

Fred: I'm sure that has cheered everybody up. Thank you for that.

Phillip Koopman: I'm just a ray of sunshine, what can I say?

Fred: It's good cocktail party chatter, I'm sure. But I'm gonna ask you to define two things, the safety case and the safety performance indicator, cause I've got a follow-up question that relates to that.

Phillip Koopman: Thanks for bringing me back to the safety case. You know what, we have enough time, let me backtrack, cuz there's another one we dropped by the side, and then we'll get back on track: the driving test. Okay. The thing about the driving test is you have to take the written test, you have to take the vision test, you have to do the skills test.

And an autonomous vehicle could, one way or another, do all that stuff. But the most important part of the test that no one ever really mentions, cuz it's so obvious, is the part where you produce your birth certificate and you prove you're a 16-year-old human being, right? And that's a proxy for being able to predict what happens next.

By 16, you've reasoned out, oh look, there's a rock falling off a cliff. I think it's gonna be in front of my car, and I should probably stop before it's in front of my car. I don't have to wait for it to be in front of my car to decide. I see a landslide and it's a good idea to stop right now.

Things like this: maturity of judgment. I know we all laugh about 16-year-old drivers; they are actually pretty dangerous. But per my previous comment, people compensate for them, right? And they learn. But the point is they do have maturity of judgment. They're good at knowing, what the heck is that?

I think I'll slow down. And what's the maturity test for an autonomous vehicle? What does that even mean? So people wanna use a driving test analogy; that's the soft spot. Which is why we get into what Fred was talking about, which is, instead of maturity of judgment, cause we don't know how to measure that, we instead go for engineering rigor.

That all the different scenarios have been tried out; that's part of the engineering process. So that was the driving test. Now for the safety case. All right, a safety case. I'm gonna try and make it as straightforward as I can and stay out of the deep technical weeds here.

A safety case is some written description of: okay, tell me what you think safe means, tell me why you think you're safe, and show me some data so that I should believe you. In other words, your system is acceptably safe for its intended operational environment, and you have some arguments, some reasons, an explanation, a narrative that is cohesive and is substantiated by evidence.

So you could say: we drove a billion miles in exactly the same situations we're gonna see in the real world, and in a billion miles we saw no fatalities, so with high confidence we're better than a human driver. Okay? That would be nice, except as a practical matter it's an impossibility. Who can drive a billion miles?

And did you change the software? Oh, then you have to reset the odometer to zero, and all these things, right? So that's the safety case the industry's been selling us, and that's never gonna work. So instead, the safety case is more: we've done rigorous engineering, we followed the safety standards we mentioned before with all the numbers.

We've done a lot of testing with human safety drivers. We've learned to anticipate how often surprises come, and they've stopped coming so often. And with extremely high probability, even if it's something we've never seen before, we're really good at at least coming to a safe shutdown, all these kinds of things.

So a safety case is: why are you safe? Explain to me why I should believe you're safe. Show me the data.
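note: to make the 'claim, argument, evidence' shape described above concrete, here is a minimal, purely illustrative sketch in Python. The class and field names are invented for the example; they are not drawn from UL 4600 or any real safety case.

```python
from dataclasses import dataclass, field

# Toy illustration of a safety-case claim backed by an argument and evidence.
# Everything here is invented for the sketch.

@dataclass
class Evidence:
    description: str      # e.g. "closed-course pedestrian detection results"
    supports_claim: bool  # does the data actually back the claim?

@dataclass
class Claim:
    statement: str                  # what you assert, e.g. a detection rate
    argument: str                   # why the evidence implies the statement
    evidence: list = field(default_factory=list)

    def substantiated(self) -> bool:
        # A claim only holds if there is evidence and all of it supports it.
        return bool(self.evidence) and all(e.supports_claim for e in self.evidence)

claim = Claim(
    statement="Detects and yields to pedestrians in the intended operating domain",
    argument="Simulation scenario coverage plus supervised on-road testing",
    evidence=[Evidence("simulation scenario suite results", True),
              Evidence("pilot data collected with safety drivers", True)],
)
print("Claim substantiated:", claim.substantiated())
```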

Michael: And the safety case is something that, it's not like turning in a paper and then selling the cars. It's something that needs to continue to be made over the course of the vehicle's operation, based on how it's performing.

Phillip Koopman: That's a big change that's happening. So ISO 26262 requires a safety case already. It's not as rigorous as what I'm advocating, but it requires a safety case. And it's like what you said: you file the safety case the day the car ships, and then you forget it.

And in traditional cars, where a human is there to deal with all the weird stuff, maybe that's okay, because in a traditional car you can think of all the things that might go wrong, cause all of them are inside the car. The fuel pump's gonna fail, or you're gonna have a computer bit flip and you get some bad data, but you detect it, you fix it.

All these kinds of things you can anticipate, in theory, and fix. In practice you don't even do that perfectly, because there are recalls, right? If they did it perfectly, there wouldn't be recalls, okay? But when you get to an autonomous vehicle, it's impossible to completely predict the outside world. And even if you did, it would change tomorrow with a new fashion style, or who knows what, right?

The world's gonna change. So when you take the human out of the car, two things change. One is you lose that general-purpose 'that was weird, let me deal with it' device, because the human's asleep in the back, right? That was the whole point. So you lose your flexibility to deal with weird stuff the designers didn't think of.

And the second one is you have this software that is not amenable to traditional certification, right? And then the world changes. Just brute-forcing it isn't gonna work anymore. So what you're gonna have to do is have continuous feedback from operation to say: we thought it was safe, good faith,

best effort, we followed the standards, and as far as we knew it was safe the day we deployed. So I'm not advocating deploying unsafe cars, let's be clear, right? They have to be as safe as you can possibly get according to all the engineering standards. And in good faith you released a safe car, and then you find out the next day it wasn't as safe as you thought it was, because there's something you never saw in testing.

It's like, what the... kangaroos? We didn't know about kangaroos, right? Welcome to Australia. That's a real story that happened to one company. Or tumbleweeds, or whatever. Or there was just a tomato spill, and then there was an Alfredo spill the next day. That was pretty entertaining, right?

There are all these things where it's, I just didn't think of this, but our car was smart enough and it shut down; but it was clogging up the highway cuz it didn't know what to do. So next time we'll make sure it knows what to do. That kind of feedback is gonna be really important going forward.

And it's gonna have to be for the life of the car. It's not gonna be fire-and-forget, as they say. You're gonna deploy it, and you're gonna have to track it for the life of the car to keep teaching the driver.

Fred: Phil, before you get to the safety performance indicator, I just want to tell our thoughtful listeners that you referred to something called ISO 26262.

ISO stands for the International Organization for Standardization, and ISO 26262 is the standard that was developed in collaboration with essentially all of the automobile manufacturers to basically define the terms and conditions by which self-driving vehicles would operate. Is that correct?

Phillip Koopman: That sounds great, but you're on those committees same as I am, so you know all this stuff as well. Yeah, Fred and I hang out on these standards committees, trying to make the world a better place by strengthening the standards.

Fred: So what is a safety performance indicator?

Phillip Koopman: All right, a safety performance indicator.

The roots come from aviation. In aviation, when you're flying aircraft around, you have to do maintenance on a regular basis; there are all these things that you have to get right. You have to look at how often engines fail in flight, because they fail at a certain very low rate, like once every 50,000 flight hours for an in-flight shutdown, those kinds of numbers.

And the system is designed to handle those, and there's a reason you have two engines and all these things, and it's still safe enough, right? All that's been taken into account. But if all of a sudden you find out the failures are happening more often than you expected, that safety case for why you thought you were safe isn't valid.

If your engines are failing every 10 hours instead of every 50,000, you know you're eventually gonna lose both engines if that keeps up, and you'd better do something about it before the bad thing happens. So that's the aviation background. In UL 4600, which is a system-level safety standard for autonomous vehicles,

we took the idea of safety performance indicators and expanded it to cover not just operations but the entire safety case. In the safety case are the reasons why you think you're safe, and a safety performance indicator gets associated with each claim. Now I'll explain claim. A claim is: I think I'm safe.

One of the reasons I'm safe is cuz I detect and avoid pedestrians. And how do I do that? I'm able to detect pedestrians a certain, very high fraction of the time. And then once I detect them, I'm able to anticipate where they're gonna be a high fraction of the time. And then once I know where they're gonna be, I either stop or I maneuver around them.

And so each of those is a claim. I maneuver around them when I need to, those sorts of things. And a safety performance indicator is a number you attach saying you’re never gonna be perfect, but you’re gonna be really good. What’s the number? How often is it okay to not see a pedestrian with a camera?

The number's not zero, because in the real world nothing's perfect. And you say, all right, I can fail to see a pedestrian with the camera once every hundred thousand, I'm making up a number, that may not be the right number, once every hundred thousand, but that's okay, cuz I've got a lidar and the lidar will pick up the slack.

Great. But then you drive around and you say, I've noticed that I'm not missing pedestrians once every hundred thousand, I'm missing pedestrians once every ten thousand. Uh oh. How do I know this? Cuz my lidar is telling me all these times: I didn't see the pedestrian with the camera, but the lidar sees it.

So once every 10,000; does that mean I'm gonna have a crash tomorrow? No, probably not. You might get unlucky, but once every 10,000 is still pretty good. But the math's not gonna work out. You're not gonna get 200 million miles without a fatality if your camera is 10 times worse than you thought.

So the safety performance indicator is a way of saying, I made this claim in my safety case and it turns out it’s not true. Let me go revisit the safety case and fix it before something bad happens.
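note: here is a minimal, illustrative sketch in Python of the kind of SPI check described above. The miss counts, mileage, and threshold echo the made-up numbers in the conversation; they are not values from any real safety case.

```python
# Illustrative SPI check. Assumption, following the hypothetical above:
# the safety case claims the camera misses a pedestrian (that the lidar
# then catches) no more than once per 100,000 miles.

CLAIMED_MAX_MISS_RATE = 1 / 100_000   # misses per mile allowed by the safety case

def spi_violated(camera_misses_caught_by_lidar: int, miles_driven: float) -> bool:
    """Return True if fleet data invalidates the safety-case claim."""
    observed_rate = camera_misses_caught_by_lidar / miles_driven
    return observed_rate > CLAIMED_MAX_MISS_RATE

# Fleet feedback: lidar flagged 50 camera misses over 500,000 miles,
# i.e. once every 10,000 miles, ten times worse than claimed.
if spi_violated(camera_misses_caught_by_lidar=50, miles_driven=500_000):
    print("SPI tripped: revisit the safety case before something bad happens.")
```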

Fred: Thank you. And my follow-up question here is about state safety inspections. Now, all of us with our cars, we bring 'em in once a year.

They check the brakes, they check the lights, they check all the safety-critical equipment that's accessible to them. We've been wondering, how in the world are you gonna do this for autonomous vehicles? Because the safety-critical information includes the logic, the data processing, and the mechanical systems in the car.

And so how in the world can a mechanic at your local body repair shop be qualified and enabled to test this entire safety-critical system? Now, when you talked about the safety performance indicators, you did a great job of talking about how an individual car's experience is related to, but distinct from, the SPI associated with the fleet.

So the question at the end of this complex dissertation is: is there any relationship between the accessibility of the safety performance indicators for the fleet and what an individual inspector might be looking for, or use to pass judgment on the safety of an individual autonomous vehicle? The vehicle may have never experienced the particular failure that forces the SPI to blink red, yet that car would be associated with the SPI that's showing potential for catastrophe.

That's a complex question. Does that make any sense?

Phillip Koopman: Anthony, what do you think? Fred broke up a couple of times. How would you like to handle that?

Anthony: Oh, sorry. Yeah, that's a tough one, Fred. It was weird, you broke up a couple times, then you came back and your audio sped up really fast.

Phillip Koopman: Do you wanna ask the short version, Fred?

Anthony: Yeah. I think the short version of the question was: you get your car inspected every year for brakes, exhaust, and whatnot, but with software, how is Joe the mechanic going to be able to check that? What's gonna be in place?

Fred: Sure. But the other part of that is, if there is a safety performance indicator that is blinking red because of other cars' experience, should that bear on the safety of the car that's being inspected, even if it's never experienced that particular problem?

Phillip Koopman: Okay, so there are two different questions here. And I like the way you brought this up, cuz it really teases apart two things. There are safety performance indicators, which are, as you said, a fleet-wide indication of a problem. It's basically telling you there's a design problem, that the design doesn't match the real world in some sense.

And it doesn't mean any particular car is gonna have a crash. It amounts to a software defect, but it's not because someone made a mistake or wrote a bug. It's because the design doesn't match reality, and that's going to happen on a regular basis. So the companies just have to keep up and issue the changes, and that's just gonna be part of the life cycle of these vehicles.

But if you have a safety performance indicator that at the fleet level says, hey, all the cars of this type have this problem, the only way you can fix that car is by changing the software for all the cars. There's nothing broken on that car other than it needs the new version. And our cell phones get new software versions due to defects all the time.

So it's gonna be like that. But the other half of the question is, how do mechanics know this thing is safe? Part of the software is going to have to be doing an internal inventory of what's going on with the car and knowing everything's okay. Now, we already do this on emissions. On emissions, many of these cars have gotten to the point that the tailpipe test doesn't really tell you much unless something's really bad.

They plug into the onboard diagnostics port to ask the car: hey, how's the oxygen sensor doing? How are all these things doing? Because if the emissions are bad enough at the tailpipe to even register, things have really gone wrong. So they ask the car, how are you feeling today?

That's part of the test these days. And for safety it's gonna be the same thing. There are gonna be safety performance indicators for fleet feedback, but there are also going to be car health checks that in some cases mirror the SPIs and in some cases are completely different. And the car health checks are gonna be: hey, you have two computers, and one of them you never use, but it's there in case the primary fails, and you really want it to be there when you need it.

When's the last time you did a self-test on the second computer? Let's do a self-test now and make sure that computer's really working. So the safety inspection is gonna be that kind of thing. And the mechanic's not going to sit there and do the test. These are all gonna be automated procedures, and more likely there's gonna be a standard readout, a health report.

You go to the doctor and they do all the blood work and you get your lab tests back. It's gonna be more like that.
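note: as a hypothetical sketch of the kind of automated health report described above, the Python below runs a set of self-checks and prints a pass/fail summary. The check names and logic are invented for illustration; they do not come from any standard or real diagnostic interface.

```python
# Hypothetical automated "health report": the car runs its own self-checks and
# the inspector reads the summary. Check names and results are invented.

def backup_computer_self_test() -> bool:
    # Placeholder: a real vehicle would exercise the standby computer here.
    return True

def lidar_calibration_current() -> bool:
    # Placeholder: verify sensor calibration is within its valid window.
    return True

HEALTH_CHECKS = {
    "backup_computer_self_test": backup_computer_self_test,
    "lidar_calibration_current": lidar_calibration_current,
}

def health_report() -> dict:
    """Run every check and return a pass/fail summary for the inspector."""
    return {name: ("PASS" if check() else "FAIL")
            for name, check in HEALTH_CHECKS.items()}

for name, result in health_report().items():
    print(f"{name}: {result}")
```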

Anthony: This is interesting. It leads into one of my favorite sections of your book, mainly cause I love the title: the moral crumple zone.

Phillip Koopman: I did not invent that.

Anthony: I know you didn't invent that. It's great, I love it. It sounds like it should be a Dead Kennedys album. But there's a section in there that said drivers are told they're responsible for safety, including compensating for design flaws and technical malfunctions that may occur, which just seems like it's putting the onus completely on the driver.

The driver's always at fault. I think you gave an example of a dirty sensor covering a camera or whatnot, and that system goes down. So the manufacturers, are they taking the stance that, hey, it's not our fault?

Phillip Koopman: They're being very coy about this topic. All right, and I'm gonna characterize the moral crumple zone this way.

The issue is it's really hard to make an autonomous vehicle that's safe. And especially when you have a shared responsibility, where there's a human driver and there are automated features and they share responsibility, it's very easy for the manufacturers to say, oh, that's the human driver's problem.

We're gonna let them deal with it. And maybe that's okay, maybe it's not. The question you have to ask is, is that a reasonable demand to make of the human driver? And it's very easy for companies trying to shield themselves from liability to push stuff onto the human driver that is beyond what is reasonable to expect.

We've known since the 1940s that asking a person to supervise something boring does not work out. After 20 to 30 minutes, they're just gonna lose performance. And it doesn't matter who you are or how good you are, everyone's gonna lose performance after something like 30 minutes. And at some point you lose enough performance that you're not safe.

Maybe it's a little longer than 30 minutes, but that's where the number is. We've known that since a post-World War II study on radar operators. And we learned it again in the nineties, a couple of iterations ago of autonomous vehicles. There was a big project in the nineties, the one where I got involved, where they said, yeah, as soon as you take away steering from the driver, they're gonna drop out.

They're gonna have trouble paying attention. So this has been known for a long time. The new kids had to discover this for themselves, apparently, but if they'd asked the old hands from the nineties, we would've told them. All right, we know this is an issue, so what's your plan? If your plan is, we're going to make sure the driver remains alert, and we're gonna warn the driver

and tell the driver, you're not alert and you shouldn't be driving, and they choose to drive anyway, maybe you wanna blame the driver for that, because they had fair warning. Or, we're asking the driver to keep steering and we're gonna do throttle, and we expect them to pay attention to distance because they're already in the loop doing steering.

Maybe that's reasonable, right? But saying, hey, this car drives itself, but if it crashes it's your fault, so pay attention; in practice that's not gonna be effective. People are not going to pay attention. Or even worse, saying we promise this car will never crash, except... so pay attention in case you need to, but you're allowed to look away from the road, and if it crashes, it's your fault for not noticing it was gonna crash.

Huh? Which is what some of the Level 3 stories sound like. That gets closer to a moral crumple zone, where you're asking someone to do something that is beyond a reasonable human ask.

Anthony: So what's the solution? GM and Ford have their self-driving, that's not what they call it, that will operate on certain roads they have mapped out, and they have the internal-facing camera to see, hey, are you actually facing and paying attention to the road?

Is that a decent compromise?

Phillip Koopman: Maybe. Here's what we don't know about the camera. So the camera's looking at you, making sure your eyes are on the road; this is an excellent idea. But what's the frequency of 'the lights are on but nobody's home' for someone who has their eyes on the road?

I want to know that number. And if that number says no, really, the camera's effective, cool, all right, then fine. But if the number is, oh yeah, that happens all the time, but we can say it's their fault cause their eyes were on the road, even though we know drivers fall asleep with their eyes open all the time...

And I'm not making an accusation here. I don't know the answer to that question, but that's the question.

Anthony: So we'd really want data from these different manufacturers so everyone can actually learn from this. And of course I'm being Pollyanna, cuz that will never happen.

Phillip Koopman: Yeah. Now, I know some of the engineers in all these companies, and the folks who are doing that, I think they're doing it because they're trying to do the right thing.

So this is really good. I'm glad to see they're taking driver monitoring seriously; this is super important. And eventually, hopefully, they've collected data and they have this answer, and if so, great. But with our regulatory system, there's no transparency.

So we’ll have to just trust them. And if there are bad actors who aren’t playing by those same rules, it degrades the trust for the whole industry. So that’s one of the issues.

Anthony: Got it. I think we'll only have you for a few more minutes. Michael, I know you had a couple of questions, so you want to jump in?

Oh, and he's on mute. He does this all the time. This week we made him buy a microphone, because he'd always sound like he was talking out of a tin can.

Phillip Koopman: I understand. Yeah.

Michael: One of the things that I was really interested in, and we've covered this, was that the safety performance indicators allow for a process of, I believe you described it as defeasible reasoning, where we're constantly able to update them and basically able to continue to make a safety case as things change in the real world. And I'm wondering, do you see any role for a government regulator like NHTSA to be on the receiving end of reports when certain SPIs are invalidated or proven false by manufacturer testing or by the fleet operating on the roads?

Is there something there that could be an early warning system that would tip the government off to problems in an AV fleet, even when the manufacturer might not be that willing to do so?

Phillip Koopman: Well, transparency is always good. Let me start with this: independence is what matters.

If you don't have independent oversight, you're not going to get safety, full stop. We've learned that in so many industries over so many years: if there's no independence, you don't get safety. And that's the biggest issue with the automotive industry now, that the independence, if it exists at all, is very severely constrained.

It's inside a company, and there are too many pressures to keep people independent, right? What should be happening is that inside the company there should be an SPI monitor to say, hey, all these alarms went off, your safety case has been invalidated. You said this would never happen, and yet here you are.

It just happened. What's the deal here? And to be clear, weird things will happen. Or it happened once in a billion miles; okay, maybe no big deal, nothing's perfect, right? But it was supposed to happen once every 10 million miles and it happened every hundred thousand. Wow, something's wrong. The world changed, our analysis is wrong, who knows, but something's wrong and we gotta deal with it.

Now, when that happens, it doesn't mean you take every car off the road. You probably still have some time, but you have to take action, you have to respond. It's like the recall system on steroids, except instead of waiting for the reports to come in and the police fatality reports to come in, you're actively out there looking for bad stuff.

So you get ahead of it, right? And inside the car company, they should be doing this. They should have independence inside the car company, or outside, dealer's choice there, cuz we self-certify. There should be an independent group who's in charge of keeping on top of this stuff. And I would think NHTSA would wanna see the reports being sent to that independent group.

Why not? Oh, something went wrong. Of course something went wrong, it's a Tuesday. That's just how the world is. The question isn't, did something go wrong? The question is, is your process for dealing with it working? I've had these discussions, interesting discussions, where some people in the industry say, oh, we never wanna write down anything bad because it looks bad.

But then I talk to people who are much more sophisticated. I talk to lawyers who are more sophisticated, who are advocating for the car companies, and they're like, no, it's okay to write down bad stuff. What's not okay is to write down bad stuff and then blow it off. If you have a system that's writing down the unfortunate things that happen, and mind you, these aren't crashes,

these SPIs get you ahead of the crash. You're not waiting for a crash, you're just waiting for something weird to happen, and you get on top of it before the crash happens. That makes you the good guy, as long as you're following up. So if I were NHTSA, I'd wanna see the flow of this data saying, hey, look at us:

these bad things happened, it was within the range of what we expected, the world's not perfect, but we're getting on top of it and here's how we fixed it. This is a positive story for everyone, and it's a way for NHTSA to keep its finger on the pulse and make sure the companies are staying on top of things. And nobody needs to die for this system to work, which is a really stark difference from what we have.

Anthony: So, more regulation, good oversight, and...

Phillip Koopman: Let me be careful about the more regulation part, right? I said more oversight. More oversight, yes. As for regulation, asking NHTSA to come up with standards for the car industry is problematic; they'll tell you that it is, right? But they don't need to.

So in late 2020 there was an advance notice of proposed rulemaking, an ANPRM for those who follow this, right? Where NHTSA said, hey, we have a plan for how to regulate: we're gonna make the industry follow the safety standards that they themselves wrote. What do you think? And that has just been languishing ever since late 2020.

And to me it's, oh, that makes sense. I don't know why we're not doing it, but it's languishing. So that's where we are in regulation. If the regulation is, make the industry follow its own safety standards, to me that sounds like a great idea. If the regulation is, NHTSA is gonna tell 'em how to build cars: yeah, not so much.

Fred: I wanna point out at this point that Phil is the principal behind the standard called Underwriters Laboratories UL 4600. I've had the privilege of working on that a little bit with Phil, and I think that if UL 4600 is implemented as a process for certifying the safety, or at least examining the safety, of autonomous vehicles, there would be very little need for additional regulation beyond the requirement that companies do some kind of certification or self-assessment process associated with UL 4600 or whatever its equivalent might be in the marketplace.

Phillip Koopman: Thanks, Fred. I agree. The point of UL 4600 is that if you do everything in there, you're probably in good shape on safety. And one of the things regulators could do is say, you have to follow that standard. And that standard requires you to generate some paper, and you have to show us the paper, and maybe NHTSA doesn't even judge them on whether they like the paper or not.

They judge them on whether they’re following the process and whether they got someone independent and competent to validate the paperwork and they’re just there to keep score and make sure things are happening the way they’re supposed to be. That would be a pretty reasonable regulatory approach, I think.

Anthony: All right, so full self-driving cars within the next year?

Phillip Koopman: That's not happening. As we discussed, it's happening in little drips and drabs, and that's cool. But it's not gonna be coast-to-coast, go to sleep in the back, wake up in a different city, anytime soon.

Anthony: I don’t know. This guy keeps telling me it’s gonna happen.

Anyway, I'm not gonna take up any more of your time. We'd like to thank again Phil Koopman, author of How Safe Is Safe Enough?, an excellent book. We highly recommend that everyone read it. Thanks again for coming on and sharing your knowledge on this very complex issue.

Phillip Koopman: Thanks for having me.

Anthony: Yeah, thank you, Phil. Thanks, Phil. Thanks.

Anthony: My question after reading Phil's book, one of the things that popped into my head, is automatic emergency braking. How do you crash-test a car that has automatic emergency braking? Because in theory, if it's working, the car shouldn't crash.

Fred: You can do it by looking at the rate of deceleration and the distance from an obstacle that the car achieves. So there are ways of testing it.

Anthony: But does NHTSA, or IIHS, do those tests, where they try and drive the car into a wall or something like that?

Michael: If they're trying to purely measure the impact forces, they're not going to have the automatic emergency braking on, and a lot of times the vehicle will be on a sled, or it's not really operating itself. So it depends on the test; there are a lot of different tests.

But it seems to me it would be a safety risk to have the automatic emergency braking functioning when you're trying to crash the vehicle into something to collect measurements.

Fred: Yeah, because when the crash test occurs, it occurs at a given speed. They somehow would have to mute or disable the AEB in order to have contact at the speed that's required.

Anthony: All right, that's good to know.

Michael: And actually, some of the tests are held at speeds, not enough of them, but some tests would be at speeds where AEB is not even functioning. We're not even seeing AEB work above around 35 to 37 miles per hour at all in any cars, which is not really what you think of when you think of automatic emergency braking.

It's really only working in the lower-speed collisions, and that's gonna be great when we get pedestrian AEB working. But for now, it's not stopping you from crashing into the back of a semi on the highway at 70 miles an hour.

Anthony: I know what the rest of my day's gonna consist of: driving slow, curling up in the fetal position again, because all you guys just scared the hell out of me, constantly. Do we have anything for the Tao of Fred this week?

Fred: Yes. We were going to discuss a minimal risk condition.

Anthony: Welcome to the Tao of Fred. Wait, I don't have to do that. There's a voiceover for that.

Phillip Koopman: You've now entered the Tao of Fred.

Fred: We'll talk about minimal risk condition, which is a term that's pivotal for a lot of the regulations associated with autonomous vehicles. So Anthony, given the name minimal risk condition, do you think it's a condition that is associated with minimal risk?

Anthony: That’s what I would assume. I think I had that on my dating profile too.

Fred: Yeah, that's a common misconception about the term minimal risk condition. If you go to the definition in SAE J3016, which is available to anybody who wants it, free of charge, downloadable from SAE International at sae.org.

It says: a stable, stopped condition to which a user or an ADS may bring a vehicle after performing the dynamic driving task fallback, in order to reduce the risk of a crash when a given trip cannot or should not be continued. Holy cow, that's a mouthful. So what's wrong with that?

The first thing that's wrong with it is that it's not a condition; it's a process that they describe. A lot of stuff has to happen and decisions have to be made. A condition typically is: this is hot, this is cold, this is moving, this is not moving. So it's really a misnomer.

The second thing is that it's elective, because it says 'to which a user or an ADS may bring a vehicle.' So it's not even a confirmed process that will bring a vehicle to some kind of safe harbor. And also, it says it's to reduce the risk of a crash. Now, think about that for a moment: if you're in a car that's going over a cliff and the windows are open, then as you hurtle toward the earth, if you close the windows, you have, according to this, achieved a minimal risk condition, because you have reduced the risk of a crash, or at least reduced the risk of your ejection from the car.

But that's hardly a safe condition to be in. So there are a lot of problems with this. The unfortunate thing is that most people just hear the title and they think they know what it means. So legislatures all around the country have been writing this into their regulations for testing autonomous vehicles and saying that at some point the vehicles have to achieve a minimal risk condition, thinking that by writing that into the legislation, they are requiring that the vehicle

achieve some kind of safe harbor if certain things happen. So if the control system fails, for example, they want the vehicle to go to a minimal risk condition, thinking that means the passengers will be safe. Unfortunately, that's not at all what it means. If you go to other parts of the definition, it actually goes on to allow automatically returning the vehicle to a dispatching facility.

So it's not even necessarily stopping. There are a lot of problems with this. We have been very active in discussing this with SAE. SAE used to stand for the Society of Automotive Engineers; unfortunately, they've decided to go upscale, so SAE now just means SAE,

and they call the organization SAE International. I think there's a rationale for that, but never mind. What we've done, after weeks, literally weeks of discussions, is that the term minimal risk condition will not appear in the next edition of SAE J3016. Instead, what's going to appear is something called a mitigated risk condition, and it'll be very specific in saying that a mitigated risk condition is a safe, stopped condition, period, so that everybody knows exactly what it means.

The problem being solved is that the term will now be connected to what a simple English-language interpretation of the title means. So people can go on from there and say that if they have a regulation that needs to be consistent with bringing a vehicle to a mitigated risk condition associated with a failure of some kind, then that vehicle will be safe.

It'll be in a place where nobody's gonna run into it, hopefully; the passengers won't have to worry about somebody crashing into the vehicle; and it's unambiguous with respect to what has to happen next. So if you've got a vehicle that goes into a mitigated risk condition, a tow truck has to be called, or emergency personnel have to be called, or something has to happen next.

Now, the reason we went with that name is because in a lot of documents, minimal risk condition had been truncated to MRC, because people don't like writing all those words, it's a real burden. We wanted to make sure that if documents refer to MRC, it'll still be consistent with the next edition of SAE J3016, and the reference will still be valid.

So that's what the change from minimal risk condition to mitigated risk condition is all about, and why the old term will no longer be used. Unfortunately, it's still abroad in the world, and people are using it with complete and total misunderstanding. Now, a cynic might think, and I know none of us are cynics, a cynic might think that this is an intention of the automotive companies to make sure that they could basically do whatever they wanted and still say that they were consistent with a minimal risk condition.

Of course we’re not cynics. So we think what really happened is a lot of engineers were arguing over words and came up with a bad set of words to describe a condition that nobody really had defined very well. But that’s probably a separate discussion.

Anthony: Minimal risk condition if you have trouble sleeping at night, minimal risk condition, brought to you by George or.

Thanks for elucidating, illuminating our listeners, and for helping me skip melatonin tonight.

Fred: If you want to skip melatonin for a really long time, get a copy of ISO 26262 and start reading that.

Anthony: How thick is that? Cause I can just smack it against my skull and that might help me sleep.

Fred: It's thick enough to do that. Unfortunately, I think it comes in soft cover, but never mind. You can bind it.

Anthony: Yeah, I can wrap it in steel.

Fred: But thank you. All right, thank you. Thank our listeners, as always, for enjoying our discussion here.

Anthony: Yeah, thank our listeners, and really thank Michael for finally investing in a microphone so he no longer sounds like he's talking into a tin can.

Fred: He sounds beautiful.

Anthony: He doesn't. Thank you. No.

 
