Artificial Intelligence and Auto Safety with Phil Koopman – Part 1
Hey Folks,
Fred is out for the next couple of weeks. In his place we sit down for an in-depth chat with Phil Koopman to discuss artificial intelligence and auto safety. Enjoy.
Support the Show. Support Safety.
This week's links:
Subscribe using your favorite podcast service:
Transcript
note: this is a machine-generated transcript and may not be completely accurate. This is provided for convenience and should not be used for attribution.
Introduction and Episode Overview
Anthony: Hey listeners, Fred's off for the next couple weeks. So we have a couple of short episodes that were recorded with Phil Koopman. These cover artificial intelligence and how this affects auto safety. They're fun. They're about a half hour each, so enjoy. You are listening to There Auto Be A Law.
The Center for Auto Safety Podcast with executive director Michael Brooks, chief engineer Fred Perkins, and hosted by me Anthony Cimino. For over 50 years, the Center for Auto Safety has worked to make cars safer.
Guest Introduction: Phil Koopman on AI and Auto Safety
Anthony: Hey, listeners, welcome to the show. We've got returning guest and fan favorite Phil Koopman coming back, and he's gonna explain to us all about artificial intelligence and safety with automated vehicles.
Phil Koopman: At least I’m gonna try. Thanks for having me back.
Anthony: Perfect. We’re gonna try and come [00:01:00] up with decent questions.
The Human vs. AI Debate in Driving
Anthony: So just before we started, I was saying my big motivation for this is that everyone hears constantly that artificial intelligence, it's smarter than a human already: I'd rather trust a computer to drive a car than a human. They forget the part that, oh, it was those pesky humans who programmed the computers that are now driving the car.
People make mistakes. Doesn't matter where they are in the equation.
Phil Koopman: And they also forget that computers make mistakes that are inhuman and they balance out. They don’t drive drunk, but sometimes they drive pretty darn clueless. So there’s a balance there, and no one knows which way that balance is gonna go.
Anthony: So people always assume anything they see on a computer screen must be true. They anthropomorphize
Phil Koopman: word. Have they actually used a computer?
Anthony: But I've had the issue where my credit agencies think I was born 10 years after I actually was born.
My name is different. I work in a casino. But people say it’s on a computer screen. It has to be [00:02:00] true.
Phil Koopman: Alright. Alright. So we're gonna have a little fun. Yes. Very brief story. I tried to check into a hotel room booked through an online agency once, and I couldn't find my room, and I couldn't find my room.
I’m a junior, it turns out. Don’t ever name your kid a junior. ’cause I was named before all this computer stuff was happening, right? And so I finally found the room. It was booked with last name, JR.
’cause the computer just said the last thing must be the last name and my last name must be a middle name. And it’s just there’s all sorts of ways this stuff goes wrong for things that the people who designed the system just didn’t think of.
Anthony: Exactly.
AI Mistakes and Human Errors
Anthony: And that’s what we see with artificial intelligence.
We see the common examples being ChatGPT and these large language models hallucinating things and literally making things up out of whole cloth, which is a very human thing to do, which I think is amazing. But then we apply it to cars and people are like, people drive poorly, let's let the computer do it.[00:03:00]
Which strikes us as a little dangerous. What do you think?
Phil Koopman: The argument is: people make mistakes, therefore computers will be perfect. And as I've already said, they must not be paying attention.
Phil’s Upcoming Book: Embodied AI Safety
Anthony: You've got a book coming out in the coming months, fair to say?
Phil Koopman: Yeah, hopefully in September. All this news keeps happening and I keep having to spend time on other stuff, but I wanna get back and finish the editing cycles, and it's getting there.
As you guys have seen from the table of contents, it's a real book. I just have to whip it into shape so that folks can look at it and hopefully benefit from it.
Michael: It's called Embodied AI Safety. What does that mean?
Phil Koopman: So, embedded is this thing that everyone thinks is old, so people keep coming up with new names for it.
I've done embedded systems since I was an undergrad in college. Okay. My first embedded system was a hotel control system on an Atari 1200, and if anyone in the audience even knows what that is, congratulations on being old. [00:04:00]
Anthony: Oh, that's not nice. I know what that is. You old...
Phil Koopman: Anthony says he resembles that remark.
Okay. Yes.
Anthony: I was jealous when I had my Timex Sinclair Z80.
Phil Koopman: There you go. There you go. I had the big show, but it was automating a hotel, and embedded was hot for a while, and then it became a little less hot. But then cyber-physical systems became hot, because if you call it a new name, you can get a new round of funding from the government, it turns out.
I apologize, I'm slightly cynical about this, but it turned out I'd been doing it for years. But there was new funding, and they did some research, and there were some new angles, a lot of formalism that hadn't been there before. And that's fine. And that's just come and gone.
And now AI is the new hotness, and if I said embedded AI, people would think it's old. But if I say embodied AI, that's cool, right? So we're gonna say it's embodied AI, and it brings to mind robots, right? It's AI with a body. Okay? It's like a [00:05:00] humanoid robot. But in fact it's embedded systems with AI.
A car, a self-driving car, is a robot; it meets all the definitions for a robot. But it's not a self-driving car book. It's a book about what you do if you want to put machine learning based AI in things, 'cause that's what people mean. Oh, by the way, back in the early eighties when I took a course in AI, it had nothing to do with machine learning.
AI these days means machine learning. So if you put machine learning in an embedded system, what do you do about safety? And so Embodied AI Safety is really to answer that question, and we can learn a ton from what the robotaxi folks have gone through. But those lessons apply way past robotaxis.
And so the idea is to use robotaxis as a platform and say, here are the things you really need to know, here are the lessons we can learn, and here's how the robotaxi guys did and didn't learn the lessons. But also these are general lessons; they apply to anything that's gonna have AI in it.
Understanding AI: Intelligence vs. Plausibility
Anthony: Okay, [00:06:00] so let's take a step back for a second, 'cause people hear that term, artificial intelligence, the computers are smart, the computers are thinking. So just very quickly, they're
Phil Koopman: not thinking, but, okay. Yeah.
Anthony: So that's the thing. I know, and you know, that the term artificial intelligence is a misnomer that unfortunately somebody put out there in the 1950s and then immediately regretted.
Are artificial intelligence machines, are they intelligent? And let’s not get into the whole what is intelligence thing, but it depends what you mean by intelligence. Yeah.
Phil Koopman: And the thing is, if you can fake it, it doesn't matter. And the answer is what you're trying to do with it. There was a program called Eliza, way back, way, way back, and Eliza was pretending to be, it was a chatbot.
It was spooky good. It could convince some number of people that it was a person chatting with it, but it was just a bunch of linguistic tricks. And I remember back when I was in undergrad that it was fun playing with Eliza. It was a blast. But it [00:07:00] wasn't smart. If you beat on it a little bit, you could trick it into saying all sorts of bizarre stuff, we'd find out quickly.
But people who weren't sophisticated about the technology or didn't have good critical thinking skills would get sucked in. And what's happening today is that on steroids. So, is it useful is a different question than is it intelligent. It's not intelligent by any definition of the word I'm comfortable with.
It's a lot of statistics. Especially large language models: they're gathering statistics on language, and they can synthesize text that is extremely plausible because it has the same statistical distribution as the things it's trained on. And for things where factual accuracy doesn't matter, it can do really interesting things.
And so if you're trying to write fiction, it can generate fiction. I personally think what it produces is soulless, but it can put out plausible sentences. [00:08:00] If you are trying to write an email and you're not a polite person, but you wanna fake being a polite person, and this person has done something to irritate you, and you want a very polite email to explain to them how to stop irritating you, it can really help with stuff like that. But you have to be careful, because it's just making stuff up.
Factual accuracy really doesn't enter into it. So there are some things, if you're doing social interaction, maybe that's fine. If you're trying to say, I want the answer to a very sophisticated technical problem, and it's not a problem that's been solved a bunch of times before so it can just parrot back answers, you are gonna have trouble.
Now there's a lot of hacks, there's a lot of ways, and between the time this is recorded and it plays, someone will come out with 10 more ways to make this stuff better. But ultimately it's producing statistically plausible things, where the statistics are based on what it trained [00:09:00] on, but it doesn't actually have a notion of what truth is.
It knows, it's like the stuff I trained on, but that's a little bit different than knowing truth. So, what's that saying? The key to success is sincerity; if you can fake it, you've got it made. That applies to LLMs, right? They're really good at faking sincerity.
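To make the "statistically plausible, but no notion of truth" point concrete, here is a toy sketch (purely illustrative, and vastly simpler than any real large language model): a bigram sampler that picks each next word only from how often it followed the previous word in its tiny training text. The output sounds fluent-ish but is never checked against facts.

```python
import random
from collections import defaultdict

# Toy "language model": count which word follows which in the training text.
# Real LLMs are enormously more sophisticated, but share the core idea that
# output is sampled from the statistics of the training data, not from facts.
training_text = (
    "the bus is forty feet long . the bus stops at the corner . "
    "the car is parked at the corner . the car is forty feet from the bus ."
)

follow_counts = defaultdict(lambda: defaultdict(int))
words = training_text.split()
for prev_word, next_word in zip(words, words[1:]):
    follow_counts[prev_word][next_word] += 1

def generate(start: str, length: int = 10) -> str:
    """Sample a statistically plausible continuation; truth never enters into it."""
    out = [start]
    for _ in range(length):
        followers = follow_counts[out[-1]]
        if not followers:
            break
        choices, weights = zip(*followers.items())
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the bus is forty feet from the bus ." -- plausible, not fact-checked
```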
Anthony: Yeah. So they have no understanding of what they’re putting back to you. It’s very simply not understanding
Phil Koopman: Not understanding in a deep sense, yes. They have some statistical notion of internal consistency, but that's different from an understanding. That's
Anthony: totally different.
Okay.
Phil Koopman: If you can fake understanding, you've got it made. See, I did the setup and I did the callback. Okay.
Anthony: That was perfect. Great. Yeah, so they're essentially zombies. But when we get into robotaxis and self-driving cars and advanced systems, for the AI used there for, we'll say, object detection, where they can see there's some sort of object that's crossing in front of the vehicle,
you don't need [00:10:00] understanding as we know it to get what it is. You just need to know that this is an object, so, well...
Phil Koopman: You need more understanding than you might think.
Anthony: Okay.
AI in Self-Driving Cars: Current Technologies
Phil Koopman: So first of all, LLM-type technology is only just beginning to show up in self-driving cars.
Anthony: Oh, is it actually showing up now?
Are they
Phil Koopman: It is. People are starting to use it. So Waabi wants to use that type of technology to do the training, to do simulation training.
Anthony: Okay.
Phil Koopman: And other companies are talking about it, and I'll just do the cartoon version 'cause it's all over the place: you could feed it a camera image and say, should I turn left or right?
And let the LLM tell you which way to go. And that's probably not a good idea at all, but there are more sophisticated things you can do. And I don't wanna get into the details here, but there are ways you could say, tell me what I'm looking at and where the objects are, and then put that into a path planner.
Yeah. Maybe, you know, that's the kind of thing people think about, but how that's gonna work, I'm not sure; that's early. If you go out on the road today and look at [00:11:00] a Waymo robotaxi, at least as far as we know, or even a Tesla robotaxi, 'cause those are out as of yesterday as we record this,
my understanding with all those is they're not using LLMs, large language models, to say tell me what's going on, I'm gonna feed that somewhere else. They're using end-to-end machine learning or componentized machine learning. And without getting into the depth of the difference between the two, the idea is you put a bunch of data in and it trains on the data.
You don't show it every text it can scrape off the web; it trains off the data you actually feed it. You train it and, for example, in the case of a componentized one, there's a perception box. And you say, this is a person, this is a dog, this is a vehicle, this is a shadow.
And you train it on recognizing: yep, that's a person, that's a dog, that's a vehicle, that's a shadow. And then you throw in camera images and steering commands come out, and somewhere in there is something [00:12:00] about you probably shouldn't hit that, 'cause it smells like something you shouldn't hit. But in both cases, the componentized one is easier to talk about.
In the componentized one, you'll show it a thousand people, 10,000 people, a million people, and it'll get really good, scary good, creepily good, almost like it understands. But here's the catch: that's a person, but if you show it a kind of person it's never seen before, it just makes a wild guess. It's usually pretty bad at knowing,
I haven't seen that before; that's their Achilles' heel. But the important thing is, you say, is that a person? And it'll say yes or no, but it's not actually answering the question you think it's asking. If it says that's a person, it doesn't actually know what a person is, not really.
What it's saying is, that is statistically similar to all the things I trained on that you said were people
Anthony: right.
Phil Koopman: That's different than, that's a person. It smells like a person. Quacks like a duck, walks like a [00:13:00] duck, looks like a duck, must be a duck. For practical purposes, that's a pretty good approach, but it's different than actually knowing what a duck is or what a person is.
It's a little bit different. It's a similarity function as opposed to a deep understanding.
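As a minimal sketch of that "similarity function" idea (a toy of my own, not anything from a real perception stack): a nearest-centroid classifier will hand back a confident-looking label even for an input unlike anything it trained on, unless you explicitly bolt on an "I don't know" check.

```python
import numpy as np

# Toy feature vectors: [height_m, width_m, speed_m_s]. Numbers are purely illustrative.
training = {
    "person":  np.array([[1.7, 0.5, 1.4], [1.6, 0.4, 1.2], [1.8, 0.5, 1.5]]),
    "vehicle": np.array([[1.5, 1.9, 12.0], [1.4, 1.8, 9.0], [1.6, 2.0, 15.0]]),
}
centroids = {label: pts.mean(axis=0) for label, pts in training.items()}

def classify(x, reject_threshold=None):
    """Label by similarity to training examples; optionally allow an 'unknown' answer."""
    dists = {label: float(np.linalg.norm(x - c)) for label, c in centroids.items()}
    label, dist = min(dists.items(), key=lambda kv: kv[1])
    if reject_threshold is not None and dist > reject_threshold:
        return "unknown"   # the honest answer for out-of-distribution input
    return label           # otherwise it always returns its closest match, confidently

seen_before = np.array([1.7, 0.5, 1.3])   # much like the training "people"
never_seen  = np.array([0.9, 0.9, 0.3])   # nothing like anything it trained on
print(classify(seen_before))                       # "person"
print(classify(never_seen))                        # still "person": closest match, not understanding
print(classify(never_seen, reject_threshold=1.0))  # "unknown"
```

The last two lines are the point: without the explicit reject threshold, the system never says "I haven't seen that before"; it just reports whatever the input is most similar to.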
Anthony: Okay.
Phil Koopman: So
Michael: How do you engineer a vehicle when you know that there's not a one-to-one correspondence between the machine's understanding of reality and the human understanding?
Phil Koopman: Lots of folks just ignore that difference and call it a day. Let's say we trained on people, we trained on vehicles, we're good to go. And where that shows up is the edge cases. There'll be a thing that's a person that it misses. Now, this is earlier technology, but we found that people dressed in yellow weren't people, because it hadn't trained on people dressed in yellow, and it wasn't sure quite what they were, but they weren't people.
Because in the training data people aren't wearing yellow raincoats or yellow construction vests; they didn't train in construction areas and they didn't train in the rain, and so it never saw that kind [00:14:00] of thing. And so what you get is weird, really counterintuitive holes in the performance. And the issue you get is that when people engineer these things, especially for safety, they anthropomorphize them, which everyone does; that's a human tendency. If you anthropomorphize it,
you are projecting on it things you think would be hard and things you think would be easy. And what you're neglecting is the things it's gonna make a mistake on, 'cause it never occurred to you it would've trained weirdly. And there's a great example. There was a Waymo crash. Last I checked, they would say they had no injury single-vehicle crashes, or no occupied ones.
But they had a single-vehicle crash without a person in it. It smacked into a utility pole. The reason for it was it had been taught, okay, I'm anthropomorphizing, it had been programmed that utility poles were okay to hit. Now that sounds weird, but if there's a blade of grass [00:15:00] sticking up out of a crack, you have to tell the robot it's okay to hit that.
Otherwise, it's gonna stop for every blade of grass. There are things that are okay to hit, for sure. And they had just, somebody made a mistake, however it happened, that said utility poles were okay to hit. But why wasn't it discovered in testing? Curbs are not okay to hit, and this mishap, this crash, happened when it encountered a utility pole that was not protected by a concrete curb.
It was in a parking alley and there were no curbs, right? So that's that one. And Cruise had one: they hit a 60-foot bus, 'cause they'd been taught that all buses were 40 feet, and it measured the back of the bus by looking at the front bumper and going 40 feet back and saying that must be where the back is, regardless.
Even though the lidar saw the bus, it said, no, that's a false return, 'cause I know buses are only 40 feet long. So it smacked into it.
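Nobody outside the company has seen that code, so the snippet below is a hypothetical caricature of the failure mode being described, not the real implementation: a baked-in prior ("buses are 40 feet") winning out over what the sensor actually measured.

```python
ASSUMED_BUS_LENGTH_M = 12.2   # roughly 40 feet, baked in as "how long buses are"

def predicted_bus_rear(front_bumper_m: float) -> float:
    """Guess where the bus ends using only the baked-in length assumption."""
    return front_bumper_m + ASSUMED_BUS_LENGTH_M

def fuse_with_lidar(front_bumper_m: float, lidar_rear_m: float) -> float:
    """Caricature of the bug: trust the prior and discard the 'impossible' measurement."""
    predicted_rear = predicted_bus_rear(front_bumper_m)
    if lidar_rear_m > predicted_rear:
        # Lidar says the bus extends farther than any bus "can" be long, so the
        # return is treated as noise -- leaving no room for the real back half
        # of an articulated 60-foot bus.
        return predicted_rear
    return lidar_rear_m

# Articulated bus: front bumper 10 m ahead, real rear at about 10 + 18.3 m (~60 ft).
print(fuse_with_lidar(front_bumper_m=10.0, lidar_rear_m=28.3))  # 22.2, not 28.3
```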
Anthony: Wait, that's what happened with that case? That's what that was? Oh my God. That is wild, because that was programmed to say, okay, buses are only this length.
[00:16:00] Yeah. Other systems in the car see that, hey, this is an object?
Phil Koopman: It's got the lidar, it's got the camera, it sees the object on radar. No, I don't know what you're looking at, but it ain't the bus, 'cause the bus is only 40 feet long. Whack. Okay. And so this is tricky, because people say these things.
When I started, I was wrong, but wrong in a really unexpected way. I always said that people make mistakes: they drive drunk, they're inattentive, they make stupid mistakes, right? And I said computers will make different mistakes than humans, because I knew something would happen, right?
Like the Cruise dragging emergency scene tape and downed power wires down the road. You'd have to be pretty out of it to do that as a human driver, okay? But then we saw Cruises and Waymos both going into construction zones.
We saw these cars getting stuck in cement, and people correctly said humans do stupid stuff just like that, [00:17:00] and crashing into a phone pole, yep, people do that, crashing into a bus, yep, people do that. So sometimes they'll do different crashes, but many times they'll do crashes that look like a bad human driver.
But they're doing it for extremely different reasons, fundamentally different reasons. And so computers fail differently than people, and I think a lot of it is they lack the deep understanding. Especially, they're really bad at knowing they don't know.
'Cause a person would say, I don't know what's going on, I'm gonna slow down and figure this out, and they can reason from first principles about what's probably going on for something new. Whereas a robotaxi is just, yeah, I'm supremely confident, and it has no idea what it's doing.
Anthony: So I guess this would be a question more from Michael.
Legal and Safety Implications of AI in Vehicles
Anthony: So our whole legal framework, our safety framework, has been based on humans. And as Phil was just saying, a typical human's gonna be like, I don't know what to do here and I'm gonna stop, or if not, I'm gonna be a maniac and do something dangerous. [00:18:00] What do you do when a computer system does something that is not human-like? Is our legal system going to change to figure out what happened there?
Michael: I hope so. We already see a lot of issues with the legal system just trying to adapt to level two vehicles. So when you get to truly autonomous vehicles, there's a long way to go. Even in Texas, where I think they just recently passed a new law that somewhat heightens the requirements for deploying autonomous vehicles in the state, there's still that struggle. They're having to modify their own state law to account for who's responsible in a crash, who's responsible when there's a citation issued, and a lot of things like that.
So the law is absolutely going to have to change, in all 50 states and federally: in traffic law, in tort law, negligence, product liability. There's a lot of room for change, [00:19:00] and that's probably something that's going to extend well beyond just driverless cars and into a lot of the new interactions that humans are gonna have with computers.
Phil Koopman: I think it's gonna be pretty bad if we try and fix it the hard way. And I agree, Michael. The way I look at it is, anytime a computer supplants human agency, you're gonna have problems if you blame the human who just got supplanted, right? And driving is just one of 'em. Now, the listeners may not have heard this, or not heard it recently, but I co-authored a bunch of this stuff with William Widen, a lawyer.
And I think we were both on as interview guests, so I'll give the short version; that was a while back, but you might wanna point to that old episode. And the short version of what we proposed, to avoid breaking the entire legal system and having 20 years of case law destroyed, who knows how, by lobbying and all this other stuff:
there's actually a pretty simple approach. You just say, everywhere it says driver, you have human [00:20:00] drivers and you have computer drivers. And computer drivers are held to the same standard as human drivers, even though computers can theoretically be better. You're using an existing mechanism to assess their behavior, their duty of care to others.
If you run a red light and hit someone, we shouldn't be talking about product liability. You ran the red light. Why do we care which line of code was bad here? It ran a red light; aren't we done? That's clearly the computer's fault. It's like electronic signatures: for a while,
nobody knew if those were really signatures or not, and they said, everywhere it says signature, electronic counts. Okay? And so our proposal is, everywhere it says driver, computer driver counts too. And in some states you can still throw a computer in jail for bad driving, but we hope those laws go away.
Texas is making strides in that direction, but they're blaming the owner. And if the owner's Tesla or Waymo, sure. But if the owner's somebody who bought 'em to run like an Airbnb, they have no control over the driving code. Blaming them, [00:21:00] throwing them in jail, ain't gonna fix anything, right?
And ultimately it has to be the manufacturer. There are some subtleties about who that is, but whichever corporate entity has the ability to fix software defects, whoever does the remedies for recalls, they should be on the hook for tort liability if the thing drives like a crazy human drove.
Engineering Challenges and Future of AI in Vehicles
Anthony: So with these systems, are they set up in a way that they can figure out what happened? So the example you gave of the buses being 40 feet, not 60 feet: a lot of these machine learning systems, neural networks, are a black box. We have inputs, we have outputs, we don't know what happened in the middle. Are any of these companies taking the approach, from a safety engineering point of view, to say, hey, can we back-trace and figure out why it thought buses are only 40 feet?
Some of this I imagine you have to guess, but,
Phil Koopman: The new hotness here is end-to-end machine learning, and large language models are the thing beyond that. And so we're going away from transparency. In the old componentized stuff, you had a perception box, and then you have a planning [00:22:00] box, and then you have a prediction box.
You have all these boxes, and you can see if a box got it wrong, 'cause an image goes in one end and a list of objects comes out: oh, that's a person, and it misclassified it there. It's actually easier to see what happened. And I think Cruise, when Cruise had that crash, they were more that way. And Waymo,
last I heard, is that way. But when you go end-to-end, which is what the cool new kids are doing, a camera image comes in and left, right, fast, slow comes out. That's what Tesla FSD is these days, as far as I know, and I have pretty good info on that. And so when it makes mistakes,
you have no idea why, not really. There are techniques you can use to crawl back through the layers of end-to-end machine learning, but as a practical matter, it's just: yeah, I got it wrong, train it some more so it doesn't get it wrong next time.
Anthony: So let's say, is the engineering still a thing if you can't figure out why something happened?
Phil Koopman: There are some folks who are being more clever about it. I can't speak specifically about it, but I published a [00:23:00] paper before all this happened that said if you have end-to-end, you really wanna split it in the middle and have a perception front end that tells you what it saw, and a planning and execution back end.
Exactly so you can diagnose: did it do the right thing for the right reason? So that's my favorite architecture. It's been my favorite architecture since 2017 or 2018 when I wrote that paper. Maybe we'll see it make a comeback. Who knows?
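As a rough illustration of why that split helps with diagnosis (my own toy, not the architecture from the paper or from any company's stack): if perception hands the planner an explicit, loggable list of objects, you can tell afterward whether a crash came from mis-seeing or from mis-deciding.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DetectedObject:
    label: str          # e.g. "person", "vehicle", "utility_pole"
    distance_m: float
    okay_to_hit: bool   # illustrative flag; real stacks encode this differently

def perceive(camera_frame) -> List[DetectedObject]:
    """Perception front end: camera in, explicit object list out (stubbed here)."""
    # In a real system this is a learned model; the point is that its OUTPUT is
    # human-readable, so a mislabel can be found in the logs after the fact.
    return [DetectedObject("utility_pole", distance_m=8.0, okay_to_hit=True)]  # the bad label

def plan(objects: List[DetectedObject]) -> str:
    """Planning back end: decides based only on the object list it was given."""
    for obj in objects:
        if not obj.okay_to_hit and obj.distance_m < 10.0:
            return "brake"
    return "continue"

objects = perceive(camera_frame=None)
print(objects)        # the inspectable interface: perception called the pole "okay to hit"
print(plan(objects))  # "continue" -- planning did the "right" thing with a wrong input
```

With a pure end-to-end stack there is no such intermediate interface to log, which is exactly the diagnosability problem discussed above.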
Anthony: I’m rooting for it. Yeah. Michael, what did you have?
Michael: I’m rooting for that as well.
It just seems insane to me to have a black box in the middle that there really is no ability to fix. We're used to being able to fix cars, and here it's, we can give it some more training data and just hope for the best.
Phil Koopman: I know we’re running towards the end.
Let me make my last main point and then you guys can pepper me with follow-ups. Right now Waymo, last I checked, did componentized, and I'm sure they and everyone else are looking at end-to-end, 'cause that's the thing. But with componentized, you have these chunks of machine learning and you glue 'em together with conventional software, and that makes [00:24:00] it a lot more labor intensive, 'cause you have folks writing software to glue it together, but it makes it very easy to put in special cases.
No right turn on red, but only in New York City. That's a line of code. How do you train something, with no lines of code, on that rule? Especially if none of its training data was in New York City. Or if it is in New York City, how does it know that New York City was the thing that made it different? Who knows?
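In a componentized stack that kind of special case really can be close to one line of conventional glue code. The function below is a made-up illustration, not code from any real vehicle:

```python
def right_turn_on_red_allowed(city: str, light_state: str) -> bool:
    """Conventional glue code: an explicit, auditable, testable special case."""
    if light_state != "red":
        return True
    # The New York City exception is one readable line:
    return city != "New York City"

print(right_turn_on_red_allowed("Pittsburgh", "red"))     # True
print(right_turn_on_red_allowed("New York City", "red"))  # False
```

An end-to-end model has no natural place to put that line; the rule has to be implied by whatever training data happened to come from New York intersections.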
Whereas end-to-end is literally camera images in one end, left, right, fast, slow out the other, and the rest is just magic. Now, there actually is a bunch of engineering that goes in, especially with transformer technology with multiple heads, that sort of fills in for these things.
So it's a little less magic than it sounds, but it gets pretty squishy, it gets pretty tenuous. It's hard to manage what's really going on in there, and for safety validation, it's really painful. What do I expect with these technologies? The dynamic I expect to see is this: with end-to-end, there's not a lot of manual engineering, and you can feed it just as much data as you want, [00:25:00] unlabeled, whereas the componentized guys have to pay for every image to be labeled.
Some human says, that's a car. Maybe they have some auto-annotation system, but ultimately some human says, yeah, that's a car, yeah, that's a person. With end-to-end, you just throw it a bunch of data and let it figure it out. My expectation is end-to-end machine learning, like Tesla and some of the truck folks, is going to be much easier and potentially cheaper to get working pretty well.
A small team can take a ton of data and get it to work pretty well. So driving competence is easier with end-to-end: driving a hundred miles without running into something is gonna be easier with end-to-end, 'cause you don't have these people doing all the special cases, right?
However, safety isn't about the average case. Safety isn't about the first hundred miles; it's about the first hundred million miles. And so what I expect is that end-to-end is going to have a much, much harder time dealing with the edge cases that live out past a hundred thousand or a million miles. It's one fatality per hundred million miles, in round numbers.
[00:26:00] So fatality safety lives out in the hundreds of millions of miles. End-to-end is really gonna struggle, 'cause nobody has the machinery to train it on a billion miles; you get a few thousand miles, a few tens of thousands of miles, about, and you just run outta steam to hold all the stuff. Now you can be clever, you can curate the data you train on to include a bunch of edge cases, but that's where the engineering shows up.
So the engineering isn't writing lines of code as you do in the componentized approach; it's curating the training data to make sure it's seeing the stuff it needs to see. So you haven't gotten rid of the engineering effort. Back to your question: what you've done is you've transformed it from a writing-code thing to a collecting-data, curating-data thing, which is a lot more indirect.
And you're trying to steer this thing with a really loose coupling. You don't have a rigid pole to move the machine learning around, this is an analogy of course, to move it around and put it where you want it. You've got this sort of rubber hose that's flopping around.
You're trying to steer it through a rubber [00:27:00] hose. It's gonna be tough. And so I expect companies using end-to-end machine learning will make impressive gains much more quickly than other teams, and, just to pick numbers, they're gonna be 10 times quicker getting stuff on the road and it's gonna take 'em 10 times longer to get safe.
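A back-of-the-envelope calculation of why "safety lives out in the hundreds of millions of miles": by the standard statistical rule of three, demonstrating with roughly 95% confidence that a fleet is no worse than about one fatality per 100 million miles takes on the order of 300 million fatality-free miles. The figures below are illustrative round numbers, not anyone's actual fleet data.

```python
# Rule of three: if zero events are observed over n trials, an approximate 95%
# upper bound on the event rate is 3/n. Here the "trials" are miles driven.
human_fatality_rate = 1 / 100_000_000        # ~1 fatality per 100M miles, round numbers

miles_needed = 3 / human_fatality_rate       # fatality-free miles to show "no worse than human"
print(f"{miles_needed:,.0f} fatality-free miles needed")          # 300,000,000

demo_miles = 100_000                         # mileage where a demo already looks impressive
print(f"that is {miles_needed / demo_miles:,.0f}x a {demo_miles:,}-mile demo")  # 3,000x
```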
Anthony: Oh boy.
Conclusion and Next Week’s Teaser
Anthony: Alright, listeners, we're gonna wrap this one up here. Next week we're gonna ask Phil why pretty well isn't good enough, and whether this is just the steroids of software engineering: you're gonna get great gains, but long term it's not gonna work out well for you. And with that, we'll be back next week, but for us, it'll just be a minute.
Phil Koopman: Thanks. See you next week. Bye. For more information, visit www.autosafety.org.