Do neural networks get distracted?
No idea, and we don’t directly address that question. Instead we discuss how you’ll never know what goes on inside the “mind” of the AI. Plus: GM Cruise releases a report that says they failed because they think they are better than regulations and regulators, distracted driving is on the rise, elderly drivers consider signing advance directives for their kids to take away their licenses, and Tesla keeps calling cruise control “full self-driving.”
This week’s links:
Subscribe using your favorite podcast service:
Note: this is a machine-generated transcript and may not be completely accurate. It is provided for convenience and should not be used for attribution.
Anthony: You are listening to There Auto Be A Law, the Center for Auto Safety podcast with executive director Michael Brooks, chief engineer Fred Perkins, and hosted by me, Anthony Cimino. For over 50 years, the Center for Auto Safety has worked to make cars safer.
Michael: Yeah, as ready as I’ve ever been.
Anthony: Great. I’m glad you’re ready, ’cause we’ve started. Uh, good morning, world. Hey, uh, you guys ever hear of a company called, uh, Cruise? Uh, I’ll give you some background. So once upon a time, there’s this guy named Kyle. He was what they used to call a tech bro. Thankfully, tech bros don’t exist anymore because they’re jackasses.
Okay. This is a fantasy, so
Michael: Anyway, that’s a fantasy there. Tech bros clearly still exist.
Anthony: I know. It’s, it’s disappointing. But anyway, so
Michael: I would say Kyle’s probably still one in his next incarnation.
Anthony: Yeah. Well, uh, as we’ve talked about on this program before, uh, GM Cruise doesn’t really exist, kind of, sort of, maybe, now.
And, uh, after they ran over a human and dragged her, and then lied to regulators about it and got caught, they hired a law firm to say, ha ha, you guys run the company and, uh, figure out what happened. And so the report on that came out, and the too-long-didn’t-read is basically: tech bros gotta tech bro.
They think regulators are dumb, they’re stupid, they get in the way. So we’re not gonna listen to regulators. What do these people know? We’ve been designing cars for an hour and a half. Regulators? They’re like, I don’t know, they smell like old people. Gross. Uh, what was your take? That was my takeaway from the report.
Michael: My take, I mean, the report is, obviously, you know, one-sided to an extent. It really chronicles the messages that went on between the Cruise employees, and interviews they did with the Cruise employees. And, um, so we really don’t get to hear the side of the story from the folks over at NHTSA, or the folks at the California DMV, or some of the other agencies that Cruise interacted with.
So, you know, I would expect it gives a little leeway to the folks at Cruise. You know, some of the stuff is hard to believe. Um, you know, there’s some manipulation of the video that’s going on, and there’s a lot of conversation internally about how long of a video they’re gonna show to different regulators, which is, you know, pretty concerning.
They were focusing on proving that the Cruise vehicle wasn’t the first car to contact the pedestrian, but they seem to really, really wanna avoid showing the regulators the events that occurred afterwards, where the Cruise vehicle dragged the pedestrian. Um, and, you know, the engineering report that came out with this report, from Exponent, you know, pins the blame firmly on the vehicle.
The vehicle, um, didn’t think that where it had stopped, on top of this pedestrian, was an acceptable stopping location, because the vehicle didn’t think it was in the proper lane. Um, and so it chose to move. It apparently didn’t have any way to determine whether or not it was sitting on top of a live human being, which is a huge gap.
Um, but Fred, did you have a quick take on it as well?
Fred: Well, a couple things. Yeah. Um, the first is that the behavior of the Cruise vehicle after striking that woman was completely consistent with the definition of minimal risk condition, which is kind of the gold standard for what a self-driving car is supposed to do after bad things happen.
So that whole structure for safety that’s built around this minimal risk condition definition is faulty. It’s just wrong. There’s no there there. That was my first take. My second take on it is, uh, do you like irony? I like irony. Oh yeah. The report was issued, in a sense, in the spirit of making sure this investigation was rebuilding the reputation, rebuilding trust, transparency, all those good things.
Uh, then there’s a note in there that says the published report doesn’t include pages 99 through 104, the pages that describe what Cruise has got to do to remediate the problems. I thought that was very interesting. The report they published only went up to page 86; all the recommendations by the group, uh, were omitted from the publicly published report.
Anyway, that was my take on it. Did I miss something there, Michael?
Michael: Oh, there’s a... No, I was, um, you know, kind of wondering about a few things in the report. Um, you know, obviously there’s a lot of weird stuff going on with video editing, and trying to show regulators at NHTSA and in California, you know, kind of a rosy picture.
I mean, it looks like Kyle was pushing for a very short video to be shown to regulators, one that would’ve only showed the initial strike on the pedestrian (not by the Cruise vehicle, but by the Nissan, whose hit-and-run driver, I believe, initially hit the pedestrian), and didn’t wanna show them even the part where the pedestrian was dragged after the crash.
That is, you know, that would’ve been really, really, really bad for them, I think, if that was the only portion they showed. It looked like, and this is tentative, we can’t rely on everything in this report, but it looked like they ultimately had a full video that they brought into these meetings with regulators, and that they played.
But for some reason, and this is all really strange, which to me sounds like a lot of bullshit, they say they had internet connectivity issues on three different occasions while they were trying to show that end of the video to regulators. I don’t really buy that.
I mean, I think Anthony probably really doesn’t buy that. It just sounds like, you know, something that was made up after the fact to explain it. It seems like the folks at NHTSA and the folks in California disagree with that assessment. And it also seems like Cruise was taking the position that if they just submit the full video to regulators, then they don’t have any other responsibilities from a safety perspective.
Um, you know, we’ve talked about that here before, and the standing general order at NHTSA that requires them to report, you know, within one day and then 10 days and 30 days of this type of incident. Cruise never mentioned in the one- or 10-day reports that there was a post-crash pedestrian dragging incident that occurred.
So they were clearly trying to hide that. And whatever you think about their internet connectivity problems, which to me seemed to fall in line with them trying to obscure the fact that this pedestrian was run over from the very beginning of this process, it’s just really, you know, disappointing. They were literally making arguments that they weren’t responsible for reporting, uh, that they weren’t aware they had the duty to report post-crash incidents to NHTSA, even though that is 100% spelled out in the standing general order that requires them to submit that kind of data.
And it kind of shows where their priorities are. I mean, it doesn’t seem like they considered post-crash safety fully, and I think that may be an understatement, in programming these vehicles. So, like I say, it’s the most thorough look at the events that transpired after October 2nd, when this event happened in San Francisco.
I don’t think it’s the full story. I think there are some pretty unbelievable inconsistencies contained in the report that point to some shady things going on behind closed doors. Maybe that’s why GM yesterday, I think, decided to cut Cruise’s budget next year in half.
Anthony: Fair, fair. No, go ahead, Fred.
Fred: Is it fair to say you’re throwing a bullshit flag here?
Michael: Yeah, I mean, I just don’t buy it. You see the senior leadership and their, you know, legal and lobbying team kind of scrambling to disassociate themselves from the post-crash incident from the very beginning, from when they first hear about this crash. And, you know, when you’re filing an official report with NHTSA for the standing general order, in the one day after the event, and then the 10 days after the event, and they don’t cite this severe incident that occurred post-crash because of the way you’ve programmed the car...
If you don’t put that into writing, or don’t note it for the regulators, then, I mean, I don’t see any other way to see this than as them trying to obscure, to omit, the existence of that event, um, because they knew it was gonna come back and bite them in the ass. Now, if they had been more forthcoming about it, do I think they might have avoided some fines? Maybe. But, you know, even then, it’s a pretty big gap.
I mean, if you’re not considering the movement of vehicles post-crash and nailing that down, then you have not done your job, uh, at all, on safety. I mean, you’ve got pre-crash events, you’ve got crash events, but post-crash events are often forgotten about. And it’s something we’ve always worked diligently on here at the Center for Auto Safety, because of, you know, vehicle fires and a lot of events that can occur after the initial contact and the initial crash that can still be life-threatening.
Fred: One of the issues related to that is that we have advocated for autonomous vehicles to have some capability for automatically notifying emergency authorities when bad stuff happens. Uh, that did not happen in this case; the notification was by bystanders, uh, who called, uh, emergency services, if I remember right.
But, you know, that’s a huge gap in the AV programming as well. I don’t know how you close that gap either, but, uh.
Michael: Yeah. And there were a couple little notes in there that I thought were worth bringing up. One of them was that even after, you know, this happened, the safety and engineering teams at Cruise raised the question of whether to ground the fleet completely until they came out with a fix for this specific type of incident,
um, or whether to, you know, just keep going forward. And, you know, Kyle and another member of the senior leadership said, nah, you know, the data’s insufficient to justify a shutdown, saying that, you know, this was an extremely rare event and an edge case. And I, you know, I kind of take issue with that.
This is not an edge case. This is something that happens all the time in America. Vehicles end up on top of pedestrians somehow, and the vehicles can’t be moved until emergency services arrive and the person can be extricated from that situation. That is not an edge case; it happens all the time. You simply didn’t plan for it and design for it.
So that’s, that’s completely on them.
Anthony: Well, here’s the thing. My big takeaway from this was two things. One, I could believe that the internet went down three separate times, ’cause this is Cruise, where they caused a massive traffic jam, uh, because their cars didn’t have internet access anymore.
So that’s possible. Also, how would they know about post-crash regulations? They’re only a subsidiary of General Motors, and General Motors is new to the car industry. It’s only been, what, a hundred years.
Michael: Oh, really? Yeah. You know, General Motors is not mentioned at all in the report, right? So they’re staying as far away from this as possible.
Anthony: But really, the big thing I see with this, and this is the trend you see with all tech bros, is: we know better than regulators, we don’t need regulations, get out of our way. And one thing we’re working on now is a longer, kind of in-depth piece on autonomous vehicles. And one of the questions I put forward to Fred here was, hey, why do engineers actually need regulations?
How will this help them? So, you wanna give a sneak preview of your answer?
Michael: Yeah, I mean, and in this case, they could have just looked at the AV Consumer Bill of Rights, the one that talks about post-crash safety, just for the high-level look at how and why this needs to be done. I mean, you cannot ignore post-crash safety.
That is critical.
Fred: A sneak preview of the influence of regulations is that every time you plug something into a wall, it works, because the regulations require that the plugs and the sockets are compatible. Um, they help everybody.
Anthony: Regulations: good. Okay. Uh, let’s stick in the land of, uh, of tech. Um, so you guys hear Apple is coming out with a car?
Mm, I don’t know why. I mean, but anyway, there have been rumors of this for, oh, about a decade now, maybe a little bit more. They tried at one point to get Hyundai to make it for them, and Hyundai’s like, pass. I think they tried another manufacturer, and they’re like, pass.
And so their thing was, hey, it’s gonna be fully Level 5: drives itself, makes you coffee, downloads apps for you, it’s wonderful. And now they’ve scaled back their ambitions, as they’ve spent, you know, a lot of money and a lot of time trying to figure out how to make this work, and realized, hey, this is just gonna be a Level 2 plus vehicle.
The plus means, I don’t know. Uh, but briefly, a Level 2 vehicle is no different than what I bought four years ago. It just gives you lane keeping assist and, uh, what is it, the adaptive cruise control that’ll adjust speed up and down. So, from a company that brought you face computers, um, this seems, uh, not that ambitious. Or is it just a nice highlight of: this is a really, really hard problem?
Michael: I think it’s that they were overly ambitious from the beginning, because, you know, from the start this was a pitch, along with the other guys who were trying to do this, Google, Amazon, Microsoft, for a driverless car. I mean, that’s what they were going for, and now they’re seeing the complications and the expenses involved in actually getting to that point.
They’re saying, whoa, we’re gonna drop back to this non-existent thing called Level 2 plus, where we can put technology in cars and then continue to blame drivers when they do something wrong. But I have a probably more cynical view on this whole thing. I don’t know that Apple really wants to be in the car game at all.
I certainly don’t think they want to be in the actual process of manufacturing an entire car on their own. I think where they really want to be is in front of your face every time you turn on your car. And that’s what they were hoping to do here. And that’s why, you know, I think they don’t really see a giant future in the Level 4, in the robotaxi space.
I think they probably see a lot of what we’ve said, which is that there’s just not this massive use case that’s being advertised by the people creating this technology. There aren’t people clamoring to hop into one of these vehicles, other than maybe for, you know, a tryout, or people who are enamored with the tech and wanna hop in and do something cool.
Um, you know, it’s just not something that’s gonna take off. But everybody’s car is going to have a lot of technology stuffed into it over the next 10 years, and wherever Apple can get their toe in there and generate some, some more, uh, profits, they’re going to do it.
Fred: My take on it is a little different.
Uh, and I do own Apple stock, I just want you to know, a small amount of Apple stock, so, full disclosure. But I think perhaps they’ve looked at it and, like Michael said, have said, holy shit, this is bad. Why should we pour money down a rat hole when there are really much slower ways to lose money?
I mean, they’re making 50% margin on everything they put out, which is, you know, an enormous amount of profit for the little metal boxes that they sell. Um, why would they want to pour billions of dollars into a business where they’re gonna get 10% margin? Ultimately, that’s, you know, that’s what the car companies do.
Yes, there’s a massive amount of sales, and 10% margin comes to a lot of money. But Apple’s already a pretty big company. And if I were running Apple, and there’s probably a lot of good reasons why I’m not, I’d look really hard at putting billions of dollars into a low-margin business that has lots of competition.
It seems like a bad idea to me.
Anthony: Okay. That’s two votes against the Apple car.
Fred: Um, but Oscar Mayer has a car, so, you know, maybe there’s something there as well.
Anthony: Uh, it, it could be something there. You, you never know. Uh, sticking along the lines of tech companies pretending to be auto companies.
No, that’s too cheap of a shot. Tesla. Uh, Tesla, uh, has something called full self-driving, which we’ve talked a lot about in the past. Full self-driving, which is just, you know, lane keeping assist and automatic cruise control. But they say, hey, it’s full self-driving, and people believe that, and they, uh, let their car crash without them paying attention.
Crash itself. Crash itself. Exactly. So, uh, they came out with version 12 of their software. Now, this is something they’ve been selling since 2017 or so. And since 2017, you pay them a ridiculous sum of money and they keep selling you beta software, which, you know, makes Apple’s business model look stupid.
Uh, and, uh, so version 12 came out, and this is part of NHTSA saying, hey, you guys gotta stop this nonsense, and you actually have to warn people: hey, you gotta pay attention, you need to be fully engaged as the driver. And so they went and did this. And Tesla owners are just like, yeah, I don’t like that, the car’s bothering me now.
But also, from our perspective, we’re like, yeah, this is a really crappy version of doing this. Um, so I, I don’t know. I’ve never been in a Tesla.
Michael: Um, you know, I think the interesting thing here about full self-driving is, you know, they’ve continued to promise it and promise it. And I think when they first started offering the full self-driving platform for purchase, you know, somewhere near half of purchasers were buying it.
They were lapping up the Elon Kool-Aid and plunking down another, I don’t know what it was, 10, 15 thousand dollars. There’s also a monthly fee, I believe, if you wanna go that way. But now the uptake seems to be very low, maybe even under 10%. Um, so people are wise to the wool that’s being pulled over their eyes on this one, I think. You know, we still have a small number of Tesla buyers who are drinking the Kool-Aid.
But it’s, I guess, encouraging to know that not everyone who buys a Tesla, and not even a majority, believes that this stuff is actually working and keeping them safe. Um, but I mean, that raises the question, which I believe is raised in the article here: is this, you know, the last hope for Tesla’s self-driving?
I mean, nobody’s buying it in their new Teslas now. Why are you even offering it? Why are you continuing to sink money into it and develop it if it’s, you know, not something your customers think they can trust, and not something they’re really interested in? Um, and, you know, without going too deep into all of the problems we see with full self-driving, it’s literally just another one of these bells and whistles that Tesla is offering on their vehicles, along with Autopilot.
And some of the other things that have come along are just, you know, shiny objects intended to attract buyers and to separate them from their money. Um, and the promises of what those systems are going to achieve often fall well short of what’s advertised, in the real world.
Anthony: Fred, you’re muted, but I, I can see you’re very animated about what you’re saying, about how much you love Elon Musk.
Fred: It was weird, but too many buttons. I’m an older person. It’s an interesting sales pitch by Tesla, which is that they’ve got a lot of performance to offer, right? That’s one of their big things. They go zero to 60 in three nanoseconds, even when they’re towing a Porsche 911, and all these wonderful things. They’ve got tight steering.
They can go around corners really fast. And then they come along and say, yeah, but you don’t really wanna do any of this. For additional money, we’re going to offer you the prospect that you’ll never be able to take advantage of any of this high-performance stuff we put into the car, because I guess we at Tesla think you get bored with it really quickly and you want to go to sleep.
There’s really two different and nonconvergent strains in their marketing. One is that, yeah, you know, you can really smoke everybody else on the street, and the other is, eh, you can go to sleep and not worry about it. I don’t know, I see that as a conflict, but maybe there’s a complementarity there that I’m just not clever enough to find.
Anthony: I think that’s a new racing format: sleep drag racing. Ooh, I like it. You get behind the wheel, and whoever passes the finish line while asleep wins.
Fred: Wake up the next morning, you find out if you won.
Anthony: That’s a great idea. Exactly. Hopefully you stay away from Twitter while you’re on Ambien. Ah.
So, uh, we’ve discussed this before. What’s the most dangerous thing you can do behind the steering wheel? No, it’s not what you’re thinking. It’s playing with your cell phone. That’s right: distracted driving. We had a sheriff on from Massachusetts a couple years ago, and I asked him, what’s more scary to you?
Uh, teenagers or drunk drivers? And he did not hesitate, and said people on their phones, which was surprising to me. But now there’s a report from the Insurance Information Institute. Um, and part of this, which is mind-blowing to me, from the report: a total of two and a half percent of drivers stopped at intersections were talking on handheld phones at any moment during the day in 2021.
Wait, so not only do you get pulled over, but you’re still dumb enough to have your phone out? Like,
Michael: Well, I think, is that what I’m reading? Those come from observational studies, I believe, where they have, you know, basically someone sitting at an intersection doing a survey, watching vehicles that are stopped at the intersection. So they’re not actually pulled over and still on their phone.
And at the point that you’re pulled over and your vehicle is stopped beside the road, you could probably pull out your phone without any penalty. So I think that’s the observational study.
Anthony: Uh, but yeah, this is part of the whole problem we’ve discussed with distracted driving, um, that it’s increasing dramatically.
Uh, and this is obviously causing everyone’s insurance rates to go up. And it’s not just talking on your phone, it’s people texting while driving, heck, people browsing while driving, which is just, I mean, I’ve seen it in other cars. It’s frightening.
Michael: Well, and I think the really frightening thing about the, um, kind of report they put out is that distracted driving, it’s not going down at all. Um, it’s going up, and it’s not going up only because of cell phones. Cell phones are the number one problem, but there’s also a lot of technology in the car that is not being put in there thoughtfully. You know, there’s a lot of distractions with your, um, navigation system.
You know, just think how far you have to look away from the road to change your air conditioner. And that’s whether you have a screen or not. It’s a lot easier once you learn how to do it, with, uh, some time in the vehicle. But, um, with some of the touch screens it can be even more difficult, ’cause you don’t have a physical button that you can rely on anymore, and you can’t really reach out and feel the button.
You have to look at the screen and determine exactly where it is. So the scary thing about this is that, you know, yes, handheld cell phone use and some things have gone down, but there are more things being put into your phone that people are using while they’re driving.
There are more things being put into the car that people are going to be using, and potentially strapped in with, while driving. And on top of all that, you know, the cost of insurance is going up significantly. And I think there are a lot of reasons behind that, everything from, um, you know, electric vehicles being super expensive to replace, to crash parts going up significantly because of all the sensors and electronics that are in cars today.
But, um, kind of the interesting part of this report, even beyond the distracted driving, which it appears we’re not really getting right, we’re not getting a lot done, hopefully driver monitoring or something will come in the next decade to start helping solve that, but the thing that’s really interesting is something called the insurance industry’s personal auto combined ratio.
Now, that’s basically a zero-to-100 scale, or a zero-to-200 scale, I’m not the greatest numbers guy, but basically, right now the insurance companies are paying out more in claims than they’re taking in from insurance premiums. Um, which means that, you know, the only way those insurance companies could stay afloat in that environment was with their other investments.
They’re not making any money directly on the insurance right now because of all these cost pressures. And, you know, that can really only result in one thing, ultimately, as this continues to move along: both distracted driving, if we don’t get that under control, and, obviously, vehicle costs and crash costs, which are probably not going down significantly, because more electronics and more expensive components are continually being added to them.
So the outcome of this, I can only see as being higher auto insurance policy rates for all of us. Um, so that’s on top of the problems caused by distracted driving; that’s also a big negative in this area. And, you know, I think that’s what the Insurance Information Institute is trying to point out in this report: there are some serious and significant cost effects here that are threatening its business, and the entire industry, and our consumer pocketbooks.
And a lot of it has to do with the stuff that’s going into our cars, being put there by manufacturers, but also being demanded by consumers who want some of this tech.
Anthony: You know who, uh, doesn’t get distracted while driving and playing on their cell phone? GM Cruise, because it’s too busy driving into wet cement.
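For listeners curious about the arithmetic behind the combined ratio Michael mentions: it’s simply claims plus expenses divided by premiums earned, expressed as a percentage, and anything over 100 is an underwriting loss. A minimal sketch, with made-up numbers (nothing below comes from the Insurance Information Institute report):

```python
def combined_ratio(claims_paid: float, expenses: float, premiums_earned: float) -> float:
    """Insurance combined ratio, as a percentage.

    Over 100 means the insurer paid out more in claims and expenses
    than it collected in premiums (an underwriting loss), which is the
    situation described for personal auto lines in the episode.
    """
    return 100.0 * (claims_paid + expenses) / premiums_earned


# Illustrative numbers only, not from the report:
ratio = combined_ratio(claims_paid=80.0, expenses=32.0, premiums_earned=100.0)
print(round(ratio, 1))  # 112.0, i.e. $1.12 paid out per $1.00 of premium
```

On a scale like this, an insurer at 112 can only stay afloat on investment income, which is the point being made about rising rates.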
Um, yeah. Uh, have there been any studies on hands-free, uh, use of your cell phone? Like, if I’m driving down the street and I can tell my car, hey, car, play the latest Mastodon album, and it does that. Are there any studies on whether that makes things more dangerous?
Michael: I mean, if you’re using it simply hands-free to turn on music like that, um, that’s a pretty quick transaction. I’m not sure if there have been any studies on that. There have been studies on hands-free conversation. If you’re conversing with someone, whether with your hands or just on the Bluetooth in your car, while it’s maybe marginally safer, it’s still risky, because, I guess, the cognitive processes it takes for you to carry on a conversation are distracting you from the cognitive processes you need to have in place to drive safely.
You know, any distraction is a distraction. You know, there’s a baseline. It’s kind of like when I go on long trips. My daughter thinks I’m weird, but I don’t turn on music. I don’t really like the radio. I just kind of want some peace, and to drive along with nothing going on.
Um, it’s not really because I don’t wanna be distracted, or I’m some kind of anti-distraction warrior. It’s just because I enjoy the peace, and, um...
Anthony: Is this just a long-winded way to say you don’t wanna talk to your daughter on a long road trip?
Michael: No, no, no. If she’s in the car, we talk, we talk. And she actually forces me to play music.
But if it’s just me, I prefer no distractions. And, you know, you can be distracted by virtually anything, from, you know, dropping a cupcake on the floor, uh, eating a cheeseburger, trying to dip your fry in some ketchup, the music you’re listening to. I mean...
Anthony: Wait, wait, wait. Let’s, these are very specific examples you gave here, of dropping a cupcake.
Fred: Yeah. I thought this was an interesting profile of your driving behavior.
Anthony: What, behind the wheel, Mr. Brooks?
Michael: You know, I have known families who’ve lost, you know, children to crashes where the kid was eating a cheeseburger, and for whatever reason that distracted them long enough to... it doesn’t take long, right?
It takes a millisecond or two, if you’re doing something pretty poorly, if you lose control of your steering or anything like that. It doesn’t take long for these things to happen. And so even the most minute distraction can cause a problem. So, you know, there is no good distraction.
You know, it’s hard to say that some distractions are better than others. I guess that’s what I was getting at with the whole cell phone thing. Whether it’s handheld or you’re using your Bluetooth, you’re still multitasking. Yeah. You’re doing two things at once, and, you know, that’s the problem.
And people are starting to treat their cars more like campers, you know? They’re trying to load them with movies and video games and features to let the car handle the safety business for a while, so they can just relax. Um, and it seems like everyone, or at least a certain subset of the population, is really pushing for those types of features.
And inevitably, if systems aren’t put in place that can monitor drivers and ensure they’re staying on track, those are gonna cause more safety problems. Hmm.
Anthony: Um, so Fred, you mentioned earlier, and I apologize in advance for how I’m gonna phrase this, ’cause I don’t have exact recall of what you said, but basically you said, ah, there’s too many buttons,
uh, I’m an old person, when you were on mute there. Fair enough? Roughly?
Fred: Self-deprecating humor, but yes. Yeah.
Anthony: Okay. Yes. So there’s an article in the Washington Post, um, about aging drivers refusing to give up the keys. And it starts off with, uh, a guy who says when he turns sixty-five in four years, he’s going to sign an advanced directive for driving.
Which basically, it sounds like it’s gonna give his kids the ability to say, dad, you can’t drive anymore. Um, we’re taking, I mean, we’re taking away your ability to drive. I don’t know how that’s enforced, if this is just kind of a, it’s,
Michael: It’s not enforced. I don’t think it’s enforceable. Um, but it’s, you know, basically what it says is, hey kids, when you see me reach a point where you’re afraid of me going out on the road, that I may be a threat to myself or others, then they can come to me and say, hey dad, you’re putting the keys down, right? And that’s it. But the problem is that, you know, these aren’t enforceable yet. I mean, I suppose you could create an enforceable contract of some sort that would allow for this to happen. Um, but it’s a huge issue.
And my biggest question about this is, why are children having to monitor their parents and make a value judgment on whether that parent is capable of driving? I mean, are the children good drivers? Are they equipped to make that assessment? And this goes back to a big issue, which is that I think states really need to step up here and start ensuring that not just the elderly, but young people and people in, you know, the prime of their life, from whenever that is to whenever that is, can, um, be tested frequently.
Maybe not every year, but every five years when you renew your license. Maybe after you reach a certain age, where cognitive decline is more likely to occur, then you have tests that occur more often. But it’s really difficult for me to think that a directive-type setup is what’s ultimately going to work here.
Um, you need the ability to enforce it, and you functionally need the state to come in and take someone’s driver’s license away in many of these situations. And in some situations, I don’t even know if that will stop some folks from driving.
Anthony: So I remember it was in the late nineties.
It was ABC, I think it was like Primetime. They had a special about aging drivers. And one of the things they had was showing that as you age, your field of vision collapses, or, um, there’s these huge blind spots in it. And they had this one scene where this driver’s going through an intersection, not being able to see anything on his left side, and almost getting slammed, uh, going through the crosswalk.
But don’t various states, don’t they require certain tests after a certain age? Or, or no.
Michael: I mean, there may be very few of them that do that. I thought New York did. New York would probably be one of the ones I would guess would have that. Um, and I’m sure some states do, but you know, just like with safety inspections on cars every year, you’d think the state would have an interest in making sure that the vehicles operated on state roads meet safety requirements.
Yet only 15 states actually have annual safety inspections. Um, it’s similar here. There may be a couple of oddball states that have some type of senior, you know, cognitive evaluation that they provide. But it might be a better system if states had a system where these types of things can be reported.
You know, a caregiver, a child, a friend, another relative can report. You know, maybe some people don’t like to be confrontational about this, and maybe they’re not getting anywhere with that person, so you report a person to the state ’cause you’re scared they might be a threat to themselves or others.
And then they’re required to perform an evaluation, or to participate in an evaluation of their driving abilities. I don’t think that’s a bad thing. And, you know, it might be a pain in the ass for some folks. Um, but frankly, if it saves lives, it should be on the table. And it’s an area where states have just long not really taken up their authority to regulate the licenses and the testing that they do to ensure that citizens can responsibly operate vehicles.
Fred: So, when I was in my thirties, I thought that was a great idea. Now that I’m no longer in my thirties, I’m getting a little more circumspect about that.
Anthony: Yeah, that’s what I’m curious about is, is when you read this article, you’re like the advanced driver directive. I don’t know how, how, how much do my kids love me?
Like what do you think at that point?
Fred: Yeah. Well, it’s, you know, it’s a mortality thing. Just leave it at that.
Anthony: Oh, I’m sorry. You, you, your internet had a GM cruise problem there for a second, and I only got mortality.
Fred: Mortality is a daunting concept. Ah, and uh, fair enough. It has a lot of implications.
Anthony: Well, you know what else has a lot of implications?
Your donation to the Center for Auto Safety. That’s right. If you go to autosafety.org, click on the donate button. And I won’t say that again during the rest of this episode, but we will go on to the world-famous Tao of Fred. Fred, you’ve now entered the Tao of Fred. Today, Fred, I think the topic is defects and artificial intelligence.
How could there be a defect in artificial intelligence? I mean, it’s like, that’s artificial, man.
Fred: That’s a great question. That’s a great question. So, um, Tesla has announced that they’re no longer going to have procedural code that controls the, uh, steering of their car. They’re going to go with something called a neural network to control the steering of their car when it’s under automated control.
So it is hard to visualize what this means, so I’ve thought of a visual that might be useful for people. Have you ever seen a murmuration? A murmuration is when you have jillions of starlings, for example, up in the sky, uh, performing basically acrobatic feats in more or less synchrony. So you’ve got thousands of birds up there, and somehow there’s this amorphous blob that’s evolving.
It’s turning. It’s a lot like a neural network, because there’s no one bird that determines what’s going to happen. There’s no single command that causes that big blob of birds to do any particular thing. Yeah. Somehow all of the individual inputs from the birds, based upon their programming in relation to inputs, whether that be a hawk or the topography, who knows what, somehow cause this collection of birds to perform a collective maneuver.
It’s kind of the way neural networks work. You’ve got a lot of really small decisions that you’ve trained, um, inside of a computer somewhere. And so an input will come in, maybe from a picture or a camera or something like that, and it gets processed by all these individual, uh, decision-making neurons, artificial neurons, and somehow you come out with a collective judgment of what this thing was.
It’s kind of like the center of mass of the birds moving in a certain direction, right? They’ve all processed this information, they’ve all looked at the inputs from the other birds, and somehow they make a decision to turn right or left. Uh, it works for them. But if you were a landowner underneath that area and you were concerned about the distribution of bird droppings, you would want to know, well, how are these things going to affect my land?
How are they going to affect my property? Particularly if you had a Piggly Wiggly down there, you’d wanna know how that was going to affect the distribution in the parking lot. Um,
Anthony: bird droppings, now a Piggly Wiggly
Fred: So, you know, it is really hard to figure out what’s going to happen.
In fact, it’s impossible for any human to figure out what’s going to happen with this collection of birds from moment to moment, right? You never know. There are strange, impenetrable processes going on. Neural networks work pretty much the same way. No human being can understand all the intricacies of how these decisions are being made within the neural network.
Uh, and yet somehow the neural network has come up with a definitive command that says, you know, turn the wheels one way or turn the wheels another way. Right? That’s what the steering’s gotta do. So from Tesla’s perspective, uh, and Tesla is the company that’s promoting this, it’s gonna save them a lot of money, because they say they’re going to save 300,000 lines of procedural code.
C++ code. C++ has its own problems, of course. Object-oriented code, as we know, has a lot of, uh, problems with inheritance and understanding, exactly. But that’s another issue completely. Saving 300,000 lines of code is gonna save them a lot of money, but it also means that it’s going to be extremely difficult, if not impossible, for any person to definitively say, this is what caused a collision.
Because the process in there is a black-box process that nobody can really look into to say, you know, here’s exactly what’s going on. It’s like this big blob of birds, right? You know that the big blob is gonna move one way or another. You don’t know exactly how, and you don’t know exactly why.
It just does. Neural networks work the same way. So this may work for them, but they’ve made a judgment within Tesla that they have created enough miles, enough images, and enough correspondence between the images and the steering behavior of these cars that they know what to do. The problem is, whenever they confront a situation for which these systems haven’t been trained, they have no way of knowing what’s going to happen.
And crashes, almost by definition, are unusual circumstances. If anybody can say, well, I’ve got a situation that’s always gonna cause a crash, typically you’ll fix it: you’ll put a curb in, you’ll put a traffic sign in, you’ll do something, because there’s a high probability that something’s going to happen.
But with a neural network, um, unless you’ve got lots and lots of crashes that happen the same way with identical circumstances, you’re not going to be able to train the neural network to respond to these unusual circumstances. So there are two parts to this. One is that the existence of the neural network controlling the steering of the car makes everything really fuzzy.
No human being will ever know exactly what it’s going to do. But the other side is, and Michael, I’d like your observation on this from the legal perspective, boy, it’s gonna be impossible to really look at the responsibility of the car and its contribution to a collision or a crash, because there’s no deterministic way of saying that A caused B, right?
Yeah. You wanna have a logical system that says A caused B, but if A maybe caused B, and we didn’t know what the hell was going on, but B happened, how do you defend that in a courtroom? How do you build a legal case around that? So anyway, it’s hard to talk about neural networks ’cause they’re really fuzzy.
It’s a really strange concept. But think of that big mass of birds floating around, doing something that’s synchronized based upon the inputs from all these individual birds. Uh, it seems organized. Maybe it is, maybe it’s not. But somehow the flock responds, does a certain thing, and I don’t know. There are consequences.
There are consequences from the inputs to the outputs, but it’s really hard to know how A travels to B. Yeah.
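Fred’s bird analogy can be made concrete with a few lines of code. Here is a minimal sketch in Python, with entirely made-up weights and a toy input; it is not Tesla’s actual software, just an illustration of why the middle of a network is opaque: every intermediate number is numerically exact, yet none of the numbers means anything a human can inspect.

```python
# A minimal two-layer neural network forward pass (illustrative only).
# The weights below are arbitrary stand-ins, not trained values.

def forward(x, w1, w2):
    # Hidden layer: each artificial "neuron" is a weighted sum of the
    # inputs, passed through a ReLU (negative sums are clipped to zero).
    hidden = [max(0.0, sum(wi * xi for wi, xi in zip(row, x))) for row in w1]
    # Output layer: a single steering-like value computed from the
    # hidden activations. This is the "definitive command" Fred mentions.
    out = sum(wi * hi for wi, hi in zip(w2, hidden))
    return hidden, out

# Toy "camera" input and made-up weights.
x = [0.2, -0.5, 0.9]
w1 = [[0.4, -0.1, 0.7], [-0.3, 0.8, 0.2]]
w2 = [0.6, -0.9]

hidden, steer = forward(x, w1, w2)
# The hidden activations are exact numbers, but nothing about them
# says WHY the command came out the way it did.
print(hidden)
print(steer)
```

The point of the sketch: you can read every intermediate value, but there is no procedural logic to trace, only arithmetic, which is exactly the explainability gap discussed above.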
Michael: There’s a couple of big problems here. The legal one that Fred brings up is a very difficult one to resolve, because essentially you can’t explain in language, I guess, why the AI is making certain decisions, why the neural network has done what it’s done.
And when you put that type of system in ultimate control of the vehicle’s steering, or anything else the vehicle’s doing, you know, that’s going to eliminate things like crash investigation. How are they gonna determine? They might be able to see what happened, but they won’t know why it happened.
And assigning fault or blame in those situations is going to be very difficult. I mean, I think we need to see some type of, you know, it’s similar to with the autonomous vehicles, some type of duty of care there, where if there’s something that goes haywire in the AI that causes a crash, whether you can prove it or not, it’s going to be just assumed that it’s the fault of the manufacturer.
Um, and beyond just cars, I think there’s a lot going on in AI as far as explainability, um, and how you break down what the computer intelligence is doing so that a human intelligence can understand it. Um, and then the second thing that’s really concerning here, from our specific interest in vehicle defects: if you have a neural network that is making the wrong decision continuously, to the point where we’re like, okay, this is clearly a defect in these cars.
You know, we’ve had 10 Teslas hit a tree that’s got purple paint on it, um, in the last month, but we don’t know why. We don’t know why it’s doing it. Well, if you’ve got a car that’s been programmed by humans with software, they can dig into the code and ultimately figure out why that’s happening, change the code, and move on.
With the AI or the neural network, they’re never gonna be able to figure it out. And the only real way to fix that is to try and retrain the neural network to do better in the future. So that leaves everybody on the road in danger of the, you know, initial problem while whatever retraining has to go on. You know, it’s not like a fix, it’s more like a process. You don’t really know the outcome.
You’re training something, hoping that it achieves a better outcome in the future, but you can’t guarantee that like you would if you were simply taking out the bad code and putting in better code to handle those situations. So, you know, AI is opening up a whole lot of different possibilities in the future.
And one of them, you know, in relation to car crashes and defects, is, you know, going to be very confusing. Uh, hopefully a lot of good AI lawyers and good AI defect investigators come out to fix this problem, because humans may not be able to,
Fred: oh, humans will not be able to. And, and there will never be an AI mechanic.
You can never bring your AI defective car into a gas station, say, Hey, fix this algorithm, will you, bud?
Anthony: Well, that’s an excellent point. And what you were just talking about, Michael: so if there’s existing software and there’s changes made to it, there are processes in place where you can see, oh, this is what changed.
This is why it changed, why we did it. Same thing with mechanical systems in cars. Um, hey, we replaced this part. This is why we replaced it. This is what we did. We were hoping to fix something, save money, blah, blah, blah, try something out. With AI, and I can’t stress this enough to every person on planet Earth:
Artificial intelligence does not mean intelligence. It does not mean it’s intelligent like a person. This is an unfortunate misnomer that came about in the late fifties, early sixties. It’s a bad name. It is
Michael: a bad name. And I wouldn’t say AI anymore if that wasn’t what’s commonly used. Right. I would not use the term, because it’s misleading, and it’s so many different things under one umbrella that it doesn’t make sense.
Anthony: Right. And at the start of AI, they knew it was a mistake. Oh no, this is a mistake. ’Cause they’re still like, well, I mean, even to this day, we’re kinda like, well, what’s really intelligence? We can’t really figure that out. But beyond that, so people, you can’t make this assumption that artificial intelligence, it’s smarter, it’s smarter than me.
It’s on a computer. I don’t know. No, you are vastly more intelligent than this. But what we’re talking about with neural networks is exactly like what Fred was talking about, where there’s this mysterious part in the middle that you’ll never know what happened. It used to be these very simple one-layer systems, where you’d say, hey, here’s a picture of a cat, and then it goes through this neural network, and on the other side it says, yep, that’s a cat.
Oh, that’s not a cat. And now they have deep neural networks, where it’s multiple layers of these things, multiple instances in the middle, multiple places of, I don’t know what’s happening. So ninety-nine times out of a hundred it will say, yes, cat, but we don’t know what’s gonna happen that hundredth time.
And you’ll never know what’s gonna happen that random time, because you don’t know how it works. And like,
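Anthony’s one-layer-versus-deep distinction can be sketched the same way. This is a hypothetical toy in Python, not any real classifier: the weights, "images," and the cat/not-cat threshold are all made up. Stacking a second hidden layer just chains more uninterpretable intermediate values between the input and the yes/no answer.

```python
# A toy "deep" classifier: several hidden layers stacked, each feeding
# the next. Every layer is simple arithmetic, yet the chain of
# intermediate values is exactly the part nobody can interpret.
# All weights are arbitrary stand-ins, not a trained model.

def relu_layer(x, weights):
    # One hidden layer: weighted sums passed through ReLU.
    return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in weights]

def deep_classify(pixels, layers, out_w):
    acts = pixels
    for w in layers:          # multiple hidden layers = "deep"
        acts = relu_layer(acts, w)
    score = sum(w * a for w, a in zip(out_w, acts))
    return "cat" if score > 0 else "not a cat"

# Two tiny fake "images" that differ in a single pixel.
img_a = [0.9, 0.1, 0.4]
img_b = [0.9, 0.1, 0.5]

layers = [
    [[0.5, -0.2, 0.1], [-0.4, 0.3, 0.9]],   # hidden layer 1
    [[0.7, -0.6], [0.2, 0.5]],              # hidden layer 2
]
out_w = [1.0, -1.0]

print(deep_classify(img_a, layers, out_w))
print(deep_classify(img_b, layers, out_w))
```

The answer pops out, but nothing in the code explains the decision in human terms; with real networks there are millions of such weights, which is why the hundredth case is unpredictable.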
Michael: Do you wanna put that in charge of you in a vehicle? Like, at some point your number comes up, right?
Anthony: Right. And there’s, there’s definitely huge benefits to these types of systems as an add-on.
But, uh, it’s so obfuscated at this point to say, yes, this is a safe system, right? Yes. You can trust your life
Michael: with this. Yeah. It’s a trust exercise.
Fred: I’m gonna disagree with you a little bit, Anthony, because you said, sure, you don’t know what’s going on. The fact is you cannot know what’s going on.
Sure. It’s not a question of your will. It’s just not penetrable by human beings,
Anthony: right? Yeah. And this is something that’s been known forever. It’s not like this is a new problem. Um, so, hey, have fun playing with ChatGPT. Have fun playing with these image-processing things, but realize at the end of the day, these are just kind of toys.
They may be helpful toys, but they’re not this magic solution, um, that’s gonna make the world better. Simply put, anything that comes outta Elon Musk’s mouth, realize, just run the other direction. Okay? I think that’s fair to say. If he says this, you go somewhere else. You just run. Okay. Especially when it involves automobiles.
Uh, we’ve already covered a, uh, a recall for Teslas and how that upset everybody. So let’s cover some other recalls. How’s that sound? That sounds great. Oh, look at that. What do we got here? Tesla rear camera issue. Um, yeah, I mean, this is double points. This is, uh, almost 200,000 vehicles.
These are the 2023 Model S’s. Uh, actually, no, it includes Model S, X, and Y. Huh, what does that spell? Uh, vehicles equipped with self-driving computer 4.0 and running a software release version blah, blah, blah. Um, basically, the, uh, rear-view camera will fail on you, causing increased problems of accidents, injuries, maiming, uh, support of apartheid South Africa.
Okay. That last part I just made up, but, um, they’ve already released an over-the-air update, I think, to fix this. But hey, just because you didn’t go into a dealership doesn’t mean it’s not a recall, right, Michael?
Michael: Yes. And and this one’s interesting because this is one of those recalls where, you know, they built the vehicle and sold the vehicle and everything was fine, and then all of a sudden at a certain point when they released a certain software update, your, your rear view camera, um, starts failing.
So this is kind of an introduced defect, one of the new-style defects that we don’t see in traditional cars, ’cause it wasn’t there when the car came out of the factory, and it wouldn’t have been there had there not been an update to the software that screwed something up. So, um, it’s a whole new world for defects with these types of systems in play.
Anthony: Okay. Uh, that’s the only recall we have, but then we have a bunch of investigations, opened and closed. Here’s one: uh, NHTSA closes a probe into Dodge Ram rotary gear shifters without seeking a recall. So it’s not always a recall. Uh, there was a seven-year investigation into complaints that Dodge and Ram vehicles
can roll away after being shifted into park. What happened there?
Michael: There was, you know, a lot of confusion around how to operate the, basically a dial that was being used as a shifter. Um, you know, this was not just in the Dodge and Ram vehicles, it was also in the Jeep Grand Cherokees, and it resulted in the death of a well-known actor from Star Trek, Anton Yelchin.
Um, he thought he was parked, I believe, in a driveway, and, um, got out of his car to do something, and the car moved and pinned him up against a gate or something and killed him. Um, and the investigation was opened, um, into all of those shifters not too long after that, in 2016. What ultimately happened was Chrysler put out a customer satisfaction campaign that, um,
updated the vehicle software so that the vehicles would automatically shift into park if the driver’s door is open. Well, you know, we’re glad they did that. But we don’t understand why they didn’t just do a recall to ensure that more consumers knew about that fix and got it installed in their vehicles.
And why NHTSA sat there for seven years and allowed this to happen, um, instead of forcing them to do a recall, is also a concern of ours. So, you know, I don’t think that NHTSA and Chrysler, now Stellantis, did a very good job there. Hmm.
Anthony: Uh, a more serious one we have is, uh, an investigation resuming for Evenflo.
Uh, this is, uh, car seats that can get detached in a crash. Uh, and these are child car seats. So this is listed as an open audit query, right? And it’s to see if these car seats comply with FMVSS number 213.
Michael: Yeah, so they’re basically conducting an audit to make sure that the, uh, car seats comply with the Federal Motor Vehicle Safety Standard.
It appears that there’s a problem. I think they’ve had about six reports of, essentially, you’re in a crash and the base of the seat detaches from the actual seat itself. And so, you know, the child is completely unrestrained at that point and can, you know, bounce around the car, um, and be injured, um, or worse.
So they’re looking into the shell, which is that big, uh, plastic base that comes with a lot of modern car seats, that allows parents to simply snap the kid in and out, um, of the vehicle without having to, you know, pull the car seat outta the car and reinstall it and do all of that. So, um, you know, there’s about 400,000 of these, I think 350,000 of these car seats out there.
So they’re a fairly popular model. And if you have base failures like this, that’s incredibly serious in the, in the land of car seats. So, um, that’s an interesting one that needs to be looked at.
Anthony: Hmm. Uh, the last one I’m gonna cover is, uh, a recall where Toyota has issued an immediate do-not-drive advisory for 2003-2004 Corolla and Corolla Matrix and 2004-2005 RAV4 vehicles due to Takata airbags.
Now, this is surprising to me because we’ve known about the Takata problem for a long time. Why are they just discovering this now, putting this do not drive advisory for these really old cars at this point,
Michael: 20-year-old cars. So they’ve known about the problem forever, obviously, and what they’re doing is similar to what Honda did, I believe it was last year, where they have gotten to the point where these airbags are simply dangerous.
I don’t know if they’re as dangerous as the ones on the Honda vehicles that had the same thing happen, but they’ve basically reached a point where they’re saying you should not drive these cars if you haven’t gotten this recall repair before. When NHTSA announced the Honda, um, version of this last year, they said that there was, you know, a risk as high as a 50% chance, in a collision or otherwise, that you could have this condition occur, which is incredible, a very, very high rate of failure.
Um, and it shows you just how, you know, bad these inflators can get over time and with enough exposure to humidity. So basically this is Toyota saying, okay, we’ve got this group of airbag inflators that are enormously susceptible to rupture right now and can injure and kill you if you get in your car and drive it.
We need you to get out of it, bring it to the dealer somehow without driving it, um, and get this fixed right now. Um, so it’s good they’re doing that. Um, I think that there are probably some other manufacturers that should start doing this to shore up some of their Takata repairs. There’s, um, you know, still a long way to go, uh, in, in fixing the Takata problem that’s been around now for at least a decade.
Anthony: How does that happen? So let’s pretend I have a one of these cars and it says, do not drive. Do I have to get it towed to a dealership? And am I paying for that?
Michael: I don’t know. I imagine that would be different in every situation, depending on what the manufacturer sets out as part of its process.
You know, we know that Honda, for instance, was being very aggressive, you know, going and knocking on doors. There are mobile repair units that are available with some manufacturers who can do this repair in your front yard. So there’s probably a lot of options there up to and beyond towing that owners could use to make sure that they’re safe at this point.
Um, you know, when you get to that point in the recall where 90 percent plus of the owners affected have gotten the repairs, it’s really, really hard to get those last, you know, thousand, two thousand vehicles that are still out there. Um, because there are many owners who simply don’t care and are happy to be ignorant about the matter.
And there are also many owners who are, you know, out of touch, or incapable of getting recall repairs done, or even, you know, incapable of understanding recall repairs and how that process works, and who just avoid it completely. So it’s a tough situation for, um, Toyota to be in. But, you know, they put these inflators in their cars in the first place, and they’re hopefully going to get them off the road and get these airbag inflators repaired before anyone else is injured or killed.
Fred: Yeah. Lemme jump in and just say the root cause of this problem is the absence of engineering requirements. We talked about that earlier, but the absence of engineering requirements for airbag inflators that are going to be put into vehicles. There’s a qualification process that’s well known that can make them safer, much more reliable, and in particular reliable at the end of their life, not just at the beginning of life. Uh, requirements are the friend of the designers and also the friend of the public, ’cause they’re intended to protect public safety. End of rant.
Anthony: Uh, yeah, we’ve been beating that drum I think since the first episode.
Uh, and for listeners at home, uh, we apologize for any sort of connectivity issues there. I do not think Fred was speed talking there for a second. Your internet kept going in and out and then it would collect all of your words and spit them back at us at once. I think it was understandable. Uh, let’s end with, uh, speaking of Fred, we’re gonna end with, uh, some listener feedback.
This was in, uh, regards to last week’s episode, when, uh, Fred dissected the article in the Guardian about, you know, why I need an engine that can go a hundred and a billion miles per hour. A hundred and a billion miles per hour. Look, people, I got the flu, back off. Anyway, listener feedback: just listening to Fred’s response to Guardian readers.
I read that piece and was chuckling. He did a great job of explaining the illogic of most motorheads. So look at that. We got fans. Fred, you got a fan.
Fred: That’s great. I, I hope it’s not my sister, but you know, it was not your sister. I’ll take it from whoever, whoever it came from. Thank you.
Anthony: Excellent. Hey, and with that, uh, thanks listeners.
Uh, we’ll be back again next week with more, uh, stories from, it came from a failed rear view camera. Rear view camera, rear view camera, rear view camera.
Michael: Take your medicines.
Anthony: Oh, man. Just, it’s not, I hate January. I hate January. All right. Goodbye,
Michael: listeners. Thank you. Goodbye folks. February’s gonna be here soon.
For more information, visit www.autosafety.org.