Missy Cummings: Insights into Self-Driving Tech and Safety
This week we are joined by Missy Cummings, Director of George Mason University’s Autonomy and Robotics Center. Missy walks us through the safety concerns with self-driving cars based on her research, discusses what ‘safe enough’ means, the laughable state of remote operations, and the lack of functional safety testing.
This episode underscores the urgent need for both state and federal regulations to improve safety standards and accountability for autonomous vehicles.
Links
- https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10778107
- https://www.washingtonpost.com/technology/2024/12/30/waymo-pedestrians-robotaxi-crosswalks/
- https://www.autosafety.org/support-us/
- https://apnews.com/article/autonomous-vehicles-safety-guidelines-voluntary-nhtsa-8fec0adfb3580eddbef92f4b32a0347d
Subscribe using your favorite podcast service.
Transcript
note: this is a machine-generated transcript and may not be completely accurate. It is provided for convenience and should not be used for attribution.
[00:00:00] Introduction and Welcome Back
[00:00:00] Anthony: You’re listening to There Auto Be A Law, the Center for Auto Safety Podcast, with executive director Michael Brooks, chief engineer Fred Perkins, and hosted by me, Anthony Cimino. For over 50 years the Center for Auto Safety has worked to make cars safer.
[00:00:28] Anthony: Hey listeners, welcome back. We’re going to try to remember how to do this show, since we took the last two weeks off. So we apologize for any mistakes we make, non-obvious or more obvious or something. I don’t know.
[00:00:40] Special Guest: Missy Cummings
[00:00:40] Anthony: Today we have a special guest. We have Missy Cummings. She is the director of George Mason University’s Autonomy and Robotics Center.
Missy Cummings: Hey, thanks for having me.
Anthony: We’ve been dying to have you on for quite some time. There’s that presentation video you did, I think it was with IEEE, which Fred constantly recommends. I’m actually [00:01:00] convinced his family, during the holidays, instead of watching It’s a Wonderful Life, they sit around and watch that presentation.
[00:01:06] Anthony: He’s not...
[00:01:07] Fred: Yeah, they especially like the Irish part. That was good. That was a nice touch.
[00:01:10] Anthony: Good. Yeah. Every time a Waymo crashes, an angel gets its wings or something, I don’t know.
[00:01:16] Discussion on Self-Driving Car Data Analysis
[00:01:16] Anthony: But let’s start off with this IEEE article, “Identifying Research Gaps Through Self-Driving Car Data Analysis.”
And this just came out this past October, is that right?
[00:01:27] Missy Cummings: That’s correct.
[00:01:28] Anthony: Okay. So I’m going to jump in, because there are a couple of things here that surprised me. One was that the Uber and Lyft drivers that Waymo compares itself against have accidents at a rate six times higher than normal drivers.
[00:01:43] Missy Cummings: I’m so glad you picked up on that, because I’ve been doing this research for a long time and hardly anybody pauses on it. And I’m thinking, whoa, we really need to think about what’s happening in ride-share as a first derivative of this [00:02:00] article. I actually have done some legal work in Uber and Lyft cases, and there is a serious problem. And, not surprisingly, it shows up if you’re forced to use your phone while you’re doing part of your daily trip.
That’s where we think this is coming from. For Uber and Lyft drivers the procedures aren’t exactly the same, but in general they all have to accept rides within a certain period of time or the ride goes away. So they’re just habituated to constantly be scanning that phone, and I think that is a huge part of why we’re seeing this increased accident risk.
[00:02:34] Anthony: Wow, that was surprising to me. There’s an intersection right outside my window; I live in northern Manhattan. And cab drivers regularly crash into the no-turning sign, so they made the sign massive. Once a month they would knock it down, and now I’ve never seen a street sign this big; it’s the width of an SUV.
So that was fascinating, because we always see that Waymo, with their recent PR push I [00:03:00] want to say, keeps claiming that they’re safer than human drivers. And they have that Swiss reinsurance company say, yes, Waymo’s safer. But are they comparing themselves to these essentially bad drivers?
[00:03:14] Waymo’s Safety Claims and Financial Pressures
[00:03:14] Missy Cummings: First of all, Waymo loves to sidestep the statistical reality, which is that to actually be able to compare themselves to human drivers, who generate a trillion miles or more every year, they would have to have driven 250 million or more miles to be on a comparable basis.
So it’s going to be years before we can actually make any definitive comparisons. But what’s happening is that Waymo, the company, is under intense pressure from Alphabet, the parent company, to make money. They are hemorrhaging money, and Alphabet, they have a lot of money, but they can’t [00:04:00] keep hemorrhaging billions of dollars the way they are with Waymo.
So whenever you see any of these press releases come out, you have to appreciate that they’re not really for the public. They’re really to convince the higher-ups at Alphabet that Waymo should be left alone and keep going, because they’re going to make money. And that’s why they don’t really care about what I say or what any other researcher says; it’s undeniable that they can’t yet claim to be safer than human drivers, but they don’t care, because that’s not who they’re trying to message.
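Editor’s note: to put rough numbers on the statistical point above, here is a back-of-envelope sketch. The human fatality rate and confidence level are assumed values for illustration, and the zero-failure bound is the standard binomial argument; this is our sketch, not a calculation from the paper under discussion.

```python
import math

# Illustrative assumptions (not from the paper): how many fatality-free
# miles would an AV fleet need before it could claim, with confidence C,
# a fatality rate no worse than human drivers'?
human_fatality_rate = 1.1e-8  # assumed: ~1.1 deaths per 100M vehicle miles
confidence = 0.95             # assumed confidence level

# Zero-failure binomial bound: (1 - p)^n <= 1 - C  =>  n >= ln(1 - C) / ln(1 - p)
miles_needed = math.log(1 - confidence) / math.log(1 - human_fatality_rate)
print(f"~{miles_needed / 1e6:.0f} million fatality-free miles")  # ~272 million
```

Under those assumptions the answer lands in the same ballpark as the 250 million miles cited above, which is why year-scale comparisons are premature.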
[00:04:37] Anthony: That’s amazing. The last thing I want to hit on, before Michael or Fred jump in, is that this paper talks about how
[00:04:44] Regulation and Safety Standards for Autonomous Vehicles
[00:04:44] Anthony: Autonomous vehicles can get approval to do self-driving in California, and California says the cars have to be, quote unquote, safe enough. I don’t think we’ve ever seen their definition of what safe enough is.
Have we just missed it, or what is it? [00:05:00]
[00:05:01] Missy Cummings: Yeah, in the interest of conflict-of-interest disclosure, I do occasionally help the state of California in a paid capacity. But I don’t think anyone knows what safe enough is. I would tell you that definition is still a problem for something like the military, the Department of Defense, when it comes to autonomous systems.
We don’t really know what safe enough is, because these systems are relatively new. So I think ‘what is safe enough’ is a definition that is dynamic. As we start to see new failure modes emerge, we start to realize, oh, maybe what isn’t safe is in this new area.
[00:05:43] Remote Operations and Safety Concerns
[00:05:43] Missy Cummings: And one example, and it’s a big issue that’s starting to emerge right now, is the role of remote operations.
So up to now, the remote operation [00:06:00] of self-driving cars has been completely unregulated. These are people in remote centers who are helping the cars drive. The cars are not autonomous; they are as far away from being truly autonomous as you could imagine, because there are teams of humans in remote places who are interpreting signs, giving the cars directions, and helping the cars figure out what to do when there’s any kind of problem.
And what we saw in the Cruise crash, where the Cruise vehicle hit a pedestrian and then dragged her under the car for about 20 feet, is that the remote operations centers, because they’re completely unregulated, don’t have the right technology to stop a bad situation once it has started. The communications are spotty, with significant delays.
And then just a couple of days ago, I’m not sure if you guys saw, but there was a Waymo circling an airport parking lot that wouldn’t let the passenger out. And you could hear the remote operator in the video say, I can’t do anything about this; I can’t stop this car. In [00:07:00] robotics, the big red button to stop any robot when it’s going rogue is a standard.
You should have a big red button somewhere to stop the vehicle that you’re supervising. And one of the things that we’re seeing, because this is completely unregulated, is that no one’s doing anything about it and no one’s being held accountable. It’s just a matter of time until somebody dies. In fact, the pedestrian who was dragged under the car almost died because of the lack of a big red button.
So now we know something about what is safe enough: it’s not safe to not have a big red button. But we didn’t know about this until these cases started to emerge. And this is a good example of why you can’t give ‘safe enough’ a hard definition right at this moment, because the industry is still in its infancy. But we’re definitely gathering data with every day that goes by to tell us what isn’t safe.
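Editor’s note: as a concrete illustration of the “big red button” pattern described above, here is a minimal, hypothetical sketch. The class, names, and timeout are our assumptions, not any company’s actual design; the key property is that the vehicle fails safe on its own when the remote link goes silent, rather than waiting for a stop command that may never arrive.

```python
import time

class RemoteEStop:
    """Hypothetical sketch of a remote e-stop guarded by a link watchdog."""

    HEARTBEAT_TIMEOUT_S = 0.5  # assumed bound on tolerable link silence

    def __init__(self) -> None:
        self.last_heartbeat = time.monotonic()
        self.stopped = False

    def heartbeat(self) -> None:
        # The remote operations center pings periodically to prove the link is up.
        self.last_heartbeat = time.monotonic()

    def press(self) -> None:
        # The explicit big red button: latches the vehicle into a stop.
        self.stopped = True

    def vehicle_may_move(self) -> bool:
        # The drive loop checks this every tick; a dead link counts as a stop.
        link_dead = time.monotonic() - self.last_heartbeat > self.HEARTBEAT_TIMEOUT_S
        return not (self.stopped or link_dead)
```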
[00:07:59] Anthony: Fred, I know you’ve [00:08:00] got a billion questions.
[00:08:02] Fred: Oh, yeah, I’m so happy that you’ve come on. What a resource. It’s great. It’s just great.
[00:08:08] NHTSA’s Role and Testing Methods
[00:08:08] Fred: One thing I wanted to ask you is that NHTSA has recently released a proposal for the ADS-Equipped Vehicle Safety, Transparency, and Evaluation Program, and I believe you’re familiar with that.
And one of the things that they say in there is, quote, currently available testing and evaluation methods cannot conclusively determine an ADS’s safety, close quote. What are your thoughts on that statement?
[00:08:39] Missy Cummings: I have mixed feelings about that statement. I agree that NHTSA’s current testing and evaluation methods are not effective and not enough to gather the correct kinds of data that we need.
But I disagree that there couldn’t be such methods. NHTSA has a research arm; in fact, they have two different research departments, one [00:09:00] on human behavior and one on technology. So all of these years, since around 2010, when it was clear that self-driving cars were going to be a thing, NHTSA could have been doing research this entire time.
For the last 15 years, NHTSA should have been doing research on the right approaches for test and evaluation, and they haven’t. So if the Trump administration goes in and starts making cuts, this is one area where, look, the research arm of NHTSA has not served us well, because we should be in a place right now of knowing. We had 15 years to know that this was coming.
I myself have been working on ways to think about testing; I think it’s going to be a combination of both testing and statistical models. But absolutely, take this remote [00:10:00] operations problem that we were just discussing. We have solid research evidence about remote operations from more than 30 years of drone operations in the military.
So we have very good evidence about what works and what doesn’t in remote operations, and how you can test for it. Yet we have no standards, no regulation, nothing. So I think it’s inexcusable that we find ourselves in the year 2025, with all of this knowledge and evidence from remote operation of drones and from truck dispatch, and we have nothing on the books for remote operations of self-driving cars.
[00:10:36] Fred: That’s a great point. Going back to your paper, though: you talk about a specific parameter, mean average precision, for perception associated with pedestrians, automobiles, vulnerable road users, emergency response equipment, all those kinds of things. And that seems like a very specific parameter
that could and should be part of [00:11:00] the qualification process, or the acceptance process, for a neural-network-driven vehicle on public highways. For example, with single-shot devices in the military or commercial environment, you can establish test standards; let’s say you require three-nines reliability with 99 percent confidence.
You can set up those experiments and demonstrate that. So isn’t it time that NHTSA just go ahead and say, look, we need to make these things at least as safe as single-shot devices used in life-critical military situations? And here’s an example of exactly where that could be done and how it could be done.
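Editor’s note: for readers who want the arithmetic behind Fred’s three-nines example, this is the classic zero-failure demonstration-test calculation, a textbook sketch rather than anything NHTSA has adopted.

```python
import math

# How many consecutive successful trials demonstrate reliability R
# with confidence C, allowing zero failures?
R = 0.999  # "three nines" reliability
C = 0.99   # 99 percent confidence

# If true reliability were below R, the chance of n straight successes
# would be below R^n. Requiring R^n <= 1 - C caps that chance at 1%:
n = math.ceil(math.log(1 - C) / math.log(R))
print(n)  # 4603 consecutive successes required
```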
[00:11:51] Missy Cummings: I agree. I have written several papers arguing that for self-driving cars, at a bare minimum, we could have a two-stage process, where [00:12:00] first there is a static test: you show the algorithm that’s on a self-driving car a series of images, and if it classifies them correctly, and by correctly I mean with a relatively high probability, then it passes
the static test and moves on to the more dynamic test, which would be, how do the cars see objects when they’re moving? I have been shot down at so many levels, because what I’m proposing is effectively a form of type certification, which models more of what the FAA does for aircraft. In Europe they do something
more akin to type certification for vehicles, which means the vehicles have to be certified before they’re sold, instead of the recall process, which is how we regulate vehicles now. We basically let companies put anything they want on the road, and then if they start killing people, we start stepping in. [00:13:00]
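Editor’s note: a minimal sketch of the “static” first stage described above might look like the following. The function names, threshold, and pass criterion are our assumptions for illustration, not a published protocol.

```python
from typing import Callable, List, Tuple

def static_perception_test(
    classify: Callable[[bytes], str],         # candidate perception model
    labeled_images: List[Tuple[bytes, str]],  # (image bytes, ground-truth label)
    pass_threshold: float = 0.95,             # assumed regulator-set bar
) -> bool:
    """Stage one of a hypothetical two-stage certification: show the model a
    fixed set of labeled images; passing makes it eligible for the dynamic
    (moving-object) stage."""
    correct = sum(1 for img, truth in labeled_images if classify(img) == truth)
    accuracy = correct / len(labeled_images)
    print(f"static-stage accuracy: {accuracy:.3f}")
    return accuracy >= pass_threshold
```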
The idea of self-certification for any safety-critical technology with artificial intelligence is simply not going to be sufficient moving forward. But this requires a set of skills and a commitment to doing the right kind of research and the right kind of development, and that’s been a big problem.
I worked at NHTSA. I’ve worked for the Department of Defense. Both agencies have admitted that we just don’t have the right skill sets we need to move forward. And we have got to figure out a way to get people with these skill sets into the government, so that the government can have people on its side who can call BS when the companies are trying to sell something dangerous.
[00:13:47] Fred: No, that’s a great point. We’ve advocated for the same thing, by the way, in our dealings with NHTSA. But before we pass on to Michael, there’s one other thing I wanted to discuss with you.
[00:13:58] Pedestrian Safety and AV Companies’ Responsibilities
[00:13:58] Fred: Right now, what we’re seeing with [00:14:00] Waymo and what we saw with Cruise is that the AV companies are ignoring the absolute requirement that they stop for pedestrians in a crosswalk, right?
What we’ve seen is numerous examples where the AV companies are hedging their bets and saying, we need to determine whether or not the person is going to complete the crosswalk. And what we’ve found is that, in fact, they are accelerating towards pedestrians in the crosswalk, which was one of the
underlying problems associated with Cruise. But we’re also seeing that with Waymo, and we’re getting a lot of anecdotal reports from people saying that kids in crosswalks are not safe, because these cars are trying to shave one or two seconds off their travel time, apparently to better compete with Uber and Lyft.
That goes back to what you were saying about their need to preserve capital, the capital preservation requirements that are really degrading safety. Aren’t there simple standards like this that [00:15:00] should be imposed by NHTSA, to just say you have an absolute requirement to stop for a crosswalk?
For example, there are certain traffic laws where you have an absolute requirement to conform, and you’re not allowed to interpret them in ways that are beneficial to the company.
[00:15:17] Missy Cummings: Yeah. So this is a complicated area, and I think we need to understand that, for what you’re talking about,
NHTSA’s authority really only applies to regulating the technology. The states then regulate whether or not the cars comply with rules and regulations. So it’s not as simple as saying NHTSA needs to step in. I think, at a first pass, states need to pass laws so that they can fine these vehicles, so that they can cite them. For the longest time,
[00:16:00] the state of California was not allowed to cite any rule infraction by these vehicles. They’re changing that now, but for years the cars would break the laws and nothing could be done. I’m a big fan of incentives, including negative incentives, right? Because I do human behavior and technology, that’s what my research is, and if the companies are given a free pass, they’re not going to change.
Now, what states should do, and Texas, listen up, because you’re the next big state that’s going to have all these problems: you need to set up cameras at these intersections, because the cars nearly run over pedestrians predictably at the same intersections. Set up a camera, get the license plate, give a $5,000 fine every time a pedestrian is almost run over.
You have to make sure your laws reflect that. Then, as those fines build up, the companies will stop that behavior. [00:17:00] So I think it’s critical that state and local legislatures are allowed to adapt to disincentivize this negative behavior. At the same time, NHTSA needs to start doing more research on the pedestrian detection problem.
I’ve personally taken cars out and seen how well they do at pedestrian detection, and it’s not pretty. And IIHS, the Insurance Institute for Highway Safety, and Consumer Reports, we all find the same thing: the systems are highly variable and very bad at night. It’s just a matter of time until someone is killed. My prediction is that within the year a pedestrian will be struck and killed in a crosswalk, maybe by a Waymo, maybe by somebody else [00:18:00] doing testing, maybe by a Tesla.
And when that happens, there will be a public outcry: we’ve known that this was a problem, we’ve known that this was going to happen, and yet nobody did anything about it.
[00:18:14] Anthony: I’m putting my money on it happening with a Tesla, and on Tesla saying it’s the person’s fault for walking.
What were they doing? They could have taken a cab to cross the street.
[00:18:24] Missy Cummings: Yeah, I think you’re highly likely right. Yes.
[00:18:27] Anthony: I think this kind of goes into what we were talking about with safe enough: the lack of penalties. I had a driver’s license in California, and essentially California at that point was saying, you are safe enough to drive.
And if I wasn’t, they would penalize me immediately. They could potentially arrest me and jail me for some of the behaviors that we see. And with what you’re describing, California is now starting to put that in, but so far it’s like they can cite the companies; they’re not really fining them yet.
They can say, oh, we can give you a citation; you shouldn’t have driven like you were [00:19:00] drunk. What does it take for them to actually treat these cars the same way they would treat a human?
[00:19:05] Missy Cummings: I think it’s coming. Inevitably, what happens in any city or region that is having these issues is that they were convinced by the company that this was going to bring jobs, that it’s innovation, that it’s going to help your economic climate, right?
And then the cities get all excited. Originally that was the case in San Francisco, then Austin. And then the reality hits over time: now there’s congestion. People are dying because ambulances can’t get through the congestion. Kamala Harris, for example; that was a very dangerous situation when her motorcade was blocked by a self-driving vehicle.
And the reality is not what big tech sold them. Big surprise, right? We’re seeing that in spades everywhere. But then cities are caught [00:20:00] flat-footed, because they tried to make proactive legislation to help the companies, and then they find out, oops, now we’ve got to go back and start developing a fine structure. For example, these crosswalks in San Francisco:
there have been many documented cases of near misses with pedestrians in San Francisco. So the city, and I’ve talked with several of their executives, they are scrambling to figure out what to do, and, from a legislative perspective, trying to get the right rules, regulations, and laws, if needed, through. But these things take time, often way more time than it takes to do a new software upgrade for whatever the tech is.
And then the companies say, oh, we fixed it; see, our new software upgrade has fixed it, so you need to give us another chance. And then we’re caught in that same loop. So I suspect, and it’s sad that this is where we are in this country, that once the first pedestrian is killed [00:21:00] in an obvious way, and Tesla is not able to say that person was at fault for walking,
once that happens, then, because the Trump administration is moving in and they’ve said, we’re done with regulation, we’re basically going to rescind all this regulation, it’s going to fall to the states to either pick up the regulation or it will kick over to tort law.
And that is actually where I think you will start to see big movement in the next four years, since from a federal perspective there’s likely not a lot going to happen. When lawsuits start happening with big price tags, what also happens in these lawsuits is discovery. And when discovery happens, the companies have to turn over all kinds of video and data that then get made public, [00:22:00] become part of the public record. And when the American public sees what is happening behind the scenes, for example with Tesla or with Waymo, when these lawsuits start happening and the data is made public, it’s a big thing.
Then you will probably see a shift in attitudes from the American public, in a bad way, and then you’ll start to see a response from the companies.
[00:22:27] Michael: Yeah.
[00:22:27] Legal and Regulatory Challenges
[00:22:27] Michael: We often get the question: what’s stopping driverless car manufacturers from deploying all over America? And you keep hearing and seeing stories that the feds need to create a pathway with regulations, blah, blah, blah.
But the fact is, it’s ultimately, I think, the legal system and the threat of lawsuits that’s preventing these companies from deploying in areas where these vehicles clearly aren’t ready yet.
[00:22:54] Missy Cummings: Yeah, it’s my favorite statement from Elon Musk, that it’s [00:23:00] all the regulators’ fault that his self-driving cars are not on the road.
People just don’t understand: he could have sold his cars as self-driving cars now. There’s nothing holding him back. There is no federal regulation stopping your Tesla from being a self-driving car. What’s stopping your Tesla from being a self-driving car is that it doesn’t work, and it never will.
[00:23:25] Michael: Yeah. So I’ve got a question regarding the data. Looking at the article, you had to collect data from a variety of sources: the California Public Utilities Commission, the California DMV, NHTSA with the standing general order, NHTSA’s Fatality Analysis Reporting System, the FARS data.
You’ve got to go to the FHWA to get miles traveled, and things like that. And we’re seeing now that there is possibly, as we expect, a rescission of the standing general order that’s [00:24:00] going to happen in the Trump administration. How important and how good is the data that you can access now on this subject?
It seems like you’re really having to piece together a lot of different databases to come to conclusions. How much is that threatened by NHTSA potentially taking away the good data they’re getting from the standing general order?
[00:24:24] Missy Cummings: To be fair to NHTSA, I don’t think it’s going to be their choice to take away the standing general order when the Trump administration comes in.
That will be a decision made above them, and it’s already in the Project 2025 playbook. And I think we need to be fair to Donald Trump: he never wanted this. This is co-president Elon Musk’s wish. Co-president Elon Musk wants any regulation involving self-driving cars to go away, but he also is not happy with the FAA, [00:25:00]
because they are putting restrictions in place that affect SpaceX. So when the Trump administration takes over, I expect massive cuts and changes across the whole Department of Transportation, not just NHTSA. The standing general order, it will be very sad to see that go away, for many reasons. We are the only country in the world with this data set.
We are the only country in the world that requires this kind of reporting. And because of the reporting that’s come in, and when I was there my job was to look at all the standing general order accidents, we found, for example, the first evidence of phantom braking in self-driving cars generally, not just in Teslas.
And that has actually led me, as a researcher, to go down that path to find out [00:26:00] what is going on with computer vision that’s contributing so much to these phantom braking events. So that was critical. And there are many more similar emerging lessons learned that NHTSA has access to every day that it gathers data.
So we are finding out exactly what the failure modes are through that data. When that goes away, it’s not that we will lose the data altogether, but, as you saw in my paper, there was a lot of matching of data sets to bring it all together. That will still be possible; it will just be much more difficult.
And California should be lauded in this way: they’re doing a great job by still collecting a lot of data, both from the DMV, where the testing happens, and from the California Public Utilities Commission, which [00:27:00] oversees the commercial operations. So if the standing general order goes away, and we think it’s going to, it will be critical that all states with self-driving car operations pick up the slack.
I think California will continue to do this; I don’t think Texas will. And so it’s very likely that Texas will see some of the first fatalities. They will experience all the problems that San Francisco did, because they will not have learned any lessons. As a former Texan, I can tell you, people are not going to be happy. If you were to roll Teslas out at scale tomorrow as self-driving,
I’ll tell you the first thing that’s going to happen: I’m going to put my daughter through college on all the expert-witness fees from all the deaths that would happen. But Texas, Florida, they’re very pro-Trump, which is to say [00:28:00] anti-regulation.
I think those states are going to find out very quickly; they’re going to have a lesson learned about why you want regulation around safety-critical systems.
[00:28:13] Anthony: Wasn’t it in Texas with the Cruise Origin, their car without the steering wheel? Didn’t they drive into a building in Austin?
Was that Austin? Yeah, I think that’s right.
[00:28:21] Missy Cummings: I think that’s right. I think that’s right.
[00:28:22] The Uncertainty of Computer Vision
[00:28:22] Missy Cummings: And these are exactly the problems I talk about in the paper you’re referring to. I think the big takeaway for everyone should be that we absolutely do not know what we’re doing with computer vision.
I was shocked when I went down this path. I’ve been in robotics my whole professional life as an academic, and I really thought we had a better grip on artificial intelligence and the classification predictions it makes. The more I get into it, the more horrified I am [00:29:00] that the computer science community has made a set of core assumptions about how these technologies work, and no one has ever tested those assumptions.
Indeed, I have a student who’s getting ready to produce some papers that are going to show just how brittle these technologies are. And we, we being academics, there’s a lot of hubris in there. We just felt like we knew what we were doing, and we didn’t feel like we had to do a lot of testing, only to find out that we probably let the horse out of the barn before the horse was really ready, especially for safety-critical systems.
[00:29:41] Safety Concerns in Autonomous Vehicles
[00:29:41] Missy Cummings: I don’t care if the face recognition doesn’t work on your iPhone; that’s not going to kill anybody. But the reality is that computer vision systems, especially at high speeds, highway speeds for example, are simply having significant trouble working [00:30:00] at the time scales they need to work at reliably, and that will kill someone.
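Editor’s note: some rough arithmetic, with assumed speeds and latencies of our choosing, shows why time scales dominate at highway speed.

```python
# Distance a vehicle travels while its perception stack is still deciding,
# at an assumed highway speed and a few assumed processing latencies.
speed_mph = 70
speed_mps = speed_mph * 0.44704  # ~31.3 m/s

for latency_ms in (30, 100, 500):
    blind_m = speed_mps * latency_ms / 1000.0
    print(f"{latency_ms:>4} ms of latency -> {blind_m:5.1f} m travelled")
# At 500 ms, that's roughly 15.6 m, several car lengths, before any
# reaction has even begun.
```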
[00:30:07] Michael: Yeah, this is something we’ve seen in non-AVs as well. Automatic emergency braking, for instance, tends to rely on cameras and maybe ultrasonic sensors and some other types of sensors. And we’ve continued to see, in your paper you divide out the categories of the primary causes of the self-driving crashes, and the number one,
by far, is the AVs being struck from behind, which suggests that they’re stopping too fast, probably stopping for things that aren’t even there. And there’s a constant refrain from the industry that driverless cars don’t drive drunk or distracted.
[00:30:50] The Reality of Self-Driving Car Crashes
[00:30:50] Michael: And then we’re constantly saying that Waymo looks like it’s driving drunk, it’s on the wrong side of the road. But now, even beyond that, we don’t even have to get on the ‘they don’t drive drunk’ [00:31:00] path anymore, because with computer vision the autonomous vehicles are truly hallucinating.
They’re seeing things that aren’t there. So is that worse than drunk?
[00:31:15] Missy Cummings: Yeah. I don’t ever want to make light of drunk driving, because it is still an increasing problem, sadly. But the idea that a car will just make up an object that is not there, and that this is somehow not a bad thing, it just amazes me that more people are not concerned about it.
And we have all the evidence that we need from the Bay Bridge tunnel crash of a Tesla that had a phantom braking event, with eight other cars piling up behind it. Again, when people die because of this, I do wonder about the engineers at Tesla and the people who work for [00:32:00] the government.
We’ve known that these problems exist for a long time now, and yet no one is doing anything about it.
[00:32:07] The British Approach to Self-Driving Cars
[00:32:07] Missy Cummings: I’ll tell you, the other kind of crazy new entrant in the ‘what were they thinking’ category is the Brits. The Brits just woke up the other day and decided they must have self-driving cars, because they’re ready to deploy.
And I’m just amazed. Usually the Brits are pretty circumspect and don’t go head-first into a bad idea. And by the way, I want to be clear: self-driving cars at slow speeds in limited areas are, I think, a great idea, as long as you have really good remote-operation support with a big red button to stop these things when they’re going crazy.
But at highway speeds, we are nowhere near solving this problem, and people are going to die. And it’s interesting to me: we are now moving into a realm of chaos. Chaos is really going [00:33:00] to drive pretty much everything that happens to us for the next four years. And what are the American public, or the British, going to do when people start dying at a high rate from bad technology? It’s only going to add to the chaos.
[00:33:17] The Business Case for Autonomous Vehicles
[00:33:17] Fred: I think the companies developing these will not allow people to recognize that it’s happening at a high rate. They will say, yeah, it happened once or twice, but that’s just the price of doing business, because for so long there’s been this drumbeat about the inherent safety of removing the human driver, and these stupendous and yet unsupportable claims of a 94 percent safety improvement.
We’ve all heard that. But I think the public can be easily misled by the public relations departments at the OEMs saying, yeah, only one or two people have died, but hey, that’s not so bad. Tesla has already gone down this road. There are lots and lots of people who’ve been killed in [00:34:00] Teslas as a consequence of using Full Self-Driving or Autopilot.
But it flies under the radar. Tesla just says, that’s the price of doing business, we’re constantly improving, why is this a problem? And people buy it. I don’t understand why people buy it, but I think it goes back to this whole idea that it’s new, we need to do this, it’s a critical need the industry has,
we need to do this for safety, and a few bumps in the road on the way to this beautiful, safe nirvana are just the price of doing business. Anthony has talked a lot about that, and in fact about how the business case for these things is really unsupportable versus Lyft and Uber and other human-driven vehicles,
even with the relatively high collision rate that you’ve documented. I don’t know if you want to speculate on that, but I’ve become more and more of a Marxist as time goes on, [00:35:00] thinking of how the needs of capital are driving people towards unsafe conclusions associated with this new technology.
[00:35:10] Anthony: Aren’t you supposed to become more conservative the older you get? Isn’t that the thing?
[00:35:14] Fred: Oh yeah, I’m a conservative Marxist now.
[00:35:16] Anthony: Ah, okay, good, that explains the vest you’re wearing. LLV and their big Trump supporters. Oh no. So I think we’ve seen examples where self-driving vehicles, autonomous vehicles, make sense in that low-speed, restricted, highly defined area, like an airport shuttle system, where basically you’re geofencing the system.
But as you point out in your paper, a lot of crashes, the back-ups, happen because these systems are not programmed to deal with normal human behavior. Trucks that signal, hey, I’m backing up, so every car behind goes, oh yeah, I’ve got to give this guy some room. And these are just very basic things that a 16-year-old on a learner’s permit handles: oh, I know what to do. Or if they [00:36:00] don’t know what to do, there’s an adult sitting next to them who says, hey, put it in reverse, we’re going to go backwards a little bit. Whereas these things can’t do it. I had the same feeling you did. I studied AI back in the 90s, and when I started this podcast with these guys, I was like, self-driving cars?
Come on, we’ve got this figured out. And then you realize computer vision is no better than what the Navy was doing with sonar in the 80s. And you’re like, what? How has this not gotten better? And then I remember, oh yeah, this was a really hard problem. A smaller example we can see of this stuff not working is the problem Tesla is having with its Smart Summon,
where I guess NHTSA is now opening an investigation into this. And for listeners, Smart Summon is a dumb idea for lazy people who are too busy, or who lack the ability, to walk to their car in a parking lot. So here’s what you do: you’re a tech bro, and you’re hanging out with your other tech bro on a sidewalk, and you’re like, look, I’m going to bring my Tesla over from 20 feet away; it’s going to drive over from the parking lot over here.
And your tech bro friend will be like, why don’t we walk? And you’re like, no, we don’t [00:37:00] do that. And your Tesla backs out and smacks into a couple of cars, or runs over something you can’t see, because line of sight is a problem. That’s why I think, if they can’t even get something like that right,
I’m with Fred, not a Marxist, but I don’t understand: how is a feature like that allowed to get out on the road, where it can prove over and over again that it doesn’t work more often than it does?
[00:37:25] Missy Cummings: For Smart Summon specifically, it’s allowed because it’s not a driving feature. And if you’ll notice in all the legal documentation, Tesla says you should only use it on private roads.
So they will say, the lawyers are going to say this was never meant for public roads. And so if people use this on public roads, then it’s not our fault. It’s that dumb driver’s fault.
[00:37:49] The Legal and Regulatory Landscape
[00:37:49] Missy Cummings: Indeed, I increasingly do more and more legal work. I actually help Tesla drivers who are being prosecuted for [00:38:00] whatever problems have happened.
Tesla is very happy to turn over data, and we saw this with the recent bombing at the Trump hotel in Las Vegas. The instant a crash happens, Tesla turns over all of your private data to the police almost immediately. They are extremely excited to work with the police, to get them all that data to show them what a dumb driver you are so that you get prosecuted.
And I think that’s one of those hidden gotchas for all the tech bros out there who love their Teslas. Oh boy, Tesla is ready to sell you out in a New York minute. If we want to talk about loyalty, that loyalty doesn’t flow the other way. But the legal system matters too, and this is why, Fred, you shouldn’t get depressed.
Even though I work a lot in AI safety, and I always call myself the Debbie Downer of AI, I’m actually a very optimistic person. [00:39:00] I just happen to work in a field where it’s patently clear to people who are experts that, oh my God, we’ve really made a few mistakes here, even in light of reduced regulation. It’s funny: politically, I’m not a member of any party, but if I were, it would probably be libertarian, because I’m really not a fan of regulation unless we absolutely have a need.
This is one of those cases. In self-driving cars, particularly for remote operations, we have a huge need, and it’s not being met. And in that feedback loop of what happens when bad tech gets deployed, because the government is not going to step in and use administrative law to fix the problem, the way America works, the self-correction mechanism is the courts.
And have faith, Fred. The courts will get there; it just takes a lot of time. [00:40:00] But I promise you, when a Tesla case or a Waymo case finally goes to trial, all the data being uncovered through discovery becomes public. Now, we may not get there, because the companies are likely to settle, precisely because they don’t want their secrets coming out.
But eventually a court case will get through where this data is made public. And I do think that when a jury sees what is not happening behind the scenes, meaning just how sloppy the engineering is, what testing is not happening, and what companies are doing to cut corners, it’ll be a bad day for those companies.
And maybe you think, oh, that’s too Pollyannaish, thinking the courts will actually fix it. I’m not saying they’ll fix it, but there are other mechanisms for us to [00:41:00] have the truth come out, even if something like the standing general order goes away.
[00:41:05] Fred: I do admire your optimism. That’s always a good thing.
There is one sleeper issue, I think, that is going on and does not get any attention, so I’ll bring it up now. We took a run at the Massachusetts state requirements for safety inspection last year, to see what would have to be changed in order to accommodate self-driving vehicles. In particular, what defines a self-driving vehicle is that it’s driven by software; many safety-critical functions are handed over to software.
None of these are visible in a safety inspection. None of them can be perceived by human senses. There’s no mechanism anywhere that I’ve seen, in any inspection procedure, for anyone to validate or even verify that the safety-critical functions embedded in the software are performing properly, that the [00:42:00] parameters are within bounds, and that the car is in fact safe to operate because the safety-critical functions are in good shape.
I didn’t see anything about that in your research gaps, and maybe it’s not appropriate there, but it seems to me that some states will use the approach Texas did, which was to just remove any requirement for a safety inspection of the cars. But states that do have a safety inspection requirement, I think, should somehow accommodate those safety-critical functions that are embedded in software, particularly for heavy trucks that may be used over the highway.
Have you had any thoughts on that?
[00:42:39] Missy Cummings: Yeah, it’s a complicated set of issues, and I appreciate why you’re struggling with this, right? As we think about inspections: you have an annual inspection for a passenger car, and you have different inspections for commercial trucks, for example,
and those happen on an annual basis. Because they’re physical systems, we’ve got [00:43:00] very clear standards: yes or no, your oil levels are here or they’re not. It’s much easier than functional safety testing, which is a big part of software engineering. So that need for inspection, as it were, shifts to the point of production and the point of pre-deployment, for all vehicles, right?
Because software gets pushed to all vehicles. It’s not just Tesla; other companies are coming up with over-the-air updates, and I do believe that will be the industry standard going forward as more and more car companies become software-proficient, which, by the way, has not happened yet. Car companies are still struggling to become software-proficient.
That responsibility would then, from a statutory-authority perspective, [00:44:00] belong to NHTSA. So NHTSA would be the one to do some kind of functional inspection slash testing. Not necessarily that they would do it themselves, but they would be responsible for instituting the rules to make it happen. And there have been discussions about having third parties do that.
You’ve seen this in other legislative forums: the idea that a third party would do some of the testing of the software and give a company a stamp of approval so that it could move ahead with some kind of deployment. So people are talking about it, but it hasn’t happened.
And because NHTSA has for so many years been a self-certification regulatory authority, meaning you just attest yourself whether or not your systems are good enough, it’s going to be hard to change that. So I would tell you that the [00:45:00] software-inspection issue is inextricably tied to our self-certification process.
And if we can’t ever move beyond self-certification, then we will never fix the inspection problem, the code-inspection problem.
[00:45:17] Anthony: Michael, what are the odds of us moving away from self-certification?
[00:45:21] Michael: In the next few years? Very low. I could see a system where we left in place the traditional self-certification methods for Level 2 and lower vehicles, maybe Level 1 and lower, but then added a type-certification regime for the higher, more autonomous vehicles as they come out.
That could happen, but it’s not going to happen in the Trump administration.
[00:45:50] Anthony: Bit of a cynic, I see.
[00:45:52] Challenges in Remote Operations
[00:45:52] Anthony: I want to jump back to what we were talking about before with remote operations, because this was interesting. So with Waymo, you’re out there, and [00:46:00] are the remote operators paying attention the entire time,
or only when there’s an issue? Or do we not know?
[00:46:08] Missy Cummings: I can’t say for sure. I have been inside an old remote operations center of Waymo’s in Phoenix. But I’ve spent my whole professional life doing remote operations of some sort for all different kinds of vehicles, including driverless dump trucks in Australia and military drones.
When something is going on, when there’s a problem, the remote operators are overworked; they just have so much going on, it’s very difficult for them. But most of the time, if things are steady-state and operations are happening as predicted, you can bet the operators are on their phones.
It’s just a fact of the matter that in any kind of boring, what we call supervisory control, environment, where humans are supervising some set of robots, the [00:47:00] human brain is terrible at what we call the vigilance task. We don’t like to just watch; we don’t pay attention. Even if you don’t pick up your phone, your brain is going to wander.
We call it mind wandering. So this is why remote operations is so tricky: you have to give people enough to do to keep them cognitively engaged, but if it’s too much, they start dropping tasks. On both ends of the spectrum you’ll make mistakes: if you’re overworked, you’ll make mistakes, and if you’re too bored, you’re going to make mistakes.
And yes, this is why remote operations is harder than it sounds: there’s a reliance on a human who may not be paying attention.
[00:47:49] Anthony: I think that’s just another strike against the business case for this. It doesn’t sound like they’re saving much money. I can see it if you’re overseeing a dump truck at a Rio Tinto [00:48:00] mine in Australia: if it goes off course a little bit,
probably no one’s going to get injured or die. And the military, I imagine, uses a closed network for drone operations; they’re not going over the public internet, so their latency is just going to be distance, right? Whereas these guys, if they’re going over the public internet, like they’re playing Warcraft or something, their latency just went up 50 milliseconds, which is a lot.
[00:48:30] Missy Cummings: I’m so glad to hear you say that, because you’re absolutely right. I’m always at war with some professional society; in fact, some teleoperations group is just outraged, outraged, that I dare say teleoperation, remote driving, is just a non-starter. And it’s because you are right: the typical approach for a self-driving car company’s remote operations is to use three different cell networks. [00:49:00]
And so they’re going over cell networks.
[00:49:02] Anthony: That’s right.
[00:49:03] Missy Cummings: Yes. Yes. I know. It’s so great to hear you laugh, because unless you’re actually in this world, you don’t appreciate that one cell network is bad, so if you use three, it’s just three times the badness. You’re not guaranteed that it’s going to improve your latencies.
And I’ve seen some research, and been part of some projects, where I’ve seen the latency from these three cell networks, and it’s on the order of half a second. And you have to remember, these systems are truly not tolerant of delays of more than, I would say, milliseconds; it would really need to be more along the lines of 10 milliseconds, and there’s no network like that. You could have 100 cell networks, and you’re still [00:50:00] never going to get a guaranteed maximum delay of 30 milliseconds.
So we just really have to rethink this. But of course, rethinking it costs money.
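Editor’s note: a toy Monte Carlo, with entirely assumed numbers rather than measured data, illustrates the point above: taking the fastest of three heavy-tailed cell links improves the typical case, but the worst-case tail, which is what safety budgets care about, stays far above tens of milliseconds.

```python
import random

random.seed(0)

def cell_latency_ms() -> float:
    # Assumed heavy-tailed link: median around 55 ms, with occasional
    # multi-hundred-millisecond spikes.
    return random.lognormvariate(4.0, 0.8)

# Fastest-of-three bonding: each "packet" takes the best of three links.
samples = sorted(min(cell_latency_ms() for _ in range(3)) for _ in range(100_000))
p50 = samples[len(samples) // 2]
p99 = samples[int(len(samples) * 0.99)]
print(f"median {p50:.0f} ms, 99th percentile {p99:.0f} ms")
# The median improves, but the 99th-percentile tail stays well above a
# 10-30 ms budget for safety-critical remote driving.
```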
[00:50:10] Anthony: Yeah, depending on where I am in my own apartment, my cell phone coverage drops or changes. I totally forgot that they’re connected over cell networks. And we saw that with, what, Cruise a couple of years ago, where they all stopped working in San Francisco because a lot of people were using the cell networks.
Oh my God. Everything will be better in the future, Fred. Everything will be better.
[00:50:30] Fred: I’m waiting for it. Impatiently.
[00:50:34] Anthony: I can tell.
[00:50:35] Concluding Thoughts and Future Outlook
[00:50:35] Anthony: I think we’ve taken about an hour of your time at this point. You guys have any more questions before we wrap up?
[00:50:43] Fred: Do we have more questions? Oh, sure we do. We could scratch on for hours more, but I’ve got a whole list here.
But come back on for another couple of weeks, if you would, and we’ll get through our list of questions.
[00:50:57] Missy Cummings: If you have any more burning questions, [00:51:00] I’m happy to answer them. Or else we could revisit this annually and talk about what didn’t happen in the past.
[00:51:08] Anthony: Most definitely.
We’re going to put a link up to your IEEE article. It’s a very readable article. I’ve come across some IEEE papers where I’m like, I need a dictionary and some bourbon to get through this. This one is not that at all. I had to zoom in on the type, but that’s just age.
Other than that...
[00:51:27] Fred: Oh, for your information, we’re working hard to bring Anthony along. And we do have some signs of progress, but it’s still a work in progress.
[00:51:34] Anthony: In terms of what? Improving my vision? What’s happening here?
[00:51:38] Fred: We don’t need the details.
[00:51:40] Anthony: Oh, God, no.
[00:51:41] Missy Cummings: It’s a compliment for you to say it’s readable.
Thank you so much. I work hard to be an edutainer: I want to both educate and keep people’s interest. And if we use big words, five-dollar words that nobody appreciates, then nobody reads it.
[00:51:55] Anthony: And this was a great article. I really enjoyed it. We’ll have a link to it. So again, [00:52:00] a big thank you to Missy Cummings, director of George Mason University’s Autonomy and Robotics Center.
[00:52:05] Fred: Thank you so much. Pleasure to meet you.
[00:52:08] Anthony: Thanks. Until next week. Okay. Bye everybody.
[00:52:13] Michael: For more information, visit www.autosafety.org.