Edge Case Research

This week we are joined by Mike Wagner and Ben Lewis from Edge Case Research.

They are working on making AVs safer and providing insurance to AV companies.

Subscribe using your favorite podcast service.

Transcript

Note: this is a machine-generated transcript and may not be completely accurate. It is provided for convenience and should not be used for attribution.

Anthony: Welcome to another episode of the Center for Auto Safety Podcast.

And this week we’ve got special guests: Mike Wagner, CEO of Edge Case Research, and Ben Lewis, VP of Edge Case Research. And so, briefly, my understanding of what you guys are doing is working to insure automated vehicles. I can never get any of these acronyms correct, but, so, self-driving cars, you guys are putting insurance on this?

Mike Wagner: Yeah. Thanks, guys, for having us on the show. Really appreciate the opportunity. So, Edge Case is a startup company headquartered in Pittsburgh. We’ve got folks throughout the globe: an office in Munich, and folks like Ben throughout the country. We started the company to try to make autonomous vehicle technology safer.

I come out of Carnegie Mellon University. I had helped create some early versions of this technology, got really interested in how we could make it safe, and spent some time researching some different technologies that we thought were pretty important for doing that. But then we realized that the technical challenge was interesting, but not the whole puzzle.

There are a lot of business challenges, legal challenges, cultural challenges. We set out on a journey to try to help autonomous vehicle developers build systems that are safer. And so we took a couple of steps to do that. The first one is we helped publish the UL 4600 standard, which was the world’s first standard for the assessment of the safety of level four vehicles. That standard came out about two and a half years ago. We then started building technology to help customers conform with that standard, to live up to it and to be able to achieve all the different things that standard is asking for.

It’s 300 pages of pretty in-depth requirements, and that is hard, right? That is a long journey that a lot of our customers are taking. In 2019 we raised a round of venture investment, and one of our investors was Liberty Mutual Strategic Ventures. And so we started learning more and more about what insurance might do with some of the safety assessment technology we were building.

And we realized that we could really help advance our mission if we aligned insurers with true safety principles from UL 4600, and that this was a great way to incentivize the industry to make the investments in safety engineering that we want to see them make.

And yeah, now we’re getting into the insurance business. As a robotics guy from CMU, I’m surprised to say that I’m becoming an insurance salesman, but that is what’s going on. And I think it’s really important: as we talk to the market, everybody, from the operator side, the developer side, the insurer side, says that this is a huge need.

So we’re just really glad to be bringing UL 4600 principles to the market.

Anthony: Level four cars. These are the Waymos and the GM Cruises that we’ve talked about?

Mike Wagner: That’s right. Ben, do you want to cover what some of our customers look like and what the space looks like?

Fred: Wait, hey, let me interrupt. For some of our listeners, you might want to tell them what UL 4600 is.

Mike Wagner: Yeah, of course. UL 4600, like I said, is the world’s first safety standard that defines how to assess the safety of a level four vehicle, so a fully autonomous vehicle.

You can go online and look at a digital version yourself. It’s 300 pages of requirements for, if you’re going through and looking at a particular autonomy technology, what do you need to look for? And it’s all based on what’s called a safety case, which is a structured argument, backed by evidence,

that argues that an autonomous system is safe for its intended use. So it covers a variety of topics: how you’re looking at different risks and hazards, how you’re testing the machine learning itself, how you’re defining the operational design domain.

It’s really quite an important milestone in the autonomy industry to have that published, and it’s being maintained actively today. And it’s not just Edge Case; there are folks from the automotive industry, the insurance industry, the public sector, all involved in the generation of the standard.
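To make “a structured argument backed by evidence” concrete, here is a minimal sketch of a safety case as a claim tree, with each claim supported by sub-claims or evidence. The class names and example claims are invented for illustration; this is not UL 4600’s prescribed schema or Edge Case’s actual representation.

```python
# Minimal sketch: a safety case as a tree of claims backed by evidence.
# Illustrative only; not UL 4600's schema or Edge Case's implementation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Evidence:
    description: str   # e.g. a test report, audit result, or data analysis
    link: str = ""     # pointer to the underlying artifact

@dataclass
class Claim:
    statement: str
    children: List["Claim"] = field(default_factory=list)
    evidence: List[Evidence] = field(default_factory=list)

    def supported(self) -> bool:
        """A claim holds if it has evidence or sub-claims, and every
        sub-claim holds; an unsupported leaf is a gap in the argument."""
        if not self.evidence and not self.children:
            return False
        return all(child.supported() for child in self.children)

top = Claim(
    "The autonomous system is acceptably safe for its intended use",
    children=[
        Claim("Hazards in the operational design domain are mitigated",
              evidence=[Evidence("Hazard analysis report, rev 7")]),
        Claim("Machine-learning components meet their safety requirements"),
    ],
)
print(top.supported())  # False: the ML claim has no evidence yet
```

The value of the structure is that a gap, an unsupported leaf claim, is visible to an assessor rather than buried in prose.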

Fred: And is this the same UL that puts the little stickers on my air conditioner?

Mike Wagner: The one and only Underwriters Laboratories. Which, interestingly enough, is right there in the name: they were created to come up with standards to assess the safety of all different kinds of things. So while autonomy is different in some interesting ways, it’s the same general idea. And yep, that UL.

Fred: All right. Thanks.

Michael: And also, the safety case idea is something I think is really important for listeners to understand here. It’s not really what you might think of. In our experience in the past, a lot of safety cases are made on paper; they’re documents submitted to NHTSA or some other regulator.

In this case, it is an active and ongoing case that has to be updated to account for new and unforeseen risks or problems that might arise in the operating environment.

Mike Wagner: Yeah, you said it perfectly. Safety cases are used in a number of safety-critical industries today: aviation, nuclear, rail, whatnot.

And members of our team have experience applying them in those sectors. But like you say, typically they’re a work product that is generated when everything is done. And with a novel technology, there are so many uncertainties. We’re called Edge Case Research for a reason, right?

There are so many uncertainties that your safety argument needs to explain how you’re going to be monitoring for those surprises when you start to deploy, and how you’re gonna deploy responsibly, right? Keeping track of everything, fixing problems that you find in a proactive way, and scaling up that way.

So yeah, when we talk about a safety case, it’s not just a document. It is a document, for sure, and there are process deliverables that need to be presented as evidence. But you also need the data. You also need the safety performance indicators.

Anthony: So how does this work? How do I get insurance for an AV? As an individual, I had to take a written exam.

I had to take a practical exam, and then as a 16-year-old they’re like, this is how much it’s gonna cost. And then magically, when I turned 25, they’re like, we’ll make it less. Who gets the insurance for an AV? Is it the car? Is it Waymo? Is it me, if I buy an AV? How does that work?

Mike Wagner: I will let Ben demystify the whole process for you.

Ben Lewis: Excellent. Yeah, sure. We talked earlier about the level four autonomy that we’re going after, and for us the primary focus is on insurance in a B2B model. So we’re looking to offer insurance to the entities that develop and deploy these level four vehicles out in front of the public, which is really predominantly how we see level four autonomy getting out into the world. And for the insurance, we’re talking about creating commercial lines insurance products that companies like a Waymo, like you mentioned earlier, would purchase on the developer side,

or that the fleets that are going to be adopting these vehicles would purchase. Typically, the key insurance coverage there is auto insurance, the same flavor of what we all buy as individuals. And so for us, we’re really looking to take that safety case framework as our risk assessment framework.

And so instead of saying, I don’t know, I guess the co-CEOs of Waymo will take a written exam and we’ll put them out on the road, or we’ll put the Waymo driver out on the road for some practical exam, and then when the Waymo driver turns 25, I guess we’ll give it a little bit of a benefit for its driving experience...

No, we’d want to look at a company’s performance and their risk level in relation to a safety case and the associated requirements, and say, how do they stack up in these key areas that we look at across the safety case? And therefore we can not only say something about the level four autonomous driver and the technology, but we can also say things about the company and their organization and their safety culture.

We can say things about the company’s operational performance: not only are they good at producing technology, but are they also good at putting it out on the road and actually operationalizing it in a safe way? And so we’re looking to take those elements and essentially replace the traditional methods that we use to look at human drivers.

And then in some cases we’ll keep some things that insurance uses today: looking at the company’s financial health, or maybe the organizational structure they use, which might relate to the safety culture but might also tell us a few things about how they manage their risks overall.

So we’re essentially taking some of the existing methods and retaining them. And then in some cases, particularly where we’re talking about the core risk of driving a vehicle, we’re stripping out the things that are really just particular to humans and replacing those with elements that are particular to software drivers and to technology.
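To picture how safety-case pillars could stand in for the written exam and the driving record, here is a hedged sketch of a composite risk score an underwriter might compute. The pillar names, weights, and scoring scale are hypothetical, not Edge Case’s actual rating model.

```python
# Hypothetical composite: safety-case pillar scores plus one retained
# traditional factor, combined into a single underwriting score.

def av_risk_score(pillars: dict, weights: dict) -> float:
    """Each pillar is scored 0.0 (weak) to 1.0 (strong); higher is better."""
    assert set(pillars) == set(weights), "score every weighted pillar"
    return sum(pillars[name] * weights[name] for name in weights)

weights = {
    "autonomous_driver_performance": 0.40,  # replaces the human road test
    "operational_safety": 0.25,             # fleet operations, SMS health
    "safety_culture": 0.20,                 # organization, incident handling
    "financial_health": 0.15,               # retained traditional factor
}
applicant = {
    "autonomous_driver_performance": 0.8,
    "operational_safety": 0.7,
    "safety_culture": 0.9,
    "financial_health": 0.6,
}
print(f"composite score: {av_risk_score(applicant, weights):.2f}")  # ~0.77
```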

Anthony: This is our favorite part of the show, where Michael forgets to unmute himself. Every time.

Michael: Every time. I think I made it through one episode last week; that was my first. One of the things I’m fascinated by here is, right now the way the vehicle industry works is manufacturers basically self-certify that their vehicle is going to be safe, matching up with the minimum standards set by the government in the Federal Motor Vehicle Safety Standards.

And when it comes to new technologies like automated driving systems, and even things like automatic emergency braking and ADAS crash avoidance systems, those minimum standards don’t really exist yet at a federal level. And so there are really no minimum performance standards.

We see automatic emergency braking working in some circumstances and not in others, working well for pedestrians for some manufacturers and not so great for others, depending on the conditions. And it seems like, in the absence of federal regulation, manufacturers are having to go out and get insurance, but to get that insurance they’re having to basically jump through some hoops by means of a safety case software setup.

It creates a synergy there, because the underwriters want that safety case software to be as good as possible and as thorough as possible, to ensure that they’re not losing money in the deal. So a safety synergy, I think, is the word I’d use for what’s created there, and that might help fill some of the gaps we see in regulation at the federal level and state level.

Mike Wagner: Yeah, I think you put it really well. That’s been one of the things that we’ve been, I guess, challenged with over the almost decade that I’ve been running Edge Case. We have a whole other business unit that we’re probably not gonna spend too much time on today.

That’s in the world of defense, and in the defense world every new system that gets generated needs a safety assessment done, according to government regulation. And so there we have a very clear entry point where we can provide the assessment, and it’s not just a hoop to jump through; it’s really feedback to try to improve the system and make it better for everyone.

It’s always been really challenging in the commercial space, and that’s one of the reasons why I’m excited about what Ben’s team is doing at Edge Case as well. To be clear, I think all these regulations need to be figured out as well. And we’ve been quite open with those stakeholders at state and federal levels, as well as international, about how we are planning on doing our assessments. We’ve been talking about a safety case. The details of a safety case are going to have some tailoring to a particular customer’s technology, but the top-level approach that we’re using, we call it the In-Loop safety case framework, named after our platform, and that framework we’re sharing with everybody who’s interested in learning about it.

Both commercial customers, any kind of stakeholder, but also folks at state and federal levels, so that we can raise the level of safety. I don’t know where those regulations are gonna end up. I don’t know when they’re gonna end up. But if we can learn some lessons and provide that as feedback, I think that’s important.

Ben Lewis: One key thing I’d add, too, to some degree from the regulatory side of things here, and then also for the standards themselves. I come from the insurance industry, that’s my background, and the way I’ve seen it in the past is that those get looked at as checkbox exercises.

In my prior life, when we would look at autonomous vehicle developers at one of the big insurance companies in the US, you would have a list of these standards, ISO 26262 maybe, and some of the other ones. This was before 4600 was published. And you’d be looking for those things, but it would pretty much be a conversation like, okay, we’re aware of the relevance of these standards.

Or in some cases, where they existed, we’re aware of the relevance of these regulatory frameworks. But the conversation would really just be like, okay, do you adhere to ISO 26262? We do. Okay, check, great. That’s awesome. There really wasn’t a lot of depth there, and there wasn’t a lot of auditing and verification around that.

And what we’re trying to do is really flip that on its head: not just ask whether companies have a safety case, or maybe even get one level down and ask what it consists of as far as the key pillars, but actually do that kind of analysis, hold it up against the 4600 framework and the In-Loop safety case framework, and do that assessment.

And then not only that, but we have named our safety case framework after In-Loop, our platform, for a reason. We would like to use our platform and our engineering expertise to check in on that company’s performance against that framework over time. So not just get a snapshot of their level of risk or safety at one point in time, but really look at them and be close to the customer along the way, so that we can recognize changes in risk and safety over time. And where appropriate,

we can reward that. If companies are experiencing improvements in risk, or their safety profile gets better, we actually want to be able to recognize that and, as Mike said, create an incentive framework around it that we can put to use. So I think this is a really big difference for us.

I’m used to a world of very short snapshots, taking a look at a particular point in time at a company’s profile, at a fairly superficial level. And now we’re really looking to get a lot more depth and dynamism in how we’re looking at the risk.
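To illustrate the difference between a one-time snapshot and that kind of ongoing view, here is a small sketch that tracks one safety performance indicator across review periods and flags sustained change. The metric, window, and thresholds are invented for the example.

```python
# Invented example: quarterly SPI violation rates per 1,000 miles,
# compared window-over-window instead of judged once at binding time.

def spi_trend(history: list, window: int = 3) -> str:
    """history: violation rate per 1,000 miles, oldest value first."""
    if len(history) < 2 * window:
        return "insufficient data"
    recent = sum(history[-window:]) / window
    prior = sum(history[-2 * window:-window]) / window
    if recent < 0.8 * prior:
        return "improving: candidate for premium credit"
    if recent > 1.2 * prior:
        return "deteriorating: trigger an engineering review"
    return "stable"

quarterly_rates = [4.1, 3.8, 3.9, 3.2, 2.9, 2.5]
print(spi_trend(quarterly_rates))  # improving: candidate for premium credit
```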

Michael: That’s interesting. I suppose you could also raise the rates on someone, to put it in normal commercial auto insurance terms. If my risk was exceptionally high that day, or you noticed that I had a tire pressure monitor alert, my costs might go up that day. I’m not sure how insurance works in that way, but it at least sounds like it would allow you some type of live monitoring, to even detect risks in some way, maybe before the manufacturer was able to.

Mike Wagner: Yeah, there’s gonna be a lot of innovation here, because we’re going from a world where even just tracking the number of miles you’ve driven is an insurance innovation, to a world where we have all of these safety performance indicators being actively tracked.

We’re a startup, we’re taking one step at a time, right? But you could imagine a future where the risk of a particular operational design domain or a particular route gets taken into account. All those things are, I think, really interesting to us, as long as they’re aligned with real safety standards and real safety outcomes.

Yeah.

Fred: We have long advocated for third-party review of self-driving vehicles before they’re allowed to be on the roads. We’ve also advocated for a progressive licensing process, in which you have increasingly critical examination of the vehicle as you expand the operating environment that is allowed for that vehicle.

We’ve looked at UL 4600 as a great way to form a contextual basis for this third-party review, and if the insurance companies can implement that, great. But we think that this technology is akin to airplanes, because you’re pushing out something that is using technology to keep people alive, and the people who are using it, the customers, are pretty much unconscious of exactly what’s included in it and how to control it. So we think that there needs to be a third party to assure that there’s an adequate level of safety and consciousness, all of those things that you’ve talked about, in the vehicle before it’s allowed to go out on the road.

We don’t think there’s ever any possibility that you can simply drive them far enough to make them safe.

Mike Wagner: Yeah, on that I would agree with you completely. And we think that as an insurer we’re in an interesting spot. We’re putting our money where our mouth is, saying, hey, these are what we think best practices are, and we’re gonna take a piece of this risk in exchange for you following them. We think that’s an important incentive. But I agree with you that it’s a piece of the puzzle, not the whole thing. I think we are getting to the point where independent assessors are gonna be able to take a look at the technology in a consistent way.

Just to be blunt about it, because I think we’re all fellow travelers in trying to get to the right safety goals here: a lot of people look at 4600 and say, how the heck do you even assess that? It seems like safety cases are gonna be different here, there, everywhere.

This is where I think it’s important to bring in some of the roboticists, because there are only so many robotics professors across the world. I know many of them, maybe a quarter of them, something like that, and they only teach so many ways to solve this problem.

And so if we can take it a level down and actually instantiate, again, a safety case framework that starts to lay some of these things out, I think we’re gonna see in the next year or so the ability of some traditional certification organizations, with these kinds of completely independent certification credentials, to come in and make some progress on where you’re headed there.

So I’m excited about that too. I have to be a little bit circumspect, but I think we have a specific plan that Edge Case is working on to enable that. And even independent of our insurance play here, I think that’s gonna be a really great outcome.

Michael: And another thing about having those safety performance indicators and having the safety cases built is that I think it might ultimately allow for better public transparency about the performance of these systems. When vehicles arrive that can take you to California while you sleep, it might help consumers who lack confidence in those systems, and some of the hesitancy we see from the public about AVs at the moment.

Mike Wagner: They should be hesitant, right? Until it’s proven out, they absolutely should be. And there are important differences, just to add on to what Fred was saying, between the history of aviation and the history here. Mostly, if you’re getting on a plane, you’re the one taking the risk.

Now, obviously, the plane could crash into a school or something, but here, by definition, these cars are driving around the rest of us. I had kids at a summer day camp in Pittsburgh, and a couple of years ago a company that was testing in Pittsburgh sent a test vehicle up into the line where these kids were being picked up.

I know firsthand that it’s not just the guys getting in those cars who are taking the risk; it’s the rest of us.

Anthony: Yeah, I couldn’t agree with you more. So how do you imagine working with companies doing regular software releases? That first version comes out, you’ve tested it, you’ve followed it, and then they’re like, hey, you gotta do a hot fix.

Tesla does this regularly, where they’re constantly putting out OTAs. Are you imagining that you’re getting a feed of this and reviewing it, or you’re working hand in glove? What’s the realistic scenario versus the dream scenario?

Mike Wagner: Yeah. The dream scenario: again, I’m a technology guy.

I love autonomy. I would love to have a standards-based process accelerated so that you can push a hot fix in a day and have a hot fix that’s actually certified and evaluated appropriately. We’re not there at all. And so I think expecting today to have the same speed of updates that you have from non-safety-critical software platforms is not realistic.

However, there’s huge value in having hot fixes, right? I think not only the quality but the safety of this technology is gonna be improved by setting up a control loop, right? Where we say, we’re learning, we’re monitoring these safety performance indicators, or SPIs.

We’re monitoring them out in the world. We have a team with a robust safety management system that’s looking at any violations and looking for things that are unusual, analyzing that risk, coming up with a mitigation plan, re-verifying that mitigation, and fielding it.

Now, that’s a whole long process, but I think it’s critical to get it a little bit faster and more efficient, because that is the way that you can say, hey, we realized there was this problem with our pedestrian detector. We spotted it on Wednesday, and we analyzed the problem.

By Friday we have a solution, we’re verifying it, and so we can get it out in a reasonable amount of time.
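A minimal sketch of the monitoring half of that control loop: compare fielded telemetry against the thresholds the safety case commits to, and open an incident for anything out of bounds. The SPI names and threshold values are invented for illustration; the analysis, mitigation, and re-verification steps Mike describes are human-run, not automated here.

```python
# Invented SPIs and thresholds, standing in for whatever a real
# safety case commits to monitoring.
SPI_THRESHOLDS = {
    "pedestrian_detection_miss_rate": 0.001,   # per detection opportunity
    "hard_braking_events_per_1k_miles": 2.0,
    "safety_driver_takeovers_per_1k_miles": 5.0,
}

def check_spis(telemetry: dict) -> list:
    """Return one open incident per SPI exceeding its threshold."""
    incidents = []
    for spi, threshold in SPI_THRESHOLDS.items():
        observed = telemetry.get(spi)
        if observed is not None and observed > threshold:
            incidents.append({"spi": spi, "observed": observed,
                              "threshold": threshold, "status": "open"})
    return incidents

weekly = {"pedestrian_detection_miss_rate": 0.0004,
          "hard_braking_events_per_1k_miles": 3.1}
for incident in check_spis(weekly):
    print(incident)  # flags the hard-braking SPI for analysis
```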

Fred: For sure, but doesn’t it run counter to the whole idea of software validation? If you get something out really quickly, it’s impossible to have completely validated that software fix.

So doesn’t it introduce as much risk as it solves?

Mike Wagner: It can. That’s why you need a robust, multi-step validation process. So one of the things that I think is interesting is for the industry to come up with a way to really use simulation productively, and to have metrics that you’re tracking to help you trust some of the simulation.

And then having a process where, when you push the hot fix, you’re pushing it in a very limited sort of way, so that you can validate it in vehicles with safety drivers, in that kind of responsible way. You don’t wanna take any software and just push it out to the whole fleet without an incremental process.

So that’s one of the issues; that’s a circle that needs to be squared to be able to do this.
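One way to picture that incremental process is as a series of release rings, where a fix must pass each ring’s gate before reaching a wider slice of the fleet. The stage names and pass criteria below are hypothetical, a sketch of the shape of the process rather than anyone’s actual pipeline.

```python
# Hypothetical release rings for a hot fix: each gate must pass before
# the fix advances; nothing goes straight to the whole fleet.
ROLLOUT_STAGES = [
    ("simulation",   lambda m: m["sim_regressions"] == 0),
    ("closed_track", lambda m: m["track_failures"] == 0),
    ("limited_fleet_with_safety_drivers",
                     lambda m: m["spi_violations"] == 0),
    ("full_fleet",   lambda m: True),
]

def next_stage(current: str, metrics: dict) -> str:
    names = [name for name, _ in ROLLOUT_STAGES]
    gate = dict(ROLLOUT_STAGES)[current]
    idx = names.index(current)
    if gate(metrics) and idx + 1 < len(names):
        return names[idx + 1]
    return current  # hold here until the gate passes

print(next_stage("simulation", {"sim_regressions": 0}))  # closed_track
```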

Fred: So, more of a tepid fix. I don’t know how to fix that.

Mike Wagner: Yeah, you’re right.

Anthony: But you’re seeing that more as something internal to the companies developing the software, as opposed to a third party, for example, you guys, doing that review?

Mike Wagner: I think everybody needs to be on the same page about how it works. That’s thing number one: what does the process need to look like? Once you have that in place, then you can do a couple of different things, and some of these things are things that Edge Case is doing in the insurance world, right?

One is to check on the health of a process. And this is something that a lot of industries have, right? Quality management processes and whatnot. Even in safety-critical industries, you can go in and make sure that an ISO 26262 process is being executed correctly, by trained personnel.

So you can look at the health of the process. That is super important. And to be clear, a lot of robotics people don’t know anything about that, because they’re coming in purely from a technology, hey-let’s-look-at-the-stats kind of background. And you can’t get rid of the process auditing side of things.

So that’s critical. But then also, once you’ve agreed on the goals, you can have these safety performance indicators and report out: hey, here’s where we are with all those different thresholds, and this is why those thresholds make sense, because you’ve reviewed the safety case. And now you can have, I think, a pretty useful technical interchange as well, which is a big breakthrough.

Five years ago we wouldn’t have been able to have that conversation, but now we can.

Anthony: And do you see the OEMs wanting to go along with this, instead of trying to keep everything closed off and proprietary?

Mike Wagner: That is challenging, right? Part of the way Edge Case operates is to provide direct value back to the developers themselves.

So if we come into this conversation saying, open your books, we want to take a look around, you and I both know how that’s gonna work. And honestly, that doesn’t sound like a very useful conversation to have unless it’s backed up, again, by regulatory requirement, and given the reality of today, that doesn’t always fly.

But if what we say is, hey, if you give us access to some of these metrics, some of these SPIs, if you let an independent certifier come in and check the health of your safety management processes, then we can provide you some feedback, because we know you’re still trying to solve these problems yourself, right?

Sometimes you haven’t pulled your safety drivers yet, so couldn’t you use a little bit of our advice, since we’ve been working with the whole industry? And usually they do value that. So we can get into a conversation with them that’s productive immediately, and that makes it worth their while.

Anthony: Very cool. Are you guys providing insurance right now to anyone?

Mike Wagner: We are not right now; we are currently rolling that out. But we are working with pretty much the entire market segment that we’re starting with, which is the level four trucking world. And so yeah, we have active engagements with basically everybody in that space.

And so yeah, we’re super excited to bring all of those into our insurance world.

Anthony: And I imagine with trucking there’s always gonna be a safety driver, at least in the foreseeable future. So what are you doing, do you have two types of insurance?

In that case, when it’s in AV mode there’s one type of insurance, and when the guy takes over the wheel it’s a different type of insurance?

Ben Lewis: Yeah. From a coverage perspective, the coverage you provide via the insurance policy is pretty much the same coverage, at least today.

There’s certainly a possibility to do some innovation there, but auto insurance is so heavily regulated that you basically need to carry what you need to carry, regardless of whether it’s a completely manual vehicle, there’s a safety driver behind the wheel, or there’s no driver in this vehicle someday.

So the premise there is pretty consistent in terms of what gets covered under the policy. We do plan on having products that reflect those different scenarios and the risk level that might change between them. That’s something we wanna do, again, because there is a difference there in terms of the qualities and characteristics of the risk.

And then also, we’re proponents of autonomy generally, and again, if folks can develop safe autonomy, we’d like to be a backer, a supporter, a booster of that. So we’re trying to build insurance that is oriented towards getting fleets further up the food chain when it comes to developing and deploying safe autonomous technology.

Anthony: Very cool.

Michael: And I also noticed, when I was checking out your website, that you do some work on ADAS, or crash avoidance vehicles, and safety cases for that. How does that differ from this seemingly incredibly complex safety case for automated driving systems, when you step back a little, or drop down a couple of SAE levels, to level two type crash avoidance, ADAS, and driver assistance?

Mike Wagner: There are a bunch of differences, excuse me. But if we’re looking at a function that is taking the wheel and that is responsible for avoiding risk, I gotta give a pointer to Phil Koopman’s levels of autonomy designation.

I think that’s the right way to think about it: who’s in charge of avoiding the accident, right? And at level three there clearly are functions, automated lane change and things like that, that if not built the right way are gonna pose substantial risk. Is it the same?

No, because they’re not constantly engaged, and you can have all kinds of different caveats there. But fundamentally, at some point they get engaged and they’re responsible for doing everything, and so a lot of the same technical approaches apply. One of the things that we are doing with our safety case framework, and again, we’re sharing this with everyone, so happy to dive into details here, is defining how to accomplish, very practically, some of the steps in the ISO safety of the intended functionality standard.

So the SOTIF standard, ISO 21448. That’s a standard written with ADAS in mind, and it’s a good way to think about hazard analysis for some semi-autonomous and autonomous algorithms. That’s an important piece of our safety case framework, and it’s actually what you’re supposed to be doing in early simulation and in some of the “tepid fix” pre-release activities.

And so we can use all those same things for an ADAS. Your risk acceptance criteria are gonna be different because your exposure’s different, your concept of operations is different, but a lot of the pieces are still extremely valuable.

Fred: You mentioned ISO, which, for our listeners, is the International Organization for Standardization, and something that’s adhered to much more in Europe and other parts of the world than it is in the United States.

But it’s very important. One of the things that I’ve noticed is that Europe has a well-developed ethical case for what should govern autonomous vehicle development. There’s nothing like that in the United States, and I would think this is a very important aspect of the design process: making sure that you’re not relying on engineers who are not trained in ethical considerations to implement safety rules and regulations in software without ethical oversight, if you will, to make sure that those are consistent with the values and experience that people need for safe transportation.

Is that something that you guys address?

Mike Wagner: Yeah. Part of our safety case framework looks at the risk acceptance criteria. So, how you know when you’re done; again, how safe is safe enough, to give a shout-out to Phil. I think you need to have that, because otherwise you don’t really have a definition of done.

Yeah, there you go. Read that book, and in particular look at the parts about ethical risk frameworks and risk goals, because I think ideally that’s something that society chooses and then pushes down to be implemented. Now, that’s always at a pretty high level, and there are gonna be a bunch of technical considerations.

But for instance, one of the things that the Germans are documenting, right, and I’m paraphrasing here, so I’m sorry that I’m not quoting it exactly right, is basically that no subset of the population will be put at undue risk. So this is like looking at bias in any system.

And this is a huge challenge, because, gosh, you look back at some of the big academic conferences in deep learning and machine learning systems, and I think over the pandemic, in 2021, one of the keynotes, the point of the keynote was to say: you guys aren’t just about the math. You guys are actually doing software engineering, and you need to think about the requirements and the implications of what you’re building.

I’m glad that somebody gave that keynote, but the fact that it was a keynote tells you a lot. And in that case, we need a framework to identify bias and to see whether we’ve mitigated it. Those things are incredibly valuable, and I think there’s a lot of work that can be done to get started today.

But once you get to scale, you’re gonna see pedestrian detection algorithms, for example, that are better at detecting humans who walk on two legs versus humans who move in a wheelchair. These things are gonna appear, and part of why looking at the data is so important is that’s how you’re gonna find these biases.

But you gotta start with these goals. You can’t work your way out of those.

Fred: And you gotta record the data as well, and that implies that you’re going into the system in the car or wherever the data’s being stored, and you have the ability to discriminate that data and interpret it.

How does that work?

Mike Wagner: Yeah, it’s really challenging. There are a bunch of statistical approaches you can use to find the problem, to identify it. This is where the safety performance indicators are so valuable. If, and this is the way we roll here, if you have a bunch of simulation data or a bunch of early testing data, and you’ve monitored these safety performance indicators very closely, then essentially you have a statistical model.

And you look at that model and you say: do I have any subsets of these scenarios where maybe I have a much higher risk than average? Because safety’s not about the average risk; it’s about looking at all these different specifics and turning them into hazards and then mitigating them.

So you can use a bunch of known statistical techniques to find that you have an issue. But usually diagnosing where that issue lies, what causes it and how to mitigate it, that’s where you need the safety engineers, the very traditional safety engineers. Finding the root cause is always going to be, at least for the time being, a human-involved activity.

And then, how you mitigate it: a lot of the robotics people think that every mitigation needs to be in the machine learning system itself, but sometimes the best solution is a systems approach, or to limit the operating domain.
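A sketch of the subset analysis Mike describes, using his own wheelchair example: slice the scenario data by an attribute and flag any slice whose SPI violation rate is far above the overall rate. The field names and the 2x flag level are invented for illustration.

```python
# Safety isn't about the average: flag subgroups whose violation rate
# is well above the overall rate, even when the average looks fine.
from collections import defaultdict

def flag_risky_subsets(runs: list, key: str, ratio: float = 2.0) -> dict:
    totals, fails = defaultdict(int), defaultdict(int)
    for run in runs:
        totals[run[key]] += 1
        fails[run[key]] += run["spi_violation"]
    overall = sum(fails.values()) / sum(totals.values())
    return {group: fails[group] / totals[group]
            for group in totals
            if fails[group] / totals[group] > ratio * overall}

runs = (
    [{"pedestrian_type": "walking",    "spi_violation": 0}] * 970 +
    [{"pedestrian_type": "walking",    "spi_violation": 1}] * 30 +
    [{"pedestrian_type": "wheelchair", "spi_violation": 0}] * 80 +
    [{"pedestrian_type": "wheelchair", "spi_violation": 1}] * 20
)
print(flag_risky_subsets(runs, "pedestrian_type"))
# {'wheelchair': 0.2}: a roughly 4.5% overall miss rate hides a subgroup
# that is badly served, which is exactly the bias to surface.
```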

Fred: The other thing I’ve been thinking about is that I don’t believe any driver or automated control system, even a human driver, will ever have complete information about the state of the vehicle, its environment, or its trajectory.

So somehow there’s a stream of consciousness coming into either the human or the machine that says: here are the inputs that are coming in, here’s the sensory information. Now, a human relies upon experience, judgment, and ethics in sorting through this and figuring out, what’s the important thing to recognize in this particular circumstance?

Humans have the ability to pull in a lot of data that simply is not available to autonomous vehicles. One thing I think about, and I notice this when I go out driving, is something like eye contact, when you’ve got several people coming together at a stop sign or at a rotary.

Eye contact becomes incredibly important for your decision as to whether or not you’re gonna proceed. And I don’t see how machines will ever be able to detect, much less interpret, this kind of really personal, fundamental, biological information that we’ve all grown up with, that is organically part of who we are as human beings, and that is essential to the safe control of a moving vehicle.

A car carries as much energy, as we’ve discussed before, as a hand grenade, right? That’s a really difficult problem, and I don’t see a solution to it anytime in the next thousand years or so.

Mike Wagner: Yeah, so again, coming at this from the perspective of an autonomy person, I’ve seen a bunch of progress in my career over the past 25 years.

At this point, the fact that we can do pretty accurate pedestrian detection at all is pretty shocking. If you go back and look at 1998 Mike, who, by the way, had a lot more hair, he never would’ve been able to expect this kind of level of accuracy of detection.

Or honestly, speed. I remember when the Grand Challenge happened back in the early two thousands: I was working on a vehicle that was driving a little less than a meter per second, and then the task was to drive across the desert at 35 miles an hour. So I’m a little bit facetious there, because I also completely agree with you.

There are some very hard things that I don’t have a solution for. I think if we do two things: number one, if we set up what the safety performance indicators look like and what the expectations are there, then we’ll know whether someone succeeded or not, because I think it would be great if someone could.

But secondly, in the meantime, because I agree with you it’s gonna take a while, maybe I disagree on orders of magnitude, but it’ll take a long time to get some of those things, we can also come up with deployment configurations that reduce the risk of getting those things wrong.

And so, I don’t like to talk about any of our particular customers, but some of our customers are working on more limited operational design domains. So where you have maybe a truck convoy, and the automated truck is following the human-driven truck. And I’m not saying that’s better or worse than anyone else, just to be clear, but that’s a good example, because then a lot of the uncertainties can get handled a different way.

That’s not just technical. There’s still a huge amount of risk there that needs to be mitigated, but I think there are some interesting solutions to get started.

Fred: And then we do a section called the Tao of Fred, and I’m not sure if there’s any real good entry point for that, but what about right now? Anthony, introduce it.

Mike Wagner: There you go. You’ve now entered the Tao of Fred.

Fred: I’m gonna keep it really short this time. I just wanna posit a few observations to you guys. I think that there are a lot of safety issues that are absolutely unique to AVs when you get up to a highly automated level, issues that are simply not apparent, that you can never drive through or really appreciate, unless you look at the full context of the automated driving.

So I got a list of ’em here; I’ll just read through it fairly quickly. The establishment and implementation of an ethical framework for algorithm development; we’ve talked about that. I don’t think that’s really necessary at level two or below, because you’ve got a human being at the controls who can compensate for whatever shortcomings there might be, but at higher levels that’s absolutely essential.

And establishment and implementation of safe harbor conditions at all points within the ODD. Some people have talked about minimal risk conditions, and we’re changing that to mitigated risk conditions and all that; I don’t even know what that means. But somehow there’s gotta be a safe harbor at every point in the ODD, so that if something unanticipated happens, you’re not gonna sacrifice the people involved.

Anthony: Wait, I gotta jump in there. How does that work? Because I think about that a lot since we’ve talked about this, when I’m driving on two-lane highways where there’s no median, there’s no shoulder, there’s nothing. What do you imagine happens in that case? It’s not gonna be like the Tesla crossing the Bay Bridge where it just hits its brakes.

Like, how does something like that work in a scenario where there’s no obvious exit?

Mike Wagner: Oh, yeah. So, I’m not gonna pretend to say that there’s one answer to that. Again, it does depend on what kind of mission, what kind of operations you’re trying to accomplish there.

Whether you’re taking some sort of signaling approach and saying, hey everybody, allow me to pull over; whether you go from some sort of more advanced convoy to something that’s much more just a software-driven linkage between two vehicles. It does depend. I agree that it’s a huge challenge.

I also think, not to play too much startup CEO and get back to the messaging I like, but running an insurance company, we’re gonna be able to look at a bunch of these different statistics. And I think some of the answers to what a good safe harbor or minimal or mitigated risk condition is come down to how things work in the real world.

Engineers can predict these different things, but we’re gonna be able to take a look at ’em, and we hope to feed that back. And I don’t wanna say...

Ben Lewis: Sorry, go ahead. A somewhat flippant add-on: I think of these scenarios as, two heads are better than one.

Particularly if those two heads are CMU-educated computer engineers at one of these AV developers. The way I think about it: if we look at a whole cohort of human drivers navigating that scenario, we’d have some people who navigate it quite well.

And we’d have some people who navigate it pretty poorly. And that’s, I guess, where we’re coming from insuring human drivers: unfortunately, you’re going to get all of that in the mix of your book of business. With the AV companies, we can work to understand how an organization approaches that problem.

And I do generally have some pretty good confidence in these organizations full of smart people who are putting a lot of thought into how you do that well. As opposed to me looking at everybody out on the road, getting everything from the worst to the best, and then having to absorb whatever comes out of that.

Fred: Oh, that’s good news. One of the other issues I’ve thought a lot about is command authentication. Say you’ve got three teenagers in the car, and the only thing you need to do to control the car is to point to the map, right? Because I know there are cars that do that. So you’ve got somebody who wants to get to school because they’ve got an exam, and you’ve got somebody else who wants to go surfing because they’d rather skip the exam.

If you have voice-activated controls, how do you know who the right person controlling the car is? How do you make sure that the person controlling the car is in fact the person who should be controlling the car? These are issues that I think are very important and not well addressed.

Mike Wagner: I agree with their importance, and with the fact that they haven’t been publicly addressed much. I think it’s interesting, as I’ve taken my journey as someone interested specifically in the robotics and autonomy technology, to keep identifying situations that come down to product design and product-like thinking rather than just technical thinking.

Because, yeah, I think of your question as a fascinating technical challenge: how can I tell Michael’s voice from Fred’s voice? That’s cool, let’s think about it. That’s an important piece of the puzzle. But also there are just ways of building the device, and kinds of UX approaches, that help with that.

Or, again, Edge Case is starting with trucking, and there the challenges, I’m not gonna say they’re gone, but they’re certainly different. So we need to think about the solutions in a context-dependent way.

Ben Lewis: I look forward to a solution to that because I have the Amazon Alexa system, and whenever I tell it to raise the temperature, my girlfriend typically tells it to lower the temperature again.

So if it could just prioritize my answer at all times, I’d appreciate that.

Mike Wagner: Or say somebody goes on the radio, or a Spotify song or whatever is like, all vehicles stop now. That’s an issue too. This is not easy.

Fred: All right. A lot of stuff can happen.

Rather than go through the whole list I’ve got, I’ll try to hit a couple of high points. I talked earlier about my belief that you’re unable to drive through a level two and come up to a safe solution for level three. And one part of that is failure mode, effects, and criticality analysis.

That’s a technical term that military people are familiar with, but it does basically what it says: how do you analyze all of the failures that are possible, to assure human safety? For anything that you’re trying to promote from level two to level three or four, you’ve gotta do an FMECA on those automatic functions, so that you can make sure they’re gonna be safe when, in fact, you’re pulling the human back out of the system. Is that a problem that’s recognized, or is that something that is already being addressed?

Mike Wagner: Yeah, no, that’s a pretty important problem. Getting to your underlying theme, which I think is really interesting and important, of driving through, of getting from level two to level four: it’s certainly very different. You have different safety goals, different risk thresholds. We’re going from a system that’s doing great if it’s preventing nine out of ten collisions to one that is ridiculously unsafe if it is preventing nine out of ten collisions.

That’s one way to think about going from level two to level four. But there are some technical approaches that I think can be taken. So if you have a level four system, it’s going to have in its stack, for instance, a perception function, and you will probably have a perception function in your level two system as well.

It might even be that the basic design of those things is similar; maybe the level four system has a whole bunch more of them, and many diverse versions of them. But I think there are some useful things you can do with what’s called reprocessing, where you take a bunch of data that you’ve logged before, you feed it back into new versions of, say, a sensing algorithm or a perception algorithm, and you calculate statistics on an ever-growing corpus of data. If done right, and to be clear, there are a bunch of ways of doing it wrong, that is a powerful technique for building up true evidence of some level of risk and accuracy.

But reprocessing doesn’t work so well for control algorithms, because there you can’t just test them open loop. You have to have a closed-loop simulation, or some sort of test that actually closes the loop. And there, minor changes make all the earlier data invalid.
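A hedged sketch of that reprocessing idea: replay the same logged frames through each new perception build and accumulate detection statistics over the growing corpus. It works precisely because the evaluation is open loop; the log format and detector interface here are invented for illustration.

```python
# Open-loop reprocessing: logged data stays valid for every new build,
# because perception output doesn't change what was recorded.
def reprocess(detector, logged_frames: list) -> dict:
    tp = fn = fp = 0
    for frame in logged_frames:
        detected = detector(frame["sensor_data"])      # set of object ids
        labeled = set(frame["ground_truth_objects"])   # human-labeled truth
        tp += len(detected & labeled)
        fn += len(labeled - detected)
        fp += len(detected - labeled)
    recall = tp / (tp + fn) if tp + fn else 1.0
    return {"frames": len(logged_frames), "recall": recall,
            "false_positives": fp}

# The limit Mike calls out: a planner or controller changes the vehicle's
# trajectory, which invalidates the logged sensor stream, so control
# software needs closed-loop simulation instead of this kind of replay.
log = [{"sensor_data": None, "ground_truth_objects": ["ped_1", "car_2"]}]
print(reprocess(lambda _: {"ped_1"}, log))  # recall 0.5: one object missed
```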

So, if I can give free advice to the listeners of this podcast, I would say: think about your system that way. Think about the pieces that you can continuously validate in an open-loop kind of way, and build a validation campaign that takes advantage of that. And for the pieces that can’t be, don’t fool yourself into thinking you can still do it.

Come up with a legitimate campaign there. Or maybe not even a validation campaign: think about formal methods and mathematical methods to prove out your controllers and planners, because that might be a more efficient, and also accepted, way of getting to an appropriate level of safety.

Fred: Thanks.

And for our listeners, level four, just to remind you, is a vehicle that is able to operate under its own control for extended periods of time and does not require human intervention in the event of a fault or an unforeseen circumstance. That’s the model that we’re looking at. Whereas level two is the car that you’re driving today, with some automatic features that can help you drive, but not intended to be automatically operated for an extended period of time.

So the final point I’ve got is: there’s gotta be some way for automated vehicles to respond to police commands and to exchange information after a crash, a certain amount of which is inevitable. But every police interaction with a vehicle is novel. It’s unplanned. It involves circumstances that the vehicle has never seen before, so I think it’s very difficult to train a neural system to respond to these necessarily spontaneous and unforeseen circumstances.

Any observations on that?

Mike Wagner: We’re just starting to see real data on that. And so, pun maybe intended, the jury’s out on how well that’s all gonna work. There’s a lot of information that’s meant to be provided to first responders about how to deal with the system.

If you dig into that a bit, it tends to be, like, how to cut the car apart and not shock yourself, or what the switches do. If industry organizations wanted to look at the roadmap of the next important things to work on, and I’m looking at an AVSC or an SAE here, defining those things to a high level of detail would, I think, be great.

They’ve done a lot of work on how to do testing with safety drivers; that’s an awesome standard. I think the same kind of thing would be a great next step.

Anthony: And when my AV gets pulled over for bad driving, from an insurance perspective, who’s getting the ticket?

Mike Wagner: Really, the insurance provider?

Ben Lewis: I won’t be paying out any tickets. But I think today the trouble is that it’s pretty murky, right? Again, the level of insight that makes its way back to a stakeholder like an insurance company is pretty limited.

And so the granularity there, around which driver was in the hot seat at any particular time, is not fully realized today. And that’s why we’re interested in breaking things down by mile: looking at each mile and each instance and who was the driver at that point in time, so that we can properly recognize where the risk is coming from, where it’s stemming from.

Is it a human, if there’s a safety operator in the vehicle and they have a role in what’s playing out, or is it the automated driver? It’s an interesting question. On the ticket side, I guess it gets back to the regulatory frameworks that are out there, and the fact that there’s still some development that needs to happen there.

Anthony: Okay. And so lastly, my car’s lease is up in April. Can I have an AV then, and will you guys insure it for me? I’ll be drunk the entire time. It’s not an 18-wheeler, unfortunately.

Mike Wagner: No, I don’t know. I never like to turn away business, so let’s talk after the show. How about that?

Anthony: Okay, that sounds good.

Fred: So how can people best reach you, Ben and Mike?

Mike Wagner: You can go to our website at ecr.ai. That’s echo charlie romeo dot ai. You can also email info at that address. And we have a LinkedIn page as well.

Fred: Great, thanks. I’m sure people will be interested. I hope they are.

Mike Wagner: Yeah, it’s been busy.

It’s certainly a fascinating time. 2023, 2024 is when a lot of our customers are gonna start to really go to market at bigger and bigger scale. So we’re happy to be on this podcast talking about this, and hopefully encouraging folks to think about it as well, because we’re gonna need some answers here pretty soon.

So we really appreciate you guys being interested in this. Thanks for having us on.

Michael: Yeah, it’s a super interesting and fascinating subject for us to discuss. We wish you the best, and thanks for joining us today.

Fred: Yeah, great to have you on, really good insights. Appreciate it.
