The adoption of sophisticated technologies for autonomous vehicles, advanced driver assistance systems (ADAS) and electrification is accelerating the implementation of new software at an unprecedented pace. As the mobility industry shifts to the cloud-based era of software development, one core question remains: how do you ensure that safety-critical systems stay safe?
Experts from the software development field discussed the ever-growing challenges of safety-related software applications at an Automation Alley Tech Takeover event held Oct. 2, 2019, in Troy, Michigan. The panel discussion was led by Model Engineering Solutions (MES), dSPACE and kVA by UL.
Industry Standards and Best Practices Are the Building Blocks for Safe Automotive Software Development
MES CEO Dr. Heiko Dörr kicked off the discussion by talking about the importance of embracing industry standards and best practices. In the development of safety-critical systems for autonomous vehicles and other advanced technologies, he said the best place to start is by having an underlying development infrastructure in place that meets the requirements and recommendations of standards such as ISO 26262 and SOTIF, as well as general quality management principles (i.e. ISO 9001).
In developing a safety-critical system, Dörr said the first things engineers always want to know are what to do and how much to do. He said the answers to those questions can be found in the standards.
“Any embedded system that you design is going to have faults, errors and failures,” said Dörr. “The major challenge is that you are able to compensate for those failures by introducing countermeasures to prevent hazards from occurring.”
The overall goal of ISO 26262 is to safeguard E/E systems against hazards arising from systematic failures and random hardware failures. Dörr said one of the first activities that needs to take place is a hazard and risk assessment. Depending on the results of the assessment and the level of risk identified, the standard specifies requirements for design, development, verification and validation, production and documentation.
Safety Integrity Analysis
ISO 26262 defines a classification system known as the Automotive Safety Integrity Level (ASIL). The system establishes four classification levels for a potential hazard, ASIL A through ASIL D, with ASIL D carrying the most stringent safety requirements. The classification takes into account the severity, probability of exposure, and controllability of a hazardous scenario.
To help determine how a hazard should be classified, Dörr said the following questions should be considered:
- How probable is it for the hazard to occur (frequency/exposure)?
- If the hazard were to occur, how severe could the outcome be (severity)?
  - Is an injury unlikely?
  - What kind of injuries could result?
  - Are life-threatening injuries possible?
  - Have any fatalities been associated with the hazard?
  - Is historic data available?
- Can the hazard be avoided (can the situation be controlled)?
  - What preventative measures can be taken?
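The answers to these questions map onto the three rating scales ISO 26262 uses in its risk graph: severity (S1–S3), probability of exposure (E1–E4), and controllability (C1–C3). As a rough illustration, the standard's classification table can be encoded compactly by summing the three ratings; the sketch below is one such encoding, not the standard's own wording, and should be checked against the table in ISO 26262-3 before any real use.

```python
def determine_asil(severity: int, exposure: int, controllability: int) -> str:
    """Map S1-S3, E1-E4, C1-C3 ratings to QM or ASIL A-D.

    severity: 1-3, exposure: 1-4, controllability: 1-3.
    A sum-based lookup that reproduces the ISO 26262-3 risk graph.
    """
    if not (1 <= severity <= 3 and 1 <= exposure <= 4 and 1 <= controllability <= 3):
        raise ValueError("rating out of range")
    total = severity + exposure + controllability
    # Totals of 6 or less fall below ASIL A and are handled by
    # normal quality management (QM) rather than ISO 26262 measures.
    levels = {7: "ASIL A", 8: "ASIL B", 9: "ASIL C", 10: "ASIL D"}
    return levels.get(total, "QM")

# Example: a severe (S3), high-exposure (E4), hard-to-control (C3) hazard
print(determine_asil(3, 4, 3))  # ASIL D
print(determine_asil(1, 1, 1))  # QM
```

The higher the resulting ASIL, the more rigorous the design, verification and documentation requirements the standard imposes on that system element.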
After completing the hazard and risk assessment, system design activities need to ensure that faults are properly detected and can be controlled or mitigated at the system, hardware and software levels.
Developing Safety-Critical Software for Autonomous Vehicle Systems
Designing a safety-critical system is challenging in itself, but how do you design a system that no longer has a driver to fall back on for safety? And how do you prove that it is safe? dSPACE Inc. Business Development Manager Jace Allen talked about the complexities of validating an automated driving system.
He said that the SOTIF standard separates system functionality into safe and potentially hazardous behavior across known and unknown scenarios. The basis of ISO 26262 is to mitigate the potentially hazardous, known behavior with requirements-based testing and to establish system robustness with standard test processes.
The greatest challenge lies in how to test and validate “unknown” potential hazards (see figure 2, Area 3: “Black Swans”). While unknown scenarios are the most difficult to validate, Allen said the task can best be addressed by using scenario-based testing (ScBT) to massively expand the scope of test coverage and apply more advanced “smart”-testing techniques.
Scenario-Based Testing (ScBT)
An automated driving scenario, in itself, isn’t a test. It is a logical description of how things operate in the environment. When it becomes parameterized, however, it can be used for testing purposes by applying analysis algorithms. ScBT is about parameterizing driving scenarios into test cases for the purposes of further mitigating the risk when validating the behavior of an autonomous vehicle.
To perform ScBT, a driving scenario is either designed or captured from a source such as raw sensor data (camera, radar, lidar, etc.), a scenario library, or a third-party format (OpenScenario). A test case is extracted from the scenario using a data-driven development process. The test case is then run in a simulation environment to recreate the scenario, typically on a software-in-the-loop (SIL) or hardware-in-the-loop (HIL) test system, or with SIL in large-scale parallel execution in the cloud.
For each test case, the actors and sensors are run through the scenario, test data is captured and then analyzed to determine if the autonomous vehicle software is performing as it should and/or if the software needs to be modified.
One of the greatest advantages of ScBT is that once a test case is created, it can be tweaked (i.e., the position of an oncoming vehicle is changed, there is a change in the lighting, etc.). This saves a lot of time and effort compared with creating new driving scenarios from scratch. It also allows for stochastic approaches to simulation that can significantly increase coverage of the key test areas and mitigate potential risk.
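The parameterization step described above can be sketched in a few lines: a logical scenario becomes a template, and sweeping its parameters yields many concrete test cases. All names below (the scenario class, its fields, the generator function) are illustrative assumptions, not part of any dSPACE or OpenScenario API.

```python
# Minimal sketch of scenario-based test generation: one logical
# scenario, many concrete test cases produced by sweeping parameters.
from dataclasses import dataclass
from itertools import product


@dataclass(frozen=True)
class CutInScenario:
    """One concrete variant of a hypothetical 'vehicle cuts in ahead' scenario."""
    ego_speed_kmh: float
    cut_in_gap_m: float   # gap to the cutting-in vehicle at the start of the maneuver
    lighting: str         # e.g. "day", "dusk", "night"


def generate_test_cases(speeds, gaps, lightings):
    """Expand the parameter ranges into the full set of concrete test cases."""
    return [CutInScenario(s, g, l) for s, g, l in product(speeds, gaps, lightings)]


cases = generate_test_cases(
    speeds=[50.0, 80.0, 120.0],
    gaps=[10.0, 20.0, 40.0],
    lightings=["day", "night"],
)
print(len(cases))  # 3 * 3 * 2 = 18 concrete test cases from one logical scenario
```

Swapping the exhaustive sweep for random sampling over the same parameter ranges gives the stochastic variant mentioned above, which scales to much larger parameter spaces than a full grid.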
With so many conceivable driving scenarios to play out, Allen said it is important to establish test processes that are automated, that allow for reproducible tests, that can run in the cloud, and that have traceability of test processes and test assets. It is also critical to have test systems that utilize similar approaches, in order to maximize re-use and reduce the overall cost of validation.
“As different scenarios are developed, you may have to go back and run more test cases,” Allen said. “That’s why you need a flexible system with redundancy and traceability built into it. If there is a point of failure, then you have to be able to go back to the test case to reproduce it and solve it.”