
Achieving Safer Autonomy with ISO 26262 and Simulation-Based Testing

Published: September 19, 2018

Jace Allen, Lead Technical Specialist - Simulation and Test Systems, dSPACE Inc.

One of the most critical steps to achieving full autonomy is ensuring the functional safety of the vehicle’s systems for automated driving. In an “eyes off, hands off” scenario, there is no tolerance for error. But when it comes to autonomous operation, how will safety requirements be validated?

Testing at the vehicle level alone is not enough to ensure safety, but the ISO 26262 functional safety standard might be a viable option.

By complying with the ISO 26262 V-cycle development process, automotive manufacturers and suppliers can lay the groundwork for developing safety systems for autonomous functions. To achieve full compliance with ISO 26262, it is necessary to test and validate the software and hardware in a systematic manner that includes planning, specifying, executing, evaluating, and documenting tests of requirements.

Figure 1: Components of an ISO 26262-compliant validation process.

Simulation is Critical

The role of embedded software is paramount for an autonomous vehicle, as it controls how the vehicle senses the environment. On-board computers collect real-time data, which is used by the autonomous vehicle to make smart decisions, supply immediate feedback to passengers, and minimize risks.

The ISO 26262 standard stipulates that simulation plays a critical role in validating system behavior and recommends simulation at all levels. The advantage of simulation is that tests are reproducible, and that it allows testing beyond performance/endurance limits and in dangerous situations.

ISO 26262 proposes model-in-the-loop (MIL), software-in-the-loop (SIL), and hardware-in-the-loop (HIL) simulation for conducting software safety requirements verification. All of these simulation processes (MIL, SIL, HIL) can be applied toward the common goal of generating autonomous vehicle requirements (see Figure 2).

Figure 2: ISO 26262 recommends MIL, SIL, HIL testing and simulation for Software Unit (Component) Testing (6-9), Software Integration and Testing (6-10), Verification of Software Safety Requirements (6-11), and Item Integration and Testing (4-8).

To satisfy software safety requirements verification, ISO 26262 recommends:

Software Unit (Component) Testing (6-9): For testing single software components, the standard highly recommends the following test methods: requirements-based testing, interface tests, fault injection tests, and performance testing.

Software Integration & Testing (6-10): The same test methods (requirements-based testing, interface tests, fault injection tests and performance testing) are recommended for application software and basic software integrated in subsystems, as well as the integration of application software for distributed functions.

Verification of Software Safety Requirements (6-11): These tests, which represent the last milestone in software development, have to be executed on the target hardware to verify that the software is operating correctly. Therefore, an HIL simulator is absolutely necessary. Using HIL simulation will be even more important with the second edition of ISO 26262, which highly recommends using HIL simulation for even the highest automotive safety integrity level (ASIL) classifications.

Item Integration and Testing (4-8): ISO 26262 does not provide a clear definition of the concept of the system and offers the freedom to use various project variants to apply the required testing methods. The automotive industry is aiming to virtualize more and more steps and increase the amount of simulation, so that the maturity of the individual components in the integration test is as high as possible and the integration test can be successfully completed as early as possible.
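
Fault injection, one of the highly recommended test methods above, can be illustrated with a small sketch. Assuming a hypothetical wheel-speed signal and a simple plausibility monitor (all names below are invented for illustration, not a dSPACE or ISO artifact), a fault wrapper lets a test deliberately corrupt the signal and then check that the monitor detects the corruption:

```python
# A minimal fault injection sketch (all names are illustrative).
def wheel_speed_kmh(t):
    """Hypothetical healthy sensor signal consumed by the software under test."""
    return 50.0 + 10.0 * (t % 3)

def inject_fault(signal, mode):
    """Wrap a signal source with a simple fault model."""
    def faulty(t):
        value = signal(t)
        if mode == "stuck":
            return 0.0            # sensor frozen at zero
        if mode == "offset":
            return value + 25.0   # constant bias
        if mode == "dropout":
            return None           # no data on the bus
        return value              # no fault injected
    return faulty

def plausibility_check(value, prev):
    """Monitor under test: flag missing or implausible readings."""
    if value is None:
        return "FAULT: signal lost"
    if prev is not None and abs(value - prev) > 20.0:
        return "FAULT: implausible jump"
    return "OK"
```

A test case then pairs each fault mode with the monitor verdict it expects, which maps directly onto requirements-based testing of the diagnostic software.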

For every software tool used in a safety-critical project, the required level of confidence in that tool has to be classified. For some highly safety-critical tools, qualification is necessary. Certification of the software tools can support this qualification process.

Safety of the Intended Function (SOTIF)

A special work group under ISO is set to release a specification called SOTIF (Safety of the Intended Function). The specification is intended to provide safety guidance on autonomous vehicle functions and will include the following:

  • Details on advanced concepts of the autonomous driving (AD) architecture
  • How to evaluate SOTIF hazards that are different from ISO 26262 hazards
  • How to identify and evaluate scenarios and trigger events
  • How to reduce SOTIF related risks
  • How to verify and validate SOTIF related risks
  • The criteria to meet before releasing an autonomous vehicle

Watch for the SOTIF standard in late 2018.

Defining Requirements for Verification and Validation

Jace Allen, Business Development Manager – Simulation, Test and EEDM at dSPACE Inc., is confident that at some point, companies involved with development processes for autonomous systems will be audited for conformance to ISO 26262 or a similar widely-accepted standard. But currently, the focus is on establishing requirements.

“What are all the requirements of an autonomous system?” Allen said. “No one knows right now. That’s why the development of a verification and validation process is huge. Current in-vehicle testing is all about finding the critical test scenarios, and thereby, defining those needed scenario requirements.”

When you consider the massive number of tests required to validate the functionality of an autonomous vehicle, it is unrealistic to accomplish all of them on the real road. This enormous undertaking can, however, be achieved using a combination of:

  1. Virtual simulation testing (i.e., MIL, SIL, HIL)
  2. Test benches enhanced by simulation
  3. On-the-road field testing

Standard simulation techniques may not prove sufficient for the amount of testing required, given the limited simulation and test resources available. In this context, it is critical to use robust statistical testing strategies to expand the coverage achieved for validation. This can involve integrated probabilistic testing strategies and even advanced AI algorithms to manage the testing process and help ensure greater coverage of the areas needed to reduce unreasonable risk.
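
As a minimal illustration of such a probabilistic strategy, the sketch below samples scenario parameters at random and estimates a failure rate over many simulated runs. The parameter names, ranges, and pass/fail rule are invented for illustration; a real setup would call into a MIL/SIL simulation instead of the toy `run_test`:

```python
import random

# Hypothetical scenario parameters and ranges (assumptions for illustration).
PARAMETER_RANGES = {
    "ego_speed_kmh":  (30.0, 130.0),
    "cut_in_gap_m":   (5.0, 60.0),
    "rain_intensity": (0.0, 1.0),
}

def sample_scenario(rng):
    """Draw one scenario variant by sampling each parameter uniformly."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAMETER_RANGES.items()}

def run_test(scenario):
    """Stand-in for one simulation run; returns True if the safety goal held.
    The pass/fail rule here is a toy placeholder."""
    return scenario["cut_in_gap_m"] > 0.25 * scenario["ego_speed_kmh"]

def estimate_failure_rate(n_runs, seed=42):
    """Monte Carlo estimate of how often the safety goal is violated."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n_runs) if not run_test(sample_scenario(rng)))
    return failures / n_runs
```

Seeding the random generator keeps such statistical campaigns reproducible, which matters for the documented, repeatable testing that ISO 26262 calls for.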

Sensing the Environment

For many OEMs, a good starting point for generating requirements is in the driver’s seat. Engineers are sending out vehicles equipped with sensors and data recorders to capture real-world situations (see Figure 3).

The data that is gathered on a field test is brought back to the lab, where it is downloaded to a simulation system. Using the data collected from one scenario and automation tools, an engineer can play out thousands of different variations to understand many different possibilities. The SOTIF specification, still under development, will provide guidelines for such testing.

Figure 3: To validate functions for autonomous driving via sensor simulation, data is generated from multiple sources (e.g. cameras, lidar, radar, ultrasound, V2X, GNSS, etc.) to create specific test scenarios.

Residual Risks

In a world full of uncertainty, what is the likelihood that something will go wrong with autonomous vehicles? And if something does go wrong, what would be the consequences? To understand the impact of residual risks, you have to first identify risks and then evaluate these risks to determine a risk management plan of action.

Risk can be grouped into two categories: known risks and unknown risks.

An acceptable level of system safety requires the avoidance of unreasonable risk caused by every hazard, including limitations of the controlled system. To determine an acceptable level, an evaluation has to be performed of the residual risks, both known and unknown.

Known risks can be evaluated using requirements-based testing. But evaluating unknown risks requires a suitable combination of testing approaches (i.e., requirements-based testing and scenario-based testing) together with stochastic techniques.

Verifying Known Risks

A typical process for validating known risks involves the following steps:

  1. Requirements and test goals are reviewed.
  2. Test specifications are generated.
  3. Test cases are designed – including parameters (i.e., preconditions, data setup, inputs, expected outcomes and actual outcomes).
  4. Tests are executed and documented.
  5. Test results and test coverage are verified.
  6. Any issues uncovered during the testing process are resolved.
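
The steps above can be sketched as a small data structure that ties each test case back to its requirement and records expected versus actual outcomes. This is a rough sketch under assumed names; the class and fields are invented for illustration, not a dSPACE or ISO artifact:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """One requirements-based test case with traceability to its requirement."""
    requirement_id: str   # ID of the safety requirement being verified
    inputs: dict          # stimulus applied to the system under test
    expected: object      # expected outcome from the specification
    actual: object = None
    status: str = "not run"

    def execute(self, system_under_test):
        """Run the test and record pass/fail against the expected outcome."""
        self.actual = system_under_test(**self.inputs)
        self.status = "pass" if self.actual == self.expected else "fail"
        return self.status

def requirement_coverage(test_cases, requirement_ids):
    """Fraction of requirements covered by at least one passing test."""
    covered = {tc.requirement_id for tc in test_cases if tc.status == "pass"}
    return len(covered & set(requirement_ids)) / len(requirement_ids)
```

Keeping the requirement ID on every test case is what makes step 5 (verifying test coverage) a simple query rather than a manual audit.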

It is important to repeat tests and manage the process from MIL to SIL and HIL with full traceability. With regard to ISO 26262, it is especially important to have a structured, highly reliable test process, particularly for creating and designing test cases.

Verifying Unknown Risks

When it comes to validating the unknown risks, there is a fundamental difference. These tests have to be based largely on probabilities. Requirements-based testing is still carried out, but it is complemented with scenario-based testing.

With scenario-based testing, probable scenarios are defined, validated, and verified using automated testing. Scenarios are replications of possible situations that could occur in the real world (e.g., a vehicle merging from a two-lane roadway down to a single lane in heavy traffic), and they typically involve the interaction of external elements with a system (e.g., heavy rain impeding the visibility of a forward-looking camera).

“You can’t build or program a million scenarios manually,” said Allen. “You have to use automation tools to create those scenarios. And then you have to run those scenarios through an intelligently managed and automated testing process.”

Modeling Test Scenarios

To create a test scenario, a scene has to be replicated virtually using high-fidelity models and simulation tools. A test scenario is implemented in the simulated environment using model elements such as the vehicle; its environment sensors (radar, lidar, GPS, HD maps, etc.); the road and road traffic; and various traffic participants (pedestrians, bicyclists, signs) with their expected behaviors. Once a test scenario has been modeled, the real benefit is that it can be translated into endless different test cases by managing parameter variations.
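
As a sketch of what such parameter variation looks like, the snippet below permutes a few illustrative parameters of one modeled scenario into concrete test cases. The scenario fields and value lists are assumptions for illustration, not a specific tool's format:

```python
from itertools import product

# One modeled base scenario plus illustrative parameter variations.
base_scenario = {"maneuver": "lane_merge", "lanes": 2}

variations = {
    "traffic_density": ["low", "medium", "high"],
    "weather": ["clear", "rain", "fog"],
    "ego_speed_kmh": [50, 80, 110],
}

def expand_scenario(base, variations):
    """Permute every combination of parameter values into a concrete test case."""
    names = list(variations)
    for values in product(*(variations[n] for n in names)):
        case = dict(base)
        case.update(zip(names, values))
        yield case

test_cases = list(expand_scenario(base_scenario, variations))
# one modeled scenario expands into 3 * 3 * 3 = 27 concrete test cases
```

Even this tiny example shows the combinatorial leverage: three parameters with three values each already yield 27 test cases from a single modeled scene.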

Engineers who need to simulate many different test scenarios do not have to specify each one manually. Many scenario descriptions can be imported using resources and standards such as OpenDRIVE, OpenSCENARIO, and the Open Simulation Interface (OSI), among others. Engineers can also use their own sensor data captured during on-road vehicle testing. The test scenarios thus defined can then be used to validate various autonomous driving algorithms.
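
A rough sketch of such an import is shown below, using a generic hand-written XML scenario snippet. The element and attribute names here are invented for illustration and do not follow the actual OpenSCENARIO schema:

```python
import xml.etree.ElementTree as ET

# Hand-written scenario snippet; element and attribute names are illustrative only.
SCENARIO_XML = """
<Scenario name="lane_merge">
  <Actor name="ego" type="car" speed_kmh="80"/>
  <Actor name="truck1" type="truck" speed_kmh="60"/>
  <Environment weather="rain"/>
</Scenario>
"""

def load_scenario(xml_text):
    """Parse a scenario description into a plain dictionary."""
    root = ET.fromstring(xml_text)
    return {
        "name": root.get("name"),
        "actors": [dict(actor.attrib) for actor in root.findall("Actor")],
        "weather": root.find("Environment").get("weather"),
    }
```

The point of standardized formats is exactly this: once scenarios live in a common machine-readable form, they can be loaded, permuted, and replayed by tools from different vendors.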

Validating Autonomous Driving Functions

If a fault or safety risk is identified, ISO 26262 requires that the potential risk be evaluated to define a top-level safety goal. Subsequent parts of ISO 26262 provide requirements and guidance to avoid and control the random hardware and systematic software faults that could violate the safety goal. ISO 26262 further defines a standardized approach that allows iteration from unit to function to system integration testing.

To validate the autonomous system function algorithms, scenario-based testing is typically performed throughout the development process. Testing methods in the validation process include:

  1. PC-Based Simulation using Model-in-the-Loop (MIL) and Software-in-the-Loop (SIL)
  2. Hardware-in-the-loop (HIL) Simulation
  3. Test Benches
  4. Field Tests/Real Test Drives

To support these test methods, dSPACE offers a complete tool chain, including models, in-vehicle development and data acquisition platforms, a PC-based simulation environment, HIL test systems, and mechanical test benches, as well as support for entire test plans, including on-road testing, through a data and test management solution. The dSPACE suite of real-time simulation models supports a wide spectrum of simulations. With dSPACE Automotive Simulation Models (ASM), engineers can create entire virtual vehicles, road networks, traffic maneuvers, traffic objects, sensor models, vehicle dynamics, and more. With standardized interfaces, the properties of ASM models can be readily adapted to individual projects. ASM also provides open interfaces for importing scenario information and automating the permutation of scenarios. These models, together with additional hardware and software tools, are then used to test various autonomous driving systems with the methods described above.

PC-Based Simulation

Testing can begin at the very early development stage using PC-based MIL/SIL simulation in the safety of a lab. Using models (e.g., function models, bus systems, vehicle models) and virtual electronic control units (V-ECUs), engineers can test individual vehicle functions, vehicle systems, and even entire virtual vehicles independent of the specific simulation hardware.

The dSPACE VEOS software-based simulation platform enables engineers to perform a broad range of simulations on a PC system. With VEOS, tests can be executed faster than in real time, enabling high test throughput. Furthermore, by clustering PCs together, a large number of simulations can be run in parallel. The PC cluster is controlled by one central unit that schedules the execution of simulation jobs and test cases.
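
The scheduling idea can be sketched as follows. Here a thread pool on one machine stands in for the PC cluster, and `run_simulation_job` is a placeholder for launching an actual simulation; none of these names correspond to the VEOS API:

```python
from concurrent.futures import ThreadPoolExecutor

def run_simulation_job(job_id):
    """Placeholder for one simulation run; a real job would launch the
    simulator with a scenario and return its verdict."""
    verdict = "pass" if job_id % 7 != 0 else "fail"   # toy pass/fail rule
    return job_id, verdict

def schedule_jobs(job_ids, max_workers=4):
    """Central scheduler: farm simulation jobs out to a pool of workers."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(pool.map(run_simulation_job, job_ids))
```

The central scheduler collects every job's verdict into one result set, which is what makes parallel execution compatible with the documented, traceable test process the standard requires.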

One critical aspect of this type of MIL/SIL testing is that VEOS provides the same open standard interfaces used for HIL testing. All of the same test assets and tools (scenarios, models, tests, etc.) can therefore be reused, which provides the necessary synergy given the daunting amount of testing that must be performed.

Using PC-based SIL simulation, engineers can begin testing at the very early development stage, even before HIL tests or field tests/real test drives have been conducted. Virtual ECU functionality also makes it possible to simulate and test before the ECU hardware is released, or when other components of an integrated system are still missing.

HIL Testing

HIL simulation testing is an excellent approach for validating the data fusion process and algorithms used by the autonomous test vehicle’s real sensor systems (e.g., cameras, radar, lidar, ultrasound). HIL simulation testing is deterministic, repeatable, cost-effective, and can be conducted 24/7. Furthermore, HIL test results provide the basis for reaching quality goals and verifying software safety requirements.

HIL testing is also necessary to validate the system in true real time, with all software and hardware aspects of the system considered. This is a core principle of ISO 26262 and an established standard methodology for ECU software validation in the automotive industry.

Simulation requires creating models that replicate the behavior of the sensors (e.g., cameras, radar, lidar, ultrasound) on the autonomous vehicle. Data obtained from V2X, GNSS, and maps must also be modeled to provide a complete picture of the vehicle’s surroundings and its ability to recognize the environment. The sensor systems are structured as function blocks, and sensing is based on detection points that are determined by physical measurements, optics, wave propagation, etc.

In the HIL environment, it is also possible to use real sensors in the test setup. These sensors then have to be stimulated appropriately to produce data that replicates their real-world behavior in the depicted test scenario. dSPACE has developed special solutions to stimulate sensors in test setups through a specialized interface, the Environment Sensor Interface (ESI). The ESI units can stimulate multiple types and numbers of sensors synchronously to maintain the real-time integrity of the entire test environment.

With the dSPACE tool chain, engineers therefore have various options for test setups of varying capabilities to match their testing goals, as shown in the following diagram.

Figure 4: Simulation system structure for closed-loop testing.

Test Benches

When it is necessary to include mechanical components of the actuation systems in the HIL simulation testing environment, a test bench-HIL integration offers a good solution. The test bench supports testing of real mechanical limits, provides hardware quality assurance, and enables the inclusion of integrated sensors and actuators.

Some common examples of applications that can be tested on a test bench include: radar sensors, integrated sensors, electric power steering, electric braking and rear-wheel steering.

Requirements for a mechanical test bench include:

  • Entire HIL environment (software and hardware)
  • High-performance, low latency HIL coupling with the test bench control systems
  • Application-specific load actuator dynamics
  • Electrical setup with low noise connection of measurement signals

A test bench for sensor stimulation requires smart sensors with a bus interface (e.g., CAN, LIN), position sensors (e.g., shift lever), angle sensors (e.g., steering), and the ability to inject relevant physical behaviors (e.g., yaw rate, acceleration), including required degrees of freedom.

Field Tests/Real Test Drives

While virtual simulation testing greatly speeds up testing and provides better coverage of the multitude of test scenarios that autonomous vehicles will have to operate in, field tests/real test drives are still a must to evaluate real driving conditions and to verify test results obtained through virtual testing.

After the functionality of sensor systems and algorithms has been proven in a simulated environment, test engineers go out in the field and perform real test drives to evaluate autonomous vehicle performance in actual weather conditions (e.g., rain, snow, ice, fog, night driving, heat, cold), to perform final tuning, and to validate how the sensors handle real-life situations.

Managing Test Data and Process

For each test method and every traffic scenario that has to be carried out to validate autonomous driving functions, a massive number of tests and a massive amount of data will be generated. Ultimately, this information has to be managed.

From a data management point of view, the main tasks and challenges are to:

  • Store and manage the data (ensure availability)
  • Connect and link the data and store relationships (create traceability)
  • Provide APIs and open interfaces for test and process automation
  • Enable asset reuse to quickly test and validate new ECU software versions using previously run scenarios and tests
  • Share the data and results within your team and organization
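
The linking task in particular can be pictured as a small traceability store. This is a bare sketch with invented function names; SYNECT provides this kind of capability out of the box:

```python
# Minimal traceability store: (requirement_id, test_id, result_id) triples.
links = []

def link(requirement_id, test_id, result_id):
    """Record that a test execution provides evidence for a requirement."""
    links.append((requirement_id, test_id, result_id))

def trace_requirement(requirement_id):
    """All test executions that provide evidence for one requirement."""
    return [(t, r) for (req, t, r) in links if req == requirement_id]

def untested(requirement_ids):
    """Requirements with no linked test evidence, i.e., coverage gaps."""
    covered = {req for (req, _, _) in links}
    return [r for r in requirement_ids if r not in covered]
```

Once such links exist, both audit questions become queries: "what evidence backs this requirement?" and "which requirements have no evidence at all?"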

To meet the ISO 26262 requirements for documenting tests of safety requirements, dSPACE offers a data management technology called SYNECT. SYNECT provides out-of-the-box support for managing test cases, executions, and test results. The software is flexible and customizable, and can be used to manage test scenarios and other assets related to the testing process, such as models and parameters. It also provides the open interfaces and standards needed to automate the process and workflow, including many COTS test tools, and to connect application and product lifecycle management (ALM/PLM) systems to the model-based development process.


The validation of functions for autonomous driving is dependent on the definition of requirements. The ISO 26262 standard advocates the use of model-in-the-loop (MIL), software-in-the-loop (SIL) and hardware-in-the-loop (HIL) simulation methods as part of the process for verifying software safety requirements.

It is impossible to test every conceivable scenario of driving autonomously on the actual road. The vast majority of test scenarios will have to be modeled and then tested using a combination of MIL, SIL and HIL simulation, test benches, and field tests/real test drives.

To help our customers design a process that is compliant with ISO 26262 and SOTIF, dSPACE offers consulting services and an associated tool chain. The tool chain is based on open standards and application programming interfaces (APIs), allows for automation of test generation and testing processes, and provides a distinct advantage for adopting advanced probabilistic testing methods. It also provides the data management capabilities that are vital for handling the vast amounts of scenarios and data produced by the sheer quantity of testing required to validate autonomous driving functions.