Published: December 14, 2020
Bassam Abdelghani, M. Sc., Product Engineer Automated Driving & Software Solutions, dSPACE GmbH
Since the dawn of motorized mobility, safety has been at the center of human attention. Early concerns date as far back as 1908, addressing the risks of motor vehicles on public roads. The dominant concern back then was driver behavior. In other words: How should the driver behave to ensure the safety of vulnerable road users, and, more importantly, which regulations must they comply with?
Moving forward a full century, we are faced with the exact same questions that challenged the early automotive industry.
The only difference lies in the nature of the driver: The human driver is replaced by a high-performance computer. Nevertheless, in both cases, the driver is trained (i.e., is learning) and tested for road approval (i.e., to get the driver’s license) to ensure an adequate response in various traffic scenarios.
Figure 1: A satire postcard from 1908 on driver behavior (Source: National Museum of American History).
While these questions are solved for human drivers, the same cannot be said for self-driving vehicles. Especially as we go higher in the autonomy level (SAE level 3+), we are faced with more and more open points with regard to how we can guarantee the vehicle behavior on the road with high confidence.
Automotive players are free to train their driving algorithms as they see fit, as long as they pass the legislative bar (once it is available). The training process itself is no easy task. To reliably capture and understand the environment around the vehicle, i.e., environment perception, a vast amount of data must be recorded to allow for a complete surround view of the environment. This data is then used to train, weight, and test the neural networks involved (DNNs, CNNs, etc.). How vast is this data lake, you ask?
According to RAND’s estimates, to ensure fewer fatalities than human drivers, i.e., fewer than 0.68 fatalities per 100 million km, an autonomous vehicle has to drive 443 million kilometers without a fatality. Even after driving all these kilometers, the vehicle could theoretically still pose a risk on the road, as the confidence level of this argument is limited to 95%. Every additional percentage point would require millions of extra kilometers.
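The order of magnitude of this figure can be reproduced with a standard zero-failure confidence bound. A minimal sketch, assuming the 0.68 fatalities per 100 million km benchmark and the 95% confidence level quoted above (the exact RAND methodology may differ in detail):

```python
import math

# Assumed inputs from the estimate quoted above:
# a human-driver benchmark of 0.68 fatalities per 100 million km
# and a 95% confidence level for the statistical argument.
fatality_rate = 0.68 / 100e6   # fatalities per km
confidence = 0.95

# Zero-failure bound: if a fleet drives n km without a fatality,
# a true rate >= fatality_rate is ruled out at the given confidence
# when n = -ln(1 - C) / rate.
required_km = -math.log(1 - confidence) / fatality_rate
print(f"{required_km / 1e6:.0f} million km")  # ≈ 441 million km
```

This lands very close to the quoted 443 million km; the small difference stems from rounding in the published figures.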
Driving all these kilometers is practically impossible. This is where simulation comes into play: It helps reduce the size of the required test fleet and makes it possible to simulate critical traffic scenarios. Nevertheless, if we assume that a minimum of 5% of the kilometers are driven in the real world by a vehicle with a relatively simple sensor architecture of 4 - 6 radars, 1 - 5 lidars, 6 - 12 cameras, 8 - 16 ultrasonic sensors, plus further motion, GNSS, and IMU sensors, the volume of recorded data could exceed 11 exabytes, assuming the sensors generate a 40 Gbit/s data stream while the vehicle travels at an average speed of 40 km/h.
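A quick back-of-the-envelope check, using only the assumptions stated above (5% of the 443 million km driven in the real world, 40 Gbit/s sensor output, 40 km/h average speed), confirms this order of magnitude:

```python
# Rough check of the exabyte figure from the stated assumptions.
total_km = 443e6                    # required test distance (RAND estimate)
real_world_km = 0.05 * total_km     # 5% driven in the real world
hours = real_world_km / 40          # at 40 km/h average speed
bits = hours * 3600 * 40e9          # 40 Gbit/s sensor output
exabytes = bits / 8 / 1e18
print(f"{exabytes:.1f} EB")
```

This comes to roughly 10 EB; the quoted "exceed 11 exabytes" presumably assumes slightly different parameters, but the magnitude is the same.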
Figure 2: The data-driven development and validation approach.
11 exabytes for only 5% of real-world drives - an unimaginable amount of data. Yet these are only estimates. They give us an understanding of the challenge, but they do not offer an answer. At most, they address the first question, driver training, but not the second one, certification, as a clear legislative bar that dictates when an autonomous vehicle can be released on public roads still has to be defined.
The UN’s proposal for the approval of vehicles with automated lane keeping systems (ALKS) is a good start. It not only answers some of the legislative questions, it also sheds light on the number of simulation scenarios against which automated vehicles - at this stage more likely ADAS vehicles - must be tested to get approved. However, while it is a step in the right direction, additional regulations must be put in place to move up the autonomy levels towards levels 4 and 5.
Renowned car manufacturers also contribute to this autonomous driving (AD) safety quest. Safety First For Automated Driving (SFFAD) provides a comprehensive view on automated driving safety. All in all, these guidelines and regulations are a must, as they pave the way towards comprehensive legislative, design, and test approaches to increase confidence in AD safety.
A common aspect shared by all these safety approaches is the emphasis on using a multitude of test methodologies; a single testing approach is not enough. A combination of in-vehicle testing, closed-loop HIL testing, Data Replay (DR), and scalable SIL testing provides greater confidence in the ADAS/AD system. I recommend taking a look at my colleagues’ articles on closed-loop HIL and data-driven development to gather more pieces of the overall safety picture. All things considered, one test methodology is of special interest, as it merges two industries (IT and automotive) to overcome its challenges: Data Replay. Therefore, we will focus on Data Replay in this blog post.
Strictly speaking, the terms SIL and HIL do not apply to software- and hardware-based Data Replay (i.e., reprocessing or resimulation), as there is no feedback loop in this test methodology - it is a purely open-loop test. We will nevertheless continue to refer to SIL-DR and HIL-DR, as no alternative naming convention is widespread, especially in the IT industry.
Figure 3: Applicability analysis of the test and validation methods according to safety first for automated driving.
Why take this additional testing step at all? What is its added value? Are the other tests not enough? The answer to these questions becomes clearer once we take a more thorough look at the different testing methods.
The need for Data Replay arises from the nature of the software components to be validated, i.e., the environment perception and sensor fusion components. These components are hungry for data, and for the most part they will be fed with synthetically generated data from high-fidelity simulators. Nevertheless, there is no way around using real-world data. While simulations can create critical traffic situations that are hard to reproduce in real life, the fidelity of simulation environments is always questionable, as comprehensively modeling all real-world factors is hard to achieve. Human eyes are easily deceived by adequate visuals; perception components are not. Any systematic deviation in the synthetic camera input could greatly degrade the quality of the functions once they are deployed in the real world.
Simulation scenarios pose another inherent challenge. For simulation, an engineer must be able to envision a traffic scenario and model it. As a result, the complexity of real-world scenarios is greatly reduced and, as a consequence, partly neglected. For all these reasons, real-world data is required to balance out these deficiencies and cover as many as possible of the different traffic scenarios within the operational design domain of the developed algorithm.
Simply put, a HIL DR station has the singular task of emulating the real world in the laboratory. Considering the previous example, where the vehicle’s sensors generate 40 Gbit/s, the Data Replay HIL station must stream these 40 Gbit/s around the clock while making sure that the device under test (DUT) cannot detect that it is not installed in a vehicle traveling on the road. The challenge lies in the nature of the streamed bits: Each bit is part of a data stream, and these streams are heterogeneous. Camera frames differ from radar and lidar data and, more generally, from SOME/IP packets. Nevertheless, all of the data must be synchronized with millisecond to microsecond accuracy to ensure the right testing conditions.
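Conceptually, this means multiplexing many pre-sorted sensor recordings into one strictly timestamp-ordered replay schedule. A minimal, purely illustrative sketch (the stream names and tuple layout are assumptions, not a dSPACE format):

```python
import heapq

# Each hypothetical stream is a pre-sorted list of
# (timestamp_us, sensor_id, payload) tuples.
camera = [(0, "cam0", b"frame0"), (33_333, "cam0", b"frame1")]
radar  = [(0, "radar0", b"scan0"), (50_000, "radar0", b"scan1")]
lidar  = [(10_000, "lidar0", b"sweep0")]

def replay(*streams):
    """Yield packets from all streams in global timestamp order."""
    # heapq.merge interleaves the sorted streams lazily, without
    # loading everything into memory at once.
    for ts_us, sensor, payload in heapq.merge(*streams):
        # A real HIL station would wait until ts_us before emitting
        # the packet on the matching physical interface.
        yield ts_us, sensor

schedule = list(replay(camera, radar, lidar))
print(schedule)
```

The hard part a real station solves is not the ordering itself but holding this order at 40 Gbit/s with microsecond-level timing on physical sensor interfaces.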
Figure 4: Data Replay overview.
Different variants of Data Replay testing exist, depending on customer requirements. Some of our customers want to test their security stack with Data Replay. To test these features, the HIL station must be capable of adapting the recorded data on the fly, in real time, with the relevant security keys. Other customers ask for sensor data manipulation and fault insertion in the data streams to generate a multitude of test cases from a known number of original recordings. We at dSPACE offer Data Replay solutions that overcome all of these challenges by means of modular system architectures.
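The fault-insertion idea can be sketched in a few lines: derive new test cases from one original recording by corrupting selected packets before replay. The packet layout and bit-flip fault model here are hypothetical illustrations, not a description of the dSPACE implementation:

```python
import copy

# Hypothetical recorded packet: timestamp, sensor ID, raw payload bytes.
packet = {"ts_us": 1_000, "sensor": "cam0", "payload": bytearray(b"\x10\x20\x30")}

def flip_bit(pkt, byte_idx, bit_idx):
    """Return a copy of pkt with one payload bit inverted (bit-flip fault)."""
    faulty = copy.deepcopy(pkt)
    faulty["payload"][byte_idx] ^= 1 << bit_idx
    return faulty

# One original recording yields many variants by sweeping fault positions.
corrupted = flip_bit(packet, 0, 0)
print(packet["payload"][0], corrupted["payload"][0])  # 16 17
```

Sweeping byte and bit positions (or swapping in other fault models, such as dropped or delayed packets) multiplies the test cases obtainable from a single drive.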
Figure 5: Data Replay workflow with an overlay of dSPACE products.
Figure 6: Different variants to connect DR HIL stations with customer’s data lakes.
While the requirements imposed on a single Data Replay HIL station are quite complex, as previously mentioned, a single station is not enough to fulfill ADAS/AD customer requirements. As time to market gains importance, test campaigns must be completed within ever shorter time frames.
The answer lies in the scalability of HIL stations and in bringing these scalable HIL stations close to the measurement data. To shorten time to market and continuously validate the latest software version, thousands of kilometers must be replayed overnight. Clusters of HIL stations are combined to facilitate this. Maintaining a robust data pipeline between the HIL clusters and the data lake, regardless of whether the data lake is on-premises or in a public or hybrid cloud, is of utmost importance for the efficiency and cost of the validation process.
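A rough sizing calculation shows why clusters are unavoidable. The numbers below are illustrative assumptions, not dSPACE specifications:

```python
# Illustrative overnight replay campaign (assumed numbers):
km_per_night = 50_000      # kilometers of recordings to replay overnight
avg_speed_kmh = 40         # recordings replay in real time at recording speed
night_hours = 10           # available replay window per night

hours_needed = km_per_night / avg_speed_kmh     # total replay hours
stations = -(-hours_needed // night_hours)      # ceiling division
print(f"{stations:.0f} stations")               # 125 stations
```

Even a modest 50,000 km nightly target already calls for a three-digit number of stations running in parallel, which is why cluster orchestration and data-lake connectivity become first-class concerns.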
Given that customers have different preferences, HIL stations and clusters must integrate flexibly with all backbone network variations - in other words, HIL stations must be infrastructure-agnostic. Some customers are open to the idea of storing their data in a public cloud; others have reservations, raising questions about data security and conflicts of interest. These latter customers tend to operate their own data centers, whether fully on-premises or through colocation data center providers. The third option customers evaluate is the hybrid cloud, where they operate their own data centers up to a certain storage and computational capacity to cover their average loads. Once this limit is exceeded, public cloud resources are utilized. This is especially beneficial in overload situations, such as final test campaigns directly prior to the start of production.
We believe that all three approaches - fully on-premises, fully public cloud, and hybrid cloud - are here to stay, and we support our customers’ choice by offering technical solutions that overcome data latency and efficiency challenges, as well as a complete partner ecosystem with all major IT players, enabling a faster time to market. Nonetheless, certain factors affect the choice. The connection latency and network bandwidth between the data lake and the HIL clusters dictate whether extra storage capacity is required. Each HIL station supports a certain data buffer, but this buffer is not intended to compensate for network fluctuations. Whether the data needs only minimal buffering in the HIL station or additional storage capacity is required, dSPACE offers suitable architectures and solutions for each challenge.
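The trade-off between link bandwidth and local buffering can be illustrated with assumed numbers (again, not dSPACE specifications):

```python
# Illustrative check: can an assumed data-lake link sustain real-time
# replay, and how much local buffer bridges a worst-case stall?
replay_rate_gbps = 40        # DUT consumes sensor data at this rate
link_rate_gbps = 50          # assumed sustained data-lake link
outage_s = 30                # assumed worst-case complete stall

sustainable = link_rate_gbps >= replay_rate_gbps
# Buffer needed to keep the DUT fed during a complete stall:
buffer_gbytes = replay_rate_gbps * outage_s / 8
print(sustainable, f"{buffer_gbytes:.0f} GB buffer")  # True 150 GB buffer
```

A 30-second stall at 40 Gbit/s already demands 150 GB of local buffer, which shows why longer or frequent network fluctuations quickly push the architecture towards dedicated intermediate storage rather than larger in-station buffers.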
Figure 7: Scalable Data Replay in the cloud.
Making the case for a safe autonomous vehicle is an ongoing challenge that requires a cooperative effort between the automotive industry and legislators. Data Replay is at the center of the AD safety argumentation because it provides a high level of confidence in autonomous vehicles. However, Data Replay testing comes with challenges at the level of the individual HIL station, in the scalability of HIL clusters, and in efficient cloud-backbone connectivity. dSPACE helps you overcome these challenges easily and efficiently.