Surmounting Steep Challenges in Training and Validating Automated Driving Functions

Published: 02.11.2020

Nicolas du Lac
CEO
Intempora – a dSPACE company

The technical challenges associated with advanced driver assistance systems (ADAS) and autonomous driving (AD) functions are steep. Engineering teams have to tackle an enormous number of development tasks with an extremely high level of confidence.

For an autonomous system to be fully functional, multiple features have to be integrated, such as automatic emergency braking, blind spot detection, and lane keeping assist. Integrating this highly complex cross-functionality requires numerous software tasks to run in parallel within the vehicle, along with communication channels across multiple systems and subsystems.

To perceive its environment, different types of sensors (e.g., camera, radar, lidar) are positioned all around the autonomous vehicle, and these sensors have to be interconnected with electronic control units (ECUs). A powerful electronic and computing infrastructure is required to collect and process all of the data generated by the sensors. On top of this, highly sophisticated software has to run in parallel to ensure that tasks are carried out correctly and in a coordinated manner.

As applications become more and more complex, the sensor setup also becomes more complex, making it necessary to test many different configurations. Additionally, multiple algorithms have to be integrated, and the entire vehicle infrastructure has to demonstrate proper functionality across a wide spectrum of diverse scenarios.

This entire networked infrastructure not only has to be developed, it also has to be thoroughly tested to validate that ADAS and AD functions perform optimally and safely.
 

AI Algorithms Need Proper Training to Carry Out Critical Tasks

One of the key components that make ADAS and AD functions possible is artificial intelligence (AI) algorithms. AI algorithms are used to carry out multiple tasks associated with perception, accurate positioning, path planning and decision making, and driver monitoring, and they have to be trained before they can be deployed in the field.

AI algorithms are usually trained using “supervised learning” methods. The algorithm model is fed with input data (e.g., raw sensor data from cameras, lidars, and radars for perception and positioning models, or a reconstructed environment description for path planning and decision-making models), along with reference data (also known as ground truth). The reference information teaches the algorithm to recognize and classify the input data (e.g., this is an approaching vehicle, this is a pedestrian, this is a lane marking, this is a traffic sign).
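
As an illustration, here is a minimal supervised-learning sketch in Python/PyTorch. The tiny model, the class list, and the random batch are hypothetical placeholders; a real perception pipeline would use a full-scale network and labeled sensor recordings.

```python
# Minimal supervised-learning sketch (hypothetical model and data).
# Input data (camera frames) and reference data (ground truth labels)
# are paired, and the model learns to classify the input.
import torch
import torch.nn as nn

CLASSES = ["vehicle", "pedestrian", "lane_marking", "traffic_sign"]

# Stand-in for a real perception backbone
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, len(CLASSES)),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def training_step(frames: torch.Tensor, ground_truth: torch.Tensor) -> float:
    """One step: predict on the input data, compare to ground truth, update."""
    optimizer.zero_grad()
    predictions = model(frames)                # forward pass on raw sensor data
    loss = loss_fn(predictions, ground_truth)  # error vs. reference labels
    loss.backward()                            # backpropagate the error
    optimizer.step()                           # adjust the model weights
    return loss.item()

# One batch of 8 synthetic 64x64 RGB frames with random labels
frames = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, len(CLASSES), (8,))
print(f"training loss: {training_step(frames, labels):.3f}")
```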

AI algorithms have to be trained to cope with a virtually unlimited number of diverse traffic scenarios – from simple to complex, hazardous, and even unpredictable situations. Training and validating safe ADAS/AD functions requires billions of testing miles, but it is simply impossible to predict every conceivable traffic scenario that could be encountered in the real world.

With multiple sensors on board, autonomous vehicles generate huge amounts of raw data – up to 50 gigabits per second – adding up to tens or hundreds of petabytes (PB) per data acquisition campaign. This data has to be collected, stored, and managed for post-processing. A big data infrastructure with appropriate network and bandwidth capability, such as a public or private cloud, is well suited for this purpose.
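
A quick back-of-the-envelope calculation shows how fast these volumes accumulate. Only the 50 Gbit/s peak rate comes from the text above; the fleet size, recording hours, and campaign length are purely illustrative assumptions.

```python
# Back-of-the-envelope data volume estimate. Assumes, for illustration
# only, that the 50 Gbit/s peak rate is sustained while recording.
PEAK_RATE_GBIT_S = 50

gb_per_second = PEAK_RATE_GBIT_S / 8             # 6.25 GB/s
tb_per_hour = gb_per_second * 3600 / 1000        # 22.5 TB per hour
tb_per_day = tb_per_hour * 8                     # 180 TB per 8-hour shift
pb_per_campaign = tb_per_day * 20 * 50 / 1000    # 20 cars, 50 drive days

print(f"{tb_per_hour:.1f} TB/hour, {tb_per_day:.0f} TB/day per vehicle")
print(f"~{pb_per_campaign:.0f} PB for a 20-vehicle, 50-day campaign")
```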

High Volumes of Sensor Data Are Key for Training and Testing

With so much raw sensor data being collected, one may wonder whether all of this data is really needed to train an AI algorithm. In short: yes, it is. The multi-sensor playback of mass quantities of data is necessary to carry out AI algorithm training, and the more data that is provided, the better the algorithm will perform. To train an algorithm for autonomous driving, Nvidia estimates that between 200 (conservative) and 600 (less conservative) PB of total raw data are necessary (source: https://devblogs.nvidia.com/training-self-driving-vehicles-challenge-scale/).

Data fuels autonomous driving development. To train and validate perception and deep learning (neural network) algorithms, engineers need three essentials:

  • Large datasets: The volume of raw data needed to train AI for self-driving cars is considerable.
  • Varied datasets: Algorithms must be trained and enhanced in different environments and complex situations.
  • Real datasets: Simulated data is useful for reproducing complex and dangerous situations rapidly, but it is not sufficient to cover the unpredictability of the real world.

Additionally, high quantities of raw sensor data are necessary to validate the algorithms under test. To validate an algorithm, it has to be fed raw sensor data in playback or simulation mode. In parallel, the ground truth labels can be played back in a synchronized manner. Though the algorithm does not use this reference data, its results can be compared against it to check whether the algorithm performs well in all situations.
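
This comparison can be as simple as matching the algorithm's detections against the replayed ground truth boxes, frame by frame. Here is a minimal sketch; the replay pairs and the detector are hypothetical placeholders, not a specific tool API.

```python
# Sketch of comparing an algorithm's output to synchronized ground truth
# during playback. The (frame, labels) pairs and the detector stand in
# for the replay tool and the system under test.
from typing import Callable, Iterable, List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    """Intersection over union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def evaluate(pairs: Iterable[Tuple[object, List[Box]]],
             detect: Callable[[object], List[Box]],
             thr: float = 0.5) -> float:
    """Fraction of ground truth boxes the algorithm found (recall)."""
    found = total = 0
    for sensor_frame, gt_boxes in pairs:       # synchronized pairs
        detections = detect(sensor_frame)      # algorithm under test
        for gt in gt_boxes:
            total += 1
            if any(iou(gt, d) >= thr for d in detections):
                found += 1
    return found / total if total else 1.0

# Example: the detector finds one of two ground truth objects
pairs = [("frame-0", [(0, 0, 10, 10), (20, 20, 30, 30)])]
detector = lambda frame: [(1, 1, 10, 10)]  # hypothetical detections
print(f"recall: {evaluate(pairs, detector):.2f}")  # 0.50
```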

When a system under test is operating at SAE Level 3, 4, or 5 autonomy, where responsibility is transferred from the driver to the system, the system has to perform flawlessly.

This graphic depicts a typical workflow for executing an ADAS/AD algorithm test using either real or simulated data management.

A Dual Approach to Testing

So what is the best approach for training AI algorithms when you have to deal with so many scenarios and so much data? A dual approach may offer the best of both worlds.

One possibility is to use simulators for software-in-the-loop (SIL) and hardware-in-the-loop (HIL) testing. The AI algorithm under test is fed with synthetic data generated by the simulators (e.g., environment simulation, sensor simulation, vehicle dynamics). This allows you to quickly accumulate virtual driving miles and test many situations under precise control. Other advantages of simulators are that they provide a means to test dangerous situations in the safety of the lab, and they save considerable time on lengthy tests.

However, while simulators offer many benefits, they are not sufficient on their own to train algorithms and ultimately develop and test ADAS/AD functions. Simulations are not always perfect in terms of sensor data realism, and reality always goes beyond what can be anticipated and modeled in simulators. Therefore, the virtual world needs to be bridged with the real world. This can be achieved by supplementing simulated scenarios with real test drives. Only real test drives produce real scenarios that deliver real sensor data. This data needs to be brought into the simulation process to ensure a proper level of realism.

This graphic shows the same workflow with an overlay of solutions from dSPACE, Intempora (a dSPACE Company), and UAI (a dSPACE Company) for each stage of the ADAS/AD development and testing process.

One Cohesive Tool Chain and Workflow Makes Life Easier

When dealing with multiple data sources, it is easier to have the same tooling and workflows. Whether you are training or testing AI algorithms in simulation mode, in data replay mode, or in real time during test drives on the road, one cohesive tool chain can make the testing process easier (a minimal interface sketch follows the list below). The tool chain should:

  • Easily interface with any kind of sensor
  • Be capable of executing any kind of algorithm
  • Enable the sharing of data between teams (duplicating data across sites is cost prohibitive)
  • Promote test efficiency (e.g., reduce the overall number of simulated or real test drives while ensuring good scenario coverage)
  • Incorporate high network bandwidth to transfer algorithms to a central server (e.g., the cloud)
  • Offer an optimal computing infrastructure to avoid bottlenecks
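
To illustrate the first two points, here is a minimal sketch of one such abstraction: a common source interface behind which simulation, data replay, and live capture can be swapped. The interface and class names are hypothetical, not the actual dSPACE/Intempora API.

```python
# Hypothetical common interface for sensor data sources, so the same
# algorithm and test harness run in simulation, replay, and live modes.
from abc import ABC, abstractmethod
from typing import Iterator, List

class SensorSource(ABC):
    """One contract for every data source the tool chain supports."""
    @abstractmethod
    def frames(self) -> Iterator[str]:
        """Yield sensor frames, regardless of their origin."""

class ReplaySource(SensorSource):
    """Replays frames from a recorded test drive (stubbed here)."""
    def __init__(self, recording: List[str]):
        self._recording = recording
    def frames(self):
        yield from self._recording

class SimulatedSource(SensorSource):
    """Generates synthetic frames for a scenario (stubbed here)."""
    def __init__(self, scenario: str, n_frames: int):
        self._scenario, self._n = scenario, n_frames
    def frames(self):
        for i in range(self._n):
            yield f"{self._scenario}-synthetic-frame-{i}"

def run_test(source: SensorSource, algorithm) -> None:
    """The harness never needs to know where the data comes from."""
    for frame in source.frames():
        algorithm(frame)

# Same algorithm, two interchangeable sources
run_test(SimulatedSource("cut-in", 3), print)
run_test(ReplaySource(["recorded-frame-0", "recorded-frame-1"]), print)
```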

dSPACE offers a seamless workflow and an end-to-end tool chain to meet all of your ADAS/AD development and testing needs. This solution is intuitive and unique, yet modular by design and open to certain third-party tools (e.g., data formats, middleware, post-processing tools). For more information, click here.

NOTE: Intempora has been providing advanced software solutions for real-time applications for the past 20 years. In July 2020, after a long partnership, Intempora officially joined dSPACE. Read the press release.
