Published: February 15, 2021
Dr. Patrik Morávek, Product Engineer Automated Driving & Software Solutions, dSPACE GmbH
Data logging in the automotive industry has been transitioning to a new form due to challenges arising from new E/E architectures, including ADAS/AD functions. In this and the following blog posts, we will give you some tips and recommendations on how to optimize data logging and make it more efficient by using the right tools and methods. Reducing redundancy is a hot topic that promises optimization with high gains, so we will start with it today.
In recent years, neural network-based algorithms have become an inherent part of functions for safety and comfort in vehicles. This is why software development in the automotive industry shifted from model-based development to data-driven development. The development and validation of neural network-based functions require a combination of virtual testing (to cover the broad variability) and real-world testing (for a reality check).
Creating a well-balanced dataset from real situations that might be encountered on the roads is crucial to ensure safe driving. The challenge is to determine what a correctly balanced dataset looks like and whether it is suitable for development and validation. This is a question that nobody can answer with certainty yet, and the only viable strategy is to continuously extend, tune, and refine the data. Since nobody can say which data is sufficient and still economical, extremely large data volumes have to be collected in the first place. “If I don't know what I really need, I had better collect everything and choose when I know better” has been the philosophy of most companies in the industry.
However, we are no longer at the beginning of the pursuit of autonomy; the industry has already gathered a lot of experience and knowledge. Recording and storing data continuously during the entire drive of a test vehicle is no longer required. Experience tells us that there are specific measures that can decrease the amount of data stored in a vehicle and in a data center, and thus save significant costs. Incorrect or incomplete data can be deleted straight away, as it will never be used. Furthermore, for building a balanced data set, the variability of the data adds much more value than pure volume. For example, it is obvious that sequentially recording static situations (e.g., standing at traffic lights, monotonous highway driving) will not add any value for training models or validating a function.
Figure 1: Marginal improvement of neural network-based function with growing training dataset
A certain level of AD function performance can be achieved with a relatively small data set. However, continuously improving performance and situation coverage requires ever more specific data from infrequently occurring situations. This means that an increasingly small share of the data provides a true benefit to the progress (Figure 1). During this phase of development, it makes even less sense to record all data continuously. Triggering and filtering the data to be recorded and stored becomes the main efficiency factor. The recording might be triggered manually, based on automatic situation analysis, or as a result of uncertainty in the function under test. The data logger must still actively receive the data from all sensors and interfaces, but the data is only written to permanent memory after a trigger occurs.
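The triggered-recording idea can be sketched in a few lines: frames are continuously buffered in RAM, and only once a trigger fires are they persisted, including the pre-trigger history. This is a minimal illustrative sketch, not the actual AUTERA/RTMaps implementation; the class and method names are hypothetical.

```python
from collections import deque

class TriggeredRecorder:
    """Sketch of trigger-based recording: receive everything,
    store permanently only after a trigger occurs."""

    def __init__(self, pre_trigger_frames=100):
        # Ring buffer keeps the most recent frames so that the moments
        # *before* the trigger event are preserved as well.
        self.buffer = deque(maxlen=pre_trigger_frames)
        self.recording = False
        self.stored = []  # stands in for permanent storage

    def on_frame(self, frame):
        if self.recording:
            self.stored.append(frame)   # persist post-trigger frames
        else:
            self.buffer.append(frame)   # keep only a rolling history

    def on_trigger(self):
        # Flush the pre-trigger history and start persisting new frames.
        self.stored.extend(self.buffer)
        self.buffer.clear()
        self.recording = True
```

The trigger callback could be wired to a manual button, a situation classifier, or an uncertainty signal from the function under test, as described above.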
Nowadays, data logging in ADAS/AD is largely about the reduction of unnecessary data. Most importantly, data reduction decreases the cost of data storage per vehicle as well as the operational costs (the storage does not have to be swapped as often). The data ingestion process also benefits, as less data must be transferred to the data center. This saves transmission time and gets more valuable data to the end users in less time. Finally, data processing and selection (both automatic and manual) take less time if only relevant data is available. All in all, data reduction shortens the data cycle and provides the data to the end users more quickly, which in turn accelerates the development progress.
Cameras are the greatest contributor to data volume. Camera data is often collected at high frame rates (automotive cameras support up to 60 fps) to capture highly dynamic scenes and allow for more precise tracking. The drawback is that high frame rates can lead to high redundancy, especially when neighboring frames in a sequence show virtually identical content. This happens very frequently if the frame rate is high but the scenery changes only slowly (no traffic, low ego vehicle speed). There are even situations where a frame rate as low as 1 fps is sufficient, so that over 90% of the recorded frames are redundant, e.g., standing at a red traffic light or following a truck on a highway.
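A quick back-of-envelope calculation shows where the "over 90%" figure comes from. Assuming (for illustration) a 30 fps camera and a scene whose content effectively changes only about once per second:

```python
# Redundancy estimate: fraction of frames that carry no new content.
fps = 30            # camera frame rate
useful_rate = 1     # frames/s with genuinely new content (assumption)

redundancy = 1 - useful_rate / fps
print(f"{redundancy:.1%} of frames are redundant")  # ~96.7%
```

At 60 fps under the same assumption, the redundant share rises to roughly 98%.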
Figure 2 illustrates two different image samples from the A2D2 dataset. Both show the same semantic content. Keeping both samples does not provide any benefit for neural network model training.
To a certain extent, the high redundancy of the collected data can be decreased by equidistant resampling of the data samples, e.g., only every second frame is recorded. This is often done, but it is a rather simplistic approach that does not adapt to the immediate situation around the car.
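Equidistant resampling is trivially simple, which is both its appeal and its weakness: it throws away frames at a fixed rate regardless of what is happening in the scene. A one-line sketch:

```python
def resample_equidistant(frames, keep_every=2):
    """Keep only every n-th frame: the simple, non-adaptive baseline.
    Drops the same fraction of frames whether the scene is static or dynamic."""
    return frames[::keep_every]

# e.g. resample_equidistant(list(range(10)), keep_every=2) -> [0, 2, 4, 6, 8]
```

The fixed stride is exactly what the content-based approach described next replaces with an adaptive decision.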
To improve the ratio between informative and less informative samples, the redundant frames are either not stored or simply tagged during data collection by a specific AI-based algorithm. The algorithm focuses on content-based image features rather than labels. Hence, it can be applied before any labeling in a fully automatic manner. Unlike other machine learning models, which depend on expensive labeled data, the redundancy reduction algorithm is trained in an unsupervised manner. Thus, the model does not suffer from any domain gap.
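The core of such a content-based approach can be sketched as follows: each frame is mapped to a feature vector (e.g., by an unsupervised image encoder), and a frame is tagged as redundant when its features are too similar to those of the last kept frame. This is a simplified illustration using cosine similarity; the actual dSPACE algorithm and its similarity measure are not disclosed here.

```python
import numpy as np

def cosine_similarity(a, b):
    # Similarity of two feature vectors, in [-1, 1].
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def tag_redundant(features, threshold=0.98):
    """Tag frames as redundant based on content features, not labels.
    `features` is a list of per-frame embedding vectors.
    Returns a list of booleans: True = redundant (drop or archive)."""
    tags = [False] * len(features)
    last_kept = 0
    for i in range(1, len(features)):
        if cosine_similarity(features[i], features[last_kept]) >= threshold:
            tags[i] = True      # too similar to the last kept frame
        else:
            last_kept = i       # new content: keep and compare against it
    return tags
```

Because the comparison always runs against the last *kept* frame rather than the immediate predecessor, slow drift in the scene still triggers a keep once enough change has accumulated.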
Reducing the number of less informative images in a sequence brings many advantages. On the one hand, users save storage space as well as computational costs for further preprocessing steps and labeling. On the other hand, redundancy reduction improves the quality of the training data for neural networks, since neural network-based algorithms tend to overfit on data that is too similar. Therefore, reducing redundancy leads to more balanced, diverse, and higher-quality training data sets.
Figure 3: AUTERA and RTMaps diagram with the redundancy reduction block
Data reduction can be applied directly in the vehicle as well as offline as a preprocessing and data refinement step. In the vehicle, the AUTERA data logger and the RTMaps logging software (Fig. 3) are the most suitable tools for implementing online redundancy reduction. They offer sufficient computing power, synchronized handling of the data streams, and a simple, flexible configuration interface. With AUTERA and RTMaps, the redundant data can either be eliminated from the recording or tagged and handled separately later in the data pipeline (e.g., moved to archive storage).
The implementation of the algorithm was tested on a highway, where the benefits are the most impactful. The video shows a possible visualization of the algorithm output. In this example, the data is reduced and only the non-redundant data is stored. During the sequence, the scene changes only slightly, and therefore the dropout rate is around 80%, as depicted by the gauge. The exact number of retained and removed frames is also shown at the bottom left.
Evidently, redundancy can be defined in different ways. Therefore, the similarity measure and the threshold for decision-making can be freely configured. This gives users full control: they can define how similar the data must be to be considered redundant and not useful. The effect of the threshold setting on the video sequence is shown for illustration purposes in the video below. The different settings result in keeping all frames, dropping approx. 75%, and dropping approx. 97% of the frames of the particular scene. It is obvious that despite the high data reduction, the sequence still contains the most informative frames and scene content.
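The threshold's effect on the dropout rate can be demonstrated with a small self-contained experiment. Here, slowly rotating unit vectors stand in for the feature embeddings of a slowly changing highway scene (a synthetic assumption, purely for illustration); sweeping the cosine-similarity threshold then yields dropout rates from 0% up to well over 90%.

```python
import numpy as np

def dropout_rate(features, threshold):
    """Fraction of frames tagged redundant for a given similarity threshold.
    A frame is dropped if it is too similar to the last *kept* frame."""
    kept = features[0]
    dropped = 0
    for f in features[1:]:
        sim = float(np.dot(f, kept) / (np.linalg.norm(f) * np.linalg.norm(kept)))
        if sim >= threshold:
            dropped += 1
        else:
            kept = f
    return dropped / len(features)

# Synthetic stand-in for a slowly changing scene: 100 feature vectors
# rotating gradually from 0 to 90 degrees.
angles = np.linspace(0.0, np.pi / 2, 100)
features = [np.array([np.cos(a), np.sin(a)]) for a in angles]

for t in (1.01, 0.999, 0.9):
    print(f"threshold={t}: {dropout_rate(features, t):.0%} dropped")
```

An unreachable threshold (here above 1) keeps every frame; lowering it drops an ever larger share of the sequence while the kept frames still trace the overall change in content.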
Content-based redundancy reduction is one of the advanced methods for efficiently reducing data volume. Thanks to efficient algorithms, only moderate processing power is required, and therefore, it can be applied directly in the vehicle. In addition, despite being AI-based, the method does not require large and expensive data sets for training. Because the method can be applied in the data logger, i.e., at the very beginning of the data pipeline, costs can be reduced drastically, including those for in-vehicle storage.
In this blog post, we have described the basic principles and provided some examples that show which reduction ratio can be achieved without losing relevant information. In the following blog posts, we will go into more detail on the effect of redundancy reduction in AI training and present other methods that advance data logging, such as situation-based triggering and filtering.