Developing and Testing Driver/In-Cabin Monitoring Systems (DMS/IMS)

  • Large library of commercial off-the-shelf (COTS) components for connecting common camera types
  • Integration of DMS-specific camera sensors such as NIR (near-infrared) cameras
  • Start developing your applications now to meet the DMS requirements of Euro NCAP (2023) and the type approval of new trucks and buses (2024)
  • Easily include Python-based AI algorithms for more robust solutions
  • Bring your DMS system to the market faster using one framework designed to develop and test multi-modal sensor applications

Task

Driver status monitoring (DSM) will be an integral part of the European New Car Assessment Programme (Euro NCAP) for passenger cars as of 2023. DSM systems will be expected to detect driver distraction, fatigue, and unresponsiveness. From 2024 on, a driver drowsiness and attention warning will be required for the type approval of new trucks and buses, as will the EU Advanced Driver Distraction Warning. For automated driving at level 3 and higher, the importance of driver monitoring use cases will increase further, with even more cameras and 60 GHz radar sensors installed in the cabin.

Challenge

To detect the driver's status, a DSM application can evaluate the output of different vehicle functions, such as lane keeping, steering movement analysis, or turn signal activation. However, the most valuable data is provided by in-cabin cameras placed at the front of the vehicle to observe the driver. To overcome adverse lighting conditions, specific NIR or NIR+RGB camera types are often used. A key issue is ensuring the robustness of the detection application. Various conditions have to be considered when testing the quality of detections: for example, the age and sex of the driver; eyewear, facial hair, and occlusion; and the driver's behavior, such as eating, talking, or laughing. A promising approach to overcoming these difficulties is the use of deep learning algorithms together with specific training data.
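To make such robustness testing systematic, the test conditions can be organized as a combinatorial matrix. The sketch below is purely illustrative; the condition axes and their values are assumptions, not taken from any standard or from a specific tool.

```python
from itertools import product

# Hypothetical condition axes for a DMS robustness test matrix;
# the categories and values are illustrative assumptions.
AGE_GROUPS = ["18-30", "31-50", "51+"]
SEX = ["female", "male"]
EYEWEAR = ["none", "glasses", "sunglasses"]
FACIAL = ["clean-shaven", "beard", "partially occluded"]
BEHAVIOR = ["neutral", "eating", "talking", "laughing"]

def build_test_matrix():
    """Enumerate every combination of driver conditions to test against."""
    return [
        {"age": a, "sex": s, "eyewear": e, "face": f, "behavior": b}
        for a, s, e, f, b in product(AGE_GROUPS, SEX, EYEWEAR, FACIAL, BEHAVIOR)
    ]

matrix = build_test_matrix()
print(len(matrix))  # 3 * 2 * 3 * 3 * 4 = 216 combinations
```

Each entry of the matrix can then be mapped to recorded or synthetic test sequences, so that coverage gaps in the validation data become visible.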

Solution

The multisensor development framework RTMaps offers a block-based approach that lets you easily integrate in-cabin sensors such as cameras or radar, along with the relevant vehicle buses, into your setup. You can drag a wide range of sensor types and bus components from the COTS library, connect them to your DMS application, and execute them with just a few clicks. Sensors that are not yet in the library can be requested or integrated yourself by using a documented API and implementation examples. RTMaps provides unique built-in capabilities that expose the resulting asynchronous input data streams of different types to your DMS algorithm in a time-correlated manner. This enables data fusion, a key prerequisite for efficient driver status detection.

The framework natively supports the Python scripting language, which is widely used for the development of AI algorithms, so you can quickly integrate deep learning functions to meet the challenging requirements for highly robust driver monitoring. Numerous built-in features also support validation tasks: for example, you can use data replay to feed an AI-driven DMS function with a host of real or realistic synthetic data to test its robustness, all within a single tool.
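The time-correlation step can be illustrated with a minimal sketch: pairing each camera frame with the nearest-in-time radar sample within a tolerance. This is a generic illustration of the concept, not the RTMaps API; all names, timestamps, and the tolerance value are assumptions.

```python
import bisect

def correlate(camera_stream, radar_stream, tolerance_s=0.05):
    """Pair each camera frame with the nearest-in-time radar sample.

    Both streams are lists of (timestamp_s, payload) tuples sorted by
    timestamp. Frames with no radar sample within tolerance are dropped.
    """
    radar_ts = [t for t, _ in radar_stream]
    pairs = []
    for t_cam, frame in camera_stream:
        i = bisect.bisect_left(radar_ts, t_cam)
        # Candidates: the radar samples just before and just after t_cam.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(radar_ts)]
        if not candidates:
            continue
        j = min(candidates, key=lambda k: abs(radar_ts[k] - t_cam))
        if abs(radar_ts[j] - t_cam) <= tolerance_s:
            pairs.append((frame, radar_stream[j][1]))
    return pairs

cam = [(0.000, "frame0"), (0.033, "frame1"), (0.120, "frame2")]
rad = [(0.010, "r0"), (0.045, "r1"), (0.200, "r2")]
print(correlate(cam, rad))  # [('frame0', 'r0'), ('frame1', 'r1')]
```

In this example, "frame2" is discarded because no radar sample lies within the 50 ms tolerance; a fusion framework applies the same kind of matching before handing synchronized samples to the DMS algorithm.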
