Prototyping Complex Functions for Automated Driving in Real Traffic

  • Compact and robust prototyping system for in-vehicle use
  • MicroAutoBox Embedded SPU with various interfaces to sensors (camera, radar, lidar, etc.) for 360° redundant sensing. Up to six raw data cameras can be connected.
  • Six-core ARM®v8 CPU and an integrated NVIDIA® GPU with 256 cores for high-performance sensor data processing and deep learning (artificial intelligence)
  • Multi-GNSS receiver with integrated inertial measurement unit and dead reckoning. Optional Cat. 6 LTE module
  • MicroAutoBox II with real-time processing unit and safety monitoring mechanisms, such as a multistage watchdog, challenge-response, and memory monitoring
  • Multiple cascaded MicroAutoBox Embedded SPUs to easily extend data processing capabilities and increase the number of sensor interfaces.

Reliable environment recognition, precise localization, and well-coordinated vehicle-driver interaction are essential prerequisites for highly automated and autonomous driving. The large volumes of sensor and communication data are typically gathered in a central control unit, where they are preprocessed and classified using complex algorithms, e.g., from the field of artificial intelligence, and then merged into a unified environment model. This model is used in subsequent steps to determine the driving trajectory as well as the vehicle's longitudinal and lateral control in real time. The combination of MicroAutoBox II and Embedded SPU provides more than sufficient resources for this: with various sensor and communication interfaces, a high-performance CPU/GPU combination for perception and fusion algorithms, and a real-time unit for vehicle control and function monitoring, it is ideally suited for rapid prototyping of autonomous driving functions in the vehicle.
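The processing chain described above — per-sensor classification, fusion into a unified environment model, and trajectory determination — can be sketched schematically. The following Python fragment is a minimal, purely illustrative sketch; all class and function names are hypothetical and do not correspond to any dSPACE API or product interface.

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    """A classified object reported by one sensor (camera, radar, or lidar)."""
    sensor: str
    obj_class: str
    position: tuple   # (x, y) in vehicle coordinates, meters (x = ahead)
    confidence: float

@dataclass
class EnvironmentModel:
    """Unified model merged from all per-sensor detections."""
    objects: list = field(default_factory=list)

def fuse(detections):
    """Merge per-sensor detections into one environment model.
    Toy rule: keep the highest-confidence detection per object class;
    a real fusion stage would use spatial association and tracking."""
    best = {}
    for d in detections:
        if d.obj_class not in best or d.confidence > best[d.obj_class].confidence:
            best[d.obj_class] = d
    return EnvironmentModel(objects=list(best.values()))

def plan_trajectory(env):
    """Toy planner: brake if any fused object lies ahead within 10 m."""
    ahead = [o for o in env.objects if 0 < o.position[0] < 10.0]
    return "brake" if ahead else "cruise"

# Example: redundant camera/radar sightings of the same pedestrian.
detections = [
    Detection("camera", "pedestrian", (8.0, 1.5), 0.92),
    Detection("radar",  "pedestrian", (8.1, 1.4), 0.75),
    Detection("lidar",  "vehicle",   (35.0, 0.0), 0.88),
]
env = fuse(detections)
print(plan_trajectory(env))  # pedestrian 8 m ahead -> "brake"
```

In the real system, the fusion and perception stages would run on the Embedded SPU's CPU/GPU, while the control decision would execute on the MicroAutoBox II real-time unit under its safety-monitoring mechanisms.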
