AURELION: New Version 22.1

Sensor-Realistic Simulation at the Next Level

The new version 22.1 of AURELION, the dSPACE solution for sensor-realistic simulation for developing and validating perception algorithms and driving functions, again significantly increases the level of detail of simulations and visualizations for virtual test drives. This is demonstrated by new and enhanced lighting and weather effects, expanded truck/trailer model support, and even more U.S. traffic signs. AURELION now supports the testing and training of artificial intelligence (AI) for autonomous driving with semantic segmentation functionalities that enable pixel-level annotation.

Highly Realistic Lighting Conditions with Rain and Fog

The realism of light, atmosphere, rain, and fog in simulations and visualizations has been significantly increased in AURELION 22.1.

It's Snowing ...

AURELION can do that, too: In the simulation, you can switch on snowfall, and the snow then interacts with different materials.

Extended Support for Trucks and Trailer/Semitrailer Models

To be able to test partially automated and highly automated trucks with trailers (incl. North American trucks) in virtual test drives, AURELION now offers extended support for such models. The new capabilities include:

  • Support for multibody systems with interactive sensor placement
  • New pre-built demos for ASM and AURELION, for example:
    • Truck with cabin suspension and semi-trailer
    • Truck with turntable trailer
  • Extensions of the AURELION 3-D library: Extensive collection of international tractors, trucks, trailers, and dolly trailers

Semantic Segmentation: Ground Truth at Pixel Level

The semantic segmentation of images included in AURELION provides pixel-level annotations to image regions. For artificial intelligence testing and training, especially for autonomous driving, this information can be used to help neural networks learn which patterns or textures belong to which semantic class of objects. Subsequently, such networks can be implemented for use cases as diverse as image generation, domain adaptation, lane detection, or drivable area detection.
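A pixel-level ground-truth label image can be consumed directly by tooling around the training pipeline. The sketch below is illustrative only: the class IDs, color map, and helper names are assumptions for the example, not AURELION's actual label scheme.

```python
import numpy as np

# Hypothetical class IDs for a semantic segmentation ground-truth image;
# the actual IDs used by a simulation tool will differ.
CLASS_ROAD, CLASS_LANE_MARKING, CLASS_VEHICLE, CLASS_SKY = 0, 1, 2, 3

COLOR_MAP = {
    CLASS_ROAD:         (128, 64, 128),
    CLASS_LANE_MARKING: (255, 255, 255),
    CLASS_VEHICLE:      (0, 0, 142),
    CLASS_SKY:          (70, 130, 180),
}

def colorize(label_img: np.ndarray) -> np.ndarray:
    """Turn an (H, W) array of class IDs into an (H, W, 3) RGB image."""
    rgb = np.zeros(label_img.shape + (3,), dtype=np.uint8)
    for cls, color in COLOR_MAP.items():
        rgb[label_img == cls] = color
    return rgb

def drivable_mask(label_img: np.ndarray) -> np.ndarray:
    """Boolean mask of pixels belonging to drivable classes."""
    return np.isin(label_img, [CLASS_ROAD, CLASS_LANE_MARKING])

# Tiny synthetic label image: top row sky, bottom row road and a vehicle.
labels = np.array([[CLASS_SKY, CLASS_SKY],
                   [CLASS_ROAD, CLASS_VEHICLE]])
```

The drivable-area mask shows how a pixel-wise class annotation translates directly into a training target for drivable area detection.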

Depth Image and Optical Flow

To display the depth image, the per-pixel distances to the objects are encoded as grayscale values.

While humans naturally perceive depth and distance, the output of a camera sensor for a computer consists exclusively of color information in a two-dimensional space. To improve the understanding of the environment, depth images are calculated that assign the respective distance to each pixel of the image. Different approaches exist to calculate the depth image. The simulation with AURELION provides the depth information as ground truth.
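The grayscale encoding mentioned above can be sketched in a few lines. This is a minimal illustration, not AURELION's actual encoding: the maximum range and the near-dark/far-bright mapping direction are assumptions chosen for the example.

```python
import numpy as np

def depth_to_grayscale(depth_m: np.ndarray, max_range_m: float = 100.0) -> np.ndarray:
    """Encode per-pixel distances (in meters) as an 8-bit grayscale image.

    Near objects map to dark values, far objects to bright ones;
    distances beyond max_range_m are clipped. Both the range and the
    mapping direction are illustrative choices, not a fixed standard.
    """
    clipped = np.clip(depth_m, 0.0, max_range_m)
    return (clipped / max_range_m * 255.0).astype(np.uint8)

# Synthetic depth map: distances of 0 m, 50 m, 100 m, and 250 m (clipped).
depth = np.array([[0.0, 50.0],
                  [100.0, 250.0]])
gray = depth_to_grayscale(depth)
```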

Another output mode for the camera sensor is optical flow. This enables the detection of motion at the pixel level, which makes it possible, for example, to detect and predict the movements of other road users. AURELION provides both the depth image and the optical flow as ground truth information. This information is ideal for testing and validating new algorithms and can also be used as input for perception algorithms.
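As a simple illustration of pixel-level motion detection, a ground-truth flow field can be thresholded to flag moving pixels. The (dx, dy) layout and the threshold value are assumptions for this sketch, not a documented AURELION format.

```python
import numpy as np

def motion_mask(flow: np.ndarray, threshold_px: float = 1.0) -> np.ndarray:
    """Flag pixels whose displacement magnitude exceeds a threshold.

    flow: (H, W, 2) array of per-pixel (dx, dy) displacements in pixels,
    as a ground-truth optical flow output might provide. The threshold
    is an illustrative tuning parameter.
    """
    magnitude = np.linalg.norm(flow, axis=-1)
    return magnitude > threshold_px

# Synthetic flow field: static scene except one pixel moving 3 px right.
flow = np.zeros((2, 2, 2))
flow[0, 1] = (3.0, 0.0)
moving = motion_mask(flow)
```

Grouping the flagged pixels over consecutive frames is one way such a mask could feed a downstream road-user tracking step.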

Extension of the U.S. Traffic Signs Library

The selection of U.S. traffic signs that can be used in the simulation has been expanded in AURELION 22.1.

Develop your driving functions with the help of AURELION sensor-realistic simulation – in hardware-in-the-loop (HIL) tests, software-in-the-loop (SIL) tests, and parallel validation in the cloud.
