AURELION 22.1, the new version of the dSPACE solution for sensor-realistic simulation used to develop and validate perception algorithms and driving functions, again significantly increases the level of detail of simulations and visualizations for virtual test drives. This is demonstrated by new and enhanced lighting and weather effects, extended truck/trailer model support, and an even larger selection of U.S. traffic signs. In addition, AURELION now supports the testing and training of artificial intelligence (AI) for autonomous driving with semantic segmentation functionality that enables pixel-level annotation.
In AURELION 22.1, the realism of simulations and visualizations of light, atmosphere, rain, and fog has been significantly increased.
AURELION can do that, too: In the simulation, you can switch on snowfall, and the snow then interacts with different materials.
To enable virtual test drives with partially automated and highly automated trucks with trailers, including North American truck models, AURELION now offers extended support for such vehicle combinations.
The semantic segmentation of images included in AURELION assigns pixel-level annotations to image regions. For testing and training artificial intelligence, especially for autonomous driving, this information helps neural networks learn which patterns or textures belong to which semantic class of objects. Such networks can then be applied to use cases as diverse as image generation, domain adaptation, lane detection, or drivable area detection.
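As a sketch of how such pixel-level annotations are typically consumed, a segmentation map delivered as a 2D array of class IDs can be split into per-class masks or reduced to statistics such as the drivable-area ratio. The class-ID mapping below is purely hypothetical for illustration; AURELION defines its own label conventions.

```python
import numpy as np

# Hypothetical class-ID convention for illustration only;
# the actual label IDs are defined by the simulation tool.
CLASS_NAMES = {0: "road", 1: "lane_marking", 2: "vehicle", 3: "pedestrian"}

def class_masks(seg_map: np.ndarray) -> dict:
    """Split a pixel-level segmentation map (H x W array of class IDs)
    into one boolean mask per semantic class present in the image."""
    return {CLASS_NAMES.get(c, f"class_{c}"): seg_map == c
            for c in np.unique(seg_map)}

def drivable_area_ratio(seg_map: np.ndarray, drivable_ids=(0, 1)) -> float:
    """Fraction of image pixels labeled as drivable surface
    (here assumed to be road and lane markings)."""
    return float(np.isin(seg_map, drivable_ids).mean())
```

Per-class masks like these are the standard training target for segmentation networks and a convenient ground-truth reference when evaluating a network's predictions.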
To display the depth image, the per-pixel distances to the objects are encoded as grayscale values.
While humans intuitively perceive depth and distance, the output of a camera sensor consists exclusively of color information in a two-dimensional space. To improve a computer's understanding of the environment, depth images are calculated that assign a distance to each pixel of the image. Different approaches exist for calculating the depth image; the simulation with AURELION provides the depth information directly as ground truth.
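The grayscale encoding mentioned above can be sketched as a simple normalization of per-pixel distances to 8-bit values. The maximum range and the near-dark/far-bright convention are assumptions for illustration, not AURELION's actual encoding.

```python
import numpy as np

def depth_to_grayscale(depth_m: np.ndarray, max_range_m: float = 100.0) -> np.ndarray:
    """Encode per-pixel distances (in meters) as 8-bit grayscale:
    near objects dark, far objects bright (one common convention;
    the inverse mapping is equally valid). Distances beyond
    max_range_m are clipped to the brightest value."""
    clipped = np.clip(depth_m, 0.0, max_range_m)
    return np.round(clipped / max_range_m * 255).astype(np.uint8)
```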
Another output mode of the camera sensor is optical flow, which enables the detection of motion at the pixel level. This makes it possible, for example, to detect and predict the movements of other road users. AURELION provides both the depth image and the optical flow as ground truth information. This information is ideal for testing and validating new algorithms, and it can also serve as input for perception algorithms.
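To illustrate how a dense optical-flow field supports motion prediction, the sketch below assumes the flow is given as an H x W x 2 array of per-pixel displacements (a common representation, not a specific AURELION format) and uses it to predict where selected pixels move in the next frame.

```python
import numpy as np

def predict_positions(points: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """Predict next-frame pixel positions from dense optical flow.

    points : N x 2 integer array of (row, col) pixel coordinates.
    flow   : H x W x 2 array of per-pixel displacements (drow, dcol).
    Returns the N x 2 array of displaced positions."""
    rows, cols = points[:, 0], points[:, 1]
    return points + flow[rows, cols]
```

Applied to the pixels covering another road user, this yields a one-step motion estimate that a tracker or trajectory predictor can build on.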
The selection of U.S. traffic signs that can be used in the simulation has been expanded in AURELION 22.1.
Develop your driving functions with the help of AURELION's sensor-realistic simulation – in hardware-in-the-loop (HIL) tests, software-in-the-loop (SIL) tests, and parallel validation in the cloud.